Group therapy for women with substance use disorders: results from the Women's Recovery Group Study.
BACKGROUND This Stage II trial builds on a Stage I trial comparing the single-gender Women's Recovery Group (WRG) to mixed-gender Group Drug Counseling (GDC) that demonstrated preliminary support for the WRG in treating women with substance use disorders. The Stage II trial aims were to (1) investigate effectiveness of the WRG relative to GDC in a sample of women heterogeneous with respect to substance of abuse and co-occurring psychiatric disorders, and (2) demonstrate the feasibility of implementing WRG in an open-enrollment group format at two sites. METHOD In this randomized clinical trial, participants were included if they were substance dependent and had used substances within the past 60 days (n=158). Women were randomized to WRG (n=52) or GDC (n=48); men were assigned to GDC (n=58). Substance use outcomes were assessed at months 1-6 and 9. RESULTS Women in both the WRG and GDC had reductions in mean number of substance use days during treatment (12.7 vs 13.7 day reductions for WRG and GDC, respectively) and 6 months post-treatment (10.3 vs 12.7 day reductions); however, there were no significant differences between groups. CONCLUSIONS The WRG demonstrated comparable effectiveness to standard mixed-gender treatment (i.e., GDC) and is feasibly delivered in an open-group format typical of community treatment. It provides a manual-based group therapy with women-focused content that can be implemented in a variety of clinical settings for women who are heterogeneous with respect to their substance of abuse, other co-occurring psychiatric disorders, and life-stage.
Neurocognitive start-up tools for symbolic number representations
Attaching meaning to arbitrary symbols (i.e. words) is a complex and lengthy process. In the case of numbers, it was previously suggested that this process is grounded on two early pre-verbal systems for numerical quantification: the approximate number system (ANS or 'analogue magnitude'), and the object tracking system (OTS or 'parallel individuation'), which children are equipped with before symbolic learning. Each system is based on dedicated neural circuits, characterized by specific computational limits, and each undergoes a separate developmental trajectory. Here, I review the available cognitive and neuroscientific data and argue that the available evidence is more consistent with a crucial role for the ANS, rather than for the OTS, in the acquisition of abstract numerical concepts that are uniquely human.
Trainable Sentence Planning for Complex Information Presentations in Spoken Dialog Systems
A challenging problem for spoken dialog systems is the design of utterance generation modules that are fast, flexible and general, yet produce high quality output in particular domains. A promising approach is trainable generation, which uses general-purpose linguistic knowledge automatically adapted to the application domain. This paper presents a trainable sentence planner for the MATCH dialog system. We show that trainable sentence planning can produce output comparable to that of MATCH’s template-based generator even for quite complex information presentations.
Dual strip weaving: interactive design of quad layouts using elastica strips
We introduce Dual Strip Weaving, a novel concept for the interactive design of quad layouts, i.e. partitionings of freeform surfaces into quadrilateral patch networks. In contrast to established tools for the design of quad layouts or subdivision base meshes, which are often based on creating individual vertices, edges, and quads, our method takes a more global perspective, operating on a higher level of abstraction: the atomic operation of our method is the creation of an entire cyclic strip, delineating a large number of quad patches at once. The global consistency-preserving nature of this approach reduces demands on the user's expertise by requiring less advance planning. Efficiency is achieved using a novel method at the heart of our system, which automatically proposes geometrically and topologically suitable strips to the user. Based on this we provide interaction tools to influence the design process to any desired degree and visual guides to support the user in this task.
Ischemic heart disease and primary care: identifying gender-related differences. An observational study
BACKGROUND Gender-related differences are seen in multiple aspects of both health and illness. Ischemic heart disease (IHD) is a pathology in which diagnostic, treatment and prognostic differences are seen between the sexes, especially in the acute phase and in the hospital setting. The objective of the present study is to analyze whether there are differences between men and women in associated cardiovascular risk factors and secondary pharmacological prevention in the primary care setting. METHODS Retrospective descriptive observational study from January to December of 2006, including 1907 patients diagnosed with ischemic heart disease in the city of Lleida, Spain. The clinical data were obtained from computerized medical records and pharmaceutical records of medications dispensed in pharmacies with official prescriptions. Data were analyzed using bivariate descriptive statistics as well as logistic regression. RESULTS There were no gender-related differences in screening percentages for arterial hypertension, diabetes, obesity, dyslipidemia, and smoking. A greater percentage of women were hypertensive, obese and diabetic compared to men. However, men showed a tendency to achieve control targets more easily than women, with no statistically significant differences. In both sexes, control of cardiovascular risk factors was inadequate, ranging between 10% and 50%. For secondary pharmacological prevention, prescription percentages were greater in men for anticoagulants, beta-blockers, lipid-lowering agents and angiotensin-converting enzyme inhibitors/angiotensin II receptor blockers, with variations of up to 10% across age groups. When adjusting for age and specific diagnoses, differences were maintained for anticoagulants and lipid-lowering agents. CONCLUSION Screening of cardiovascular risk factors was similar in men and women with IHD. Although a greater percentage of women were hypertensive, diabetic or obese, their risk factors tended to be controlled less well than in men. Overall, control of cardiovascular risk factors was poor. Taken as a whole, more men were prescribed secondary prevention drugs, with differences varying by age group and IHD diagnosis.
Cognitive support by language visualization: A case study with hindi language
The main objective of this research is to provide cognitive support for understanding Hindi-language text using automatic text visualization (ATV). ATV is especially useful for persons with learning disabilities (LD). This paper focuses on the background and complexity of the problem. The impact on comprehension of visualized text compared to reading the text alone is discussed. The architecture of Preksha, a Hindi text visualizer, is presented, along with several illustrative examples of scenes generated using it.
The rhetoric of space: Literary and artistic representations of landscape in Republican and Augustan Rome
Horace's famous Ut pictura poesis serves as the point of departure for this book. It is a much discussed passage, in our time and in the past, because it has been subjected to widely divergent interpretations. This follows from the assumption that Horace meant pictures are like poems. Leach, however, argues he meant no such thing; instead, he compared the ways visual and literary works address themselves to the beholder, an aesthetics and semiology of reception, not expression. The subject matter of this book is ancient Roman landscape, not gardens per se, nor cities, although architectural settings (and sets) are discussed; rather, Leach explores the sort of outdoor terrain, in pictures and poems, that serves as the site of events: for example, harbours, hillsides, battlefields and temple compounds as places of arrival and departure, epiphany, war and religious rite. The sources examined range from the writings of Virgil, Caesar, Lucretius, Varro, Horace, Pliny, Propertius and Ovid to the panel and fresco paintings in Rome, Naples, Pompeii and elsewhere. The wide and inclusive expanse of Leach's "landscape" makes the book very long, seemingly wandering in places and nearly intractable as a whole. The argument is best sustained at a local level when specific writings and works are considered, especially those of Virgil in the context of the Odyssey landscapes.
Open reduction and fixation of medial Moore type II fractures of the tibial plateau by a direct dorsal approach
Moore type II Entire Condyle fractures of the tibial plateau represent a rare and highly unstable fracture pattern that usually results from high-impact trauma. Specific recommendations regarding the surgical treatment of these fractures are sparse. We present a series of Moore type II fractures treated by open reduction and internal fixation through a direct dorsal approach. Five patients (3 females, 2 males) with Entire Condyle fractures were retrospectively analyzed after a mean follow-up period of 39 months (range 12–61 months). Mean patient age at the time of operation was 36 years (range 26–43 years). Follow-up included clinical and radiological examination. Furthermore, all patients completed SF-36 and Lysholm knee score questionnaires. Average range of motion was 127/0/1°, with all patients reaching full extension at the time of last follow-up. Patients reached a mean Lysholm score of 81.2 points (range 61–100 points) and an average SF-36 of 82.36 points (range 53.75–98.88 points). One patient sustained a deep wound infection after elective implant removal 1 year after the initial surgery. Overall, all patients were highly satisfied with the postoperative result. The direct dorsal approach to the tibial plateau is an adequate method to enable direct fracture exposure, open reduction, and internal fixation in posterior shearing medial Entire Condyle fractures and is especially valuable when the dorsolateral plateau is also depressed.
Wavelet-Based Neural Network for Power Disturbance Recognition and Classification
In this paper, a prototype wavelet-based neural-network classifier for recognizing power-quality disturbances is implemented and tested under various transient events. The discrete wavelet transform (DWT) technique is integrated with the probabilistic neural-network (PNN) model to construct the classifier. First, the multiresolution-analysis technique of the DWT and Parseval's theorem are employed to extract the energy distribution features of the distorted signal at different resolution levels. Then, the PNN classifies these extracted features to identify the disturbance type according to the transient duration and the energy features. Since the proposed methodology greatly reduces the number of features of the distorted signal without losing its original properties, less memory space and computing time are required. Tests on various transient events, such as momentary interruption, capacitor switching, voltage sag/swell, harmonic distortion, and flicker, show that the classifier can detect and classify different power disturbance types efficiently.
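A minimal sketch of the energy-feature step described in this abstract: decompose a distorted waveform with the DWT and use the per-level energy distribution (in the spirit of Parseval's theorem) as the feature vector handed to a classifier such as a PNN. The wavelet choice ('db4'), the 6-level decomposition, the sampling rate and the synthetic sag signal are illustrative assumptions, not the paper's settings.

    import numpy as np
    import pywt

    def dwt_energy_features(signal, wavelet="db4", level=6):
        """Return the fraction of total energy captured at each DWT level."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)    # [cA_n, cD_n, ..., cD_1]
        energies = np.array([np.sum(c ** 2) for c in coeffs])  # energy per subband
        return energies / energies.sum()                       # normalized distribution

    # Example: a 60 Hz waveform with a voltage sag between 0.05 s and 0.1 s.
    fs = 3840
    t = np.arange(0, 0.2, 1 / fs)
    v = np.sin(2 * np.pi * 60 * t)
    v[(t > 0.05) & (t < 0.1)] *= 0.5
    features = dwt_energy_features(v)
    print(features)  # compact feature vector a PNN (or any classifier) could consume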
EmotionX-Area66: Predicting Emotions in Dialogues using Hierarchical Attention Network with Sequence Labeling
This paper presents our system submitted to the EmotionX challenge, an emotion detection task on dialogues in the EmotionLines dataset. We formulate this as a hierarchical network in which the network learns data representations at both the utterance level and the dialogue level. Our model is inspired by the Hierarchical Attention Network (HAN) and uses pre-trained word embeddings as features. We formulate emotion detection in dialogues as a sequence labeling problem to capture the dependencies among labels. We report performance accuracy for four emotions (anger, joy, neutral and sadness). The model achieved unweighted accuracy of 55.38% on the Friends test dataset and 56.73% on the EmotionPush test dataset. We report an improvement of 22.51% on the Friends dataset and 36.04% on the EmotionPush dataset over baseline results.
Impact of qigong exercise on self-efficacy and other cognitive perceptual variables in patients with essential hypertension.
OBJECTIVES The purpose of this study was to investigate the impact of practicing qigong on middle-aged subjects with essential hypertension. Impacts on blood pressure, reported self-efficacy, perceived benefit, and emotion were observed. DESIGN Thirty-six (36) adult volunteers were assigned to either a waiting-list control or a qigong group that practiced two 30-minute qigong programs per week over 8 consecutive weeks. RESULTS Systolic and diastolic blood pressure were significantly reduced in members of the qigong group after 8 weeks of exercise. Significant improvements in self-efficacy and other cognitive perceptual variables were also documented in the qigong group compared to baseline. CONCLUSIONS This pilot study demonstrates the positive effects of practicing qigong on controlling blood pressure and enhancing perceptions of self-efficacy.
Validity of the Functional Gait Assessment in patients with Parkinson disease: construct, concurrent, and predictive validity.
BACKGROUND The Functional Gait Assessment (FGA) is a validated measurement of gait-related activities in certain populations and may be potentially useful to assess balance and gait disorders in patients with Parkinson disease (PD). OBJECTIVE The purpose of this study was to determine the construct, concurrent, and predictive validity of the FGA in inpatients with PD. DESIGN This was a prospective cohort study. METHODS One hundred twenty-one inpatients with PD were prospectively enrolled. The FGA and other relevant appraisals of gait, balance, disease severity, and activities of daily living were performed. Six months later, the patients were interviewed by telephone to have their fall information collected. Principal component analysis was used to determine construct validity. Spearman correlation coefficients were used to determine concurrent validity between the FGA and other measures. Cutoff point, sensitivity, specificity, and positive likelihood ratio were calculated for predictive validity based on the receiver operating characteristic curve. RESULTS One common factor was extracted for construct validity, which cumulatively explained 64.0% of the total variance. Correlation coefficients for the FGA compared with other measures ranged from .57 to .85. The cutoff point for predicting falls was 18, with sensitivity of 80.6%, specificity of 80.0%, and positive likelihood ratio of 4.03. LIMITATIONS This study was limited by the length of time of follow-up and self-reports of falls without the requirement of a fall diary. Medication adjustment after the FGA evaluation may have led to a different cutoff score for identifying those patients who were at risk of falling. CONCLUSIONS The FGA demonstrated good construct validity in patients with PD. It had moderate to strong correlations with other balance and gait appraisals. The FGA can be used to predict falls within the subsequent 6 months.
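The study reports a fall-prediction cutoff with its sensitivity, specificity and positive likelihood ratio derived from a receiver operating characteristic curve. The following is an illustrative sketch, on made-up scores rather than the study's data, of how such a cutoff could be selected; the Youden-index criterion is an assumption, since the abstract does not state how the cutoff was chosen.

    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(0)
    # Placeholder FGA scores: 40 fallers (lower scores) and 80 non-fallers.
    fga_scores = np.concatenate([rng.normal(15, 4, 40), rng.normal(23, 4, 80)])
    fell = np.concatenate([np.ones(40), np.zeros(80)])   # 1 = fell within 6 months

    # Lower FGA means higher fall risk, so score the classifier with -FGA.
    fpr, tpr, thresholds = roc_curve(fell, -fga_scores)
    youden = tpr - fpr                                   # Youden index at each threshold
    best = np.argmax(youden)
    cutoff = -thresholds[best]                           # FGA at or below this flags fall risk
    sens, spec = tpr[best], 1 - fpr[best]
    lr_pos = sens / (1 - spec)
    print(f"cutoff={cutoff:.0f}, sensitivity={sens:.2f}, specificity={spec:.2f}, LR+={lr_pos:.2f}")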
A Short Introduction to Boosting
Boosting is a general method for improving the accuracy of any given learning algorithm. This short overview paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting as well as boosting’s relationship to support-vector machines. Some examples of recent applications of boosting are also described.
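For readers who want the mechanics behind the overview, here is a minimal AdaBoost sketch with decision stumps, following the standard weighted-error and weight-update rules; it is illustrative only and not code from the paper. Labels are assumed to be in {-1, +1}.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def adaboost_fit(X, y, n_rounds=50):
        """y must be in {-1, +1}. Returns (stumps, alphas)."""
        n = len(y)
        w = np.full(n, 1.0 / n)                          # uniform example weights
        stumps, alphas = [], []
        for _ in range(n_rounds):
            stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
            pred = stump.predict(X)
            err = np.clip(np.sum(w[pred != y]), 1e-10, 1 - 1e-10)  # weighted error
            alpha = 0.5 * np.log((1 - err) / err)        # weak learner's vote
            w *= np.exp(-alpha * y * pred)               # upweight misclassified examples
            w /= w.sum()
            stumps.append(stump)
            alphas.append(alpha)
        return stumps, alphas

    def adaboost_predict(stumps, alphas, X):
        scores = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
        return np.sign(scores)                           # weighted majority vote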
Security Architecture for Cloud Computing Environments
Cloud computing is becoming a very popular computing paradigm for network applications in open distributed environments. In essence, the idea is to host various application servers in a virtual network environment ("cloud") and offer their use through the concept of (Web) and other services. Contrary to the classical client–server model of network applications, in a cloud environment users do not access individual application servers, do not establish direct connections with them, do not send request messages directly to those servers, and do not receive direct replies from them. Instead, clients access those application servers through cloud access proxies, special servers that publish and export the various (usually Web) services available in a cloud.
Review: Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues
Knowledge is a broad and abstract notion that has defined epistemological debate in western philosophy since the classical Greek era. In the past few years, however, there has been a growing interest in treating knowledge as a significant organizational resource. Consistent with the interest in organizational knowledge and knowledge management (KM), IS researchers have begun promoting a class of information systems, referred to as knowledge management systems (KMS). The objective of KMS is to support creation, transfer, and application of knowledge in organizations. Knowledge and knowledge management are complex and multi-faceted concepts. Thus, effective development and implementation of KMS requires a foundation in several rich literatures.
Teacher consultation and coaching within mental health practice: classroom and child effects in urban elementary schools.
OBJECTIVE To examine effects of a teacher consultation and coaching program delivered by school and community mental health professionals on change in observed classroom interactions and child functioning across one school year. METHOD Thirty-six classrooms within 5 urban elementary schools (87% Latino, 11% Black) were randomly assigned to intervention (training + consultation/coaching) and control (training only) conditions. Classroom and child outcomes (n = 364; 43% girls) were assessed in the fall and spring. RESULTS Random effects regression models showed main effects of intervention on teacher-student relationship closeness, academic self-concept, and peer victimization. Results of multiple regression models showed levels of observed teacher emotional support in the fall moderated intervention impact on emotional support at the end of the school year. CONCLUSIONS Results suggest teacher consultation and coaching can be integrated within existing mental health activities in urban schools and impact classroom effectiveness and child adaptation across multiple domains.
REM: A Collaborative Framework for Building Indigenous Cultural Competence.
The well-documented health disparities between the Australian Indigenous and non-Indigenous populations mandate a comprehensive response from health professionals. This article outlines the approach taken by one faculty of health in a large urban Australian university to enhance cultural competence in students from a variety of fields. Here we outline a collaborative and deeply respectful process in which Indigenous and non-Indigenous university staff collectively developed a model that has framed the embedding of a common faculty Indigenous graduate attribute across the curriculum. Through collaborative committee processes, the development of the principles of "Respect; Engagement and sharing; Moving forward" (REM) has provided both a framework and a way of "being and doing" our work. By drawing together the recurring principles and qualities that characterize Indigenous cultural competence, the intended result is that students and staff learn, and bring into their lives and practice, important Indigenous cultural understanding.
Dual Low-Rank Decompositions for Robust Cross-View Learning
Cross-view data are increasingly common, as different viewpoints or sensors attempt to richly represent data in various views. However, cross-view data from different views present a significant divergence: cross-view data from the same category have a lower similarity than those in different categories but within the same view. Considering that each cross-view sample is drawn from two intertwined manifold structures, i.e., a class manifold and a view manifold, in this paper we propose a robust cross-view learning framework to seek a robust view-invariant low-dimensional space. Specifically, we develop a dual low-rank decomposition technique to unweave those intertwined manifold structures from one another in the learned space. Moreover, we design two discriminative graphs to constrain the dual low-rank decompositions by fully exploring the prior knowledge. Thus, our proposed algorithm is able to capture more within-class knowledge and mitigate the view divergence to obtain a more effective view-invariant feature extractor. Furthermore, our proposed method is very flexible in addressing the challenging cross-view learning scenario in which we obtain only the view information of the training data, while the view information of the evaluation data is unknown. Experiments on face and object benchmarks demonstrate the effective performance of our designed model over state-of-the-art algorithms.
CFD Analysis of Axial Flow Fans with Skewed Blades
The present work deals with axial flow fans, which are primarily used to provide the airflow required for heat and mass transfer operations in various industrial equipment and processes. They can be used for cooling, ventilation or drying purposes; applications include cooling towers for air-conditioning and ventilation, humidifiers in textile mills, air heat exchangers for various chemical processes, and ventilation and exhaust as in the mining industry. In view of these applications, various efforts have been made in the past to evaluate the performance of axial flow fans. In the present work, a computational investigation of an axial flow fan with forward- and backward-skewed blade profiles is carried out using the CFD software FLUENT 6.3, and the results are compared with experimental results from the literature. The CFD analysis is done by modeling the axial fan in GAMBIT 2.2 and using the standard k-ε model with the standard wall function for modeling turbulence. The analysis is carried out with a blade stagger angle of 25°, a skew angle of 8.3°, and at 1440 rpm and 1800 rpm. The aim is to analyze the fan with these two blade profiles and compare the static pressure, flow rate, flow coefficient and pressure coefficient generated by the fan, and hence to determine the efficiency of the axial flow fan. The computational results are found to be in good agreement with the experimental results taken from the literature.
Road Lane Detection by Discriminating Dashed and Solid Road Lanes Using a Visible Light Camera Sensor
With the increasing need for road lane detection used in lane departure warning systems and autonomous vehicles, many studies have been conducted to turn road lane detection into a virtual assistant to improve driving safety and reduce car accidents. Most of the previous research approaches detect the central line of a road lane and not the accurate left and right boundaries of the lane. In addition, they do not discriminate between dashed and solid lanes when detecting the road lanes. However, this discrimination is necessary for the safety of autonomous vehicles and the safety of vehicles driven by human drivers. To overcome these problems, we propose a method for road lane detection that distinguishes between dashed and solid lanes. Experimental results with the Caltech open database showed that our method outperforms conventional methods.
Latin American Poetry: Origins and Presence
Acknowledgements 1. Introduction 2. Vernacular American Map of Latin America 3. The great song of America 4. Modernism and Ruben Dario 5. Brazilian Modernism 6. Precedent, self and communal self: Vallejo and Neruda 7. The traditions of Octavio Paz 8. Modern priorities Notes Bibliography Index.
18–40-GHz Beam-Shaping/Steering Phased Antenna Array System Using Fermi Antenna
This paper concerns 18-40 GHz 1×16 beam-shaping and 1×8 beam-steering phased antenna arrays (PAAs) realized on a single low-cost printed circuit board substrate. The system consists of a wideband power divider with amplitude taper for sidelobe suppression, a wideband microstrip-to-slotline transition, a low-cost true time piezoelectric transducer (PET)-controlled phase shifter, and wideband Fermi antennas with corrugations along the sides. A coplanar stripline is used under the PET-controlled phase shifter, which can generate 50% more phase shift compared to the perturbation on microstrip lines previously published. The systems are fabricated using electro-fine-forming microfabrication technology. Measured return loss is better than 10 dB from 18 to 40 GHz for both the beam-shaping and beam-steering PAAs. The beam-shaping PAA has a 12° 3-dB beamwidth broadening range. The sidelobe ratios (SLRs) are 27, 23, and 20 dB at 20, 30, and 40 GHz, respectively, without perturbation. The SLRs are 20, 16, and 15 dB at 20, 30, and 40 GHz with maximum perturbation. The beam-steering PAA has a 36° (-17° to +19°) beam-scanning range measured at 30 GHz.
Reduction of Unbalanced Axial Magnetic Force in Postfault Operation of a Novel Six-Phase Double-Stator Axial-Flux PM Machine Using Model Predictive Control
This paper investigates the postfault operation of a novel six-phase double-stator axial-flux permanent-magnet machine with detached winding connection. In previous research, this configuration was found to be superior to existing winding connections except that its unbalanced magnetic force in postfault operation cannot be ignored. In this paper, an axial magnetic force balancing method (AMFBM) is proposed to reduce the unbalanced magnetic force by deriving a set of special winding currents. By comparing the electromagnetic torque and axial magnetic force of the traditional winding current and the proposed method in postfault operation through a finite-element analysis, it is verified that AMFBM can eliminate most of the unbalanced axial magnetic force while keeping torque ripple at a low level. In order to realize the AMFBM, finite-control-set model predictive control is adopted. A postfault model of dual three-phase permanent-magnet machines with a modified vector space decomposition method is first brought forward to predict the future behavior of the controlled variables under various voltage inputs. After that, a cost function is designed to track the desired winding current by evaluating all the predictions to decide the next step of the inverter. Experimental results show that the control scheme performs well in both dynamic and steady-state situations.
CodeHint: dynamic and interactive synthesis of code snippets
There are many tools that help programmers find code fragments, but most are inexpressive and rely on static information. We present a new technique for synthesizing code that is dynamic (giving accurate results and allowing programmers to reason about concrete executions), easy-to-use (supporting a wide range of correctness specifications), and interactive (allowing users to refine the candidate code snippets). Our implementation, which we call CodeHint, generates and evaluates code at runtime and hence can synthesize real-world Java code that involves I/O, reflection, native calls, and other advanced language features. We have evaluated CodeHint in two user studies and show that its algorithms are efficient and that it improves programmer productivity by more than a factor of two.
The emotional brain
The discipline of affective neuroscience is concerned with the neural bases of emotion and mood. The past 30 years have witnessed an explosion of research in affective neuroscience that has addressed questions such as: which brain systems underlie emotions? How do differences in these systems relate to differences in the emotional experience of individuals? Do different regions underlie different emotions, or are all emotions a function of the same basic brain circuitry? How does emotion processing in the brain relate to bodily changes associated with emotion? And, how does emotion processing in the brain interact with cognition, motor behaviour, language and motivation?
Vehicle logo recognition in traffic images using HOG features and SVM
In this paper a new vehicle logo recognition approach is presented using Histograms of Oriented Gradients (HOG) and Support Vector Machines (SVM). The system is specifically devised to work with images supplied by traffic cameras, where the logos appear at low resolution. A sliding-window technique combined with a majority voting scheme is used to provide the estimated car manufacturer. The proposed approach is assessed on a set of 3,579 vehicle images, captured by two different traffic cameras, belonging to 27 distinct vehicle manufacturers. The reported results show an overall recognition rate of 92.59%, which supports the use of the system for real applications.
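A minimal sketch of the kind of HOG + linear-SVM pipeline with sliding-window majority voting described above. The 64×64 patch size, HOG parameters, window stride and grayscale input are assumptions for illustration, not the paper's configuration.

    import numpy as np
    from skimage.feature import hog
    from skimage.transform import resize
    from sklearn.svm import LinearSVC

    def logo_features(patch):
        """HOG descriptor of a grayscale logo patch (grayscale input assumed)."""
        patch = resize(patch, (64, 64))
        return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

    def train(patches, labels):
        X = np.array([logo_features(p) for p in patches])
        return LinearSVC().fit(X, labels)

    def predict_manufacturer(clf, region, win=64, step=16):
        """Slide a window over the candidate logo region and majority-vote."""
        votes = []
        h, w = region.shape[:2]
        for y in range(0, max(h - win, 0) + 1, step):
            for x in range(0, max(w - win, 0) + 1, step):
                votes.append(clf.predict([logo_features(region[y:y + win, x:x + win])])[0])
        return max(set(votes), key=votes.count)   # most frequent class wins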
Neuro-dynamic programming
Panoptic Studio: A Massively Multiview System for Social Motion Capture
We present an approach to capture the 3D structure and motion of a group of people engaged in a social interaction. The core challenges in capturing social interactions are: (1) occlusion is functional and frequent, (2) subtle motion needs to be measured over a space large enough to host a social group, and (3) human appearance and configuration variation is immense. The Panoptic Studio is a system organized around the thesis that social interactions should be measured through the perceptual integration of a large variety of view points. We present a modularized system designed around this principle, consisting of integrated structural, hardware, and software innovations. The system takes, as input, 480 synchronized video streams of multiple people engaged in social activities, and produces, as output, the labeled time-varying 3D structure of anatomical landmarks on individuals in the space. The algorithmic contributions include a hierarchical approach for generating skeletal trajectory proposals, and an optimization framework for skeletal reconstruction with trajectory re-association.
Variational Auto-encoded Deep Gaussian Processes
We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters, which otherwise grow linearly in proportion to the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables handling datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.
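A tiny sketch of the amortization idea at the core of this abstract: a recognition network (here a one-hidden-layer MLP) maps each observation to the mean and log-variance of its variational posterior, so the number of variational parameters stays fixed as the dataset grows, and samples are drawn with the reparameterization trick. The network sizes and single hidden layer are illustrative assumptions, not the paper's architecture.

    import numpy as np

    rng = np.random.default_rng(0)
    D, H, Q = 20, 64, 3                     # data dim, hidden units, latent dim
    W1, b1 = rng.normal(0, 0.1, (D, H)), np.zeros(H)
    W_mu, b_mu = rng.normal(0, 0.1, (H, Q)), np.zeros(Q)
    W_ls, b_ls = rng.normal(0, 0.1, (H, Q)), np.zeros(Q)

    def recognition(x):
        """Return per-point variational mean and log-variance."""
        h = np.tanh(x @ W1 + b1)
        return h @ W_mu + b_mu, h @ W_ls + b_ls

    def sample_latent(x):
        """Reparameterization: z = mu + sigma * eps keeps the sample differentiable."""
        mu, log_var = recognition(x)
        eps = rng.standard_normal(mu.shape)
        return mu + np.exp(0.5 * log_var) * eps

    X = rng.normal(size=(5, D))             # five observations
    Z = sample_latent(X)                    # their stochastic latent codes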
Colour and its effects in interior environment : a review
Colour is an inseparable as well as an important aspect of interior design; the greatest influence in an interior comes from the design of colour. It is therefore important to study colour and its effects, both physiological and psychological, in the interior environment. For this, articles related to the use of colour in both residential and commercial interiors were reviewed and analyzed from the existing literature. The three major areas reviewed were (1) psychological and physiological effects of colour, (2) the meaning of warm, cool and neutral colours, and (3) the effect of colour in forms. The results show that colour is important in designing functional spaces, and this analysis may benefit architects, interior designers, and homeowners in using colour effectively in the interior environment.
Cohabiting family members share microbiota with one another and with their dogs
Human-associated microbial communities vary across individuals: possible contributing factors include (genetic) relatedness, diet, and age. However, our surroundings, including individuals with whom we interact, also likely shape our microbial communities. To quantify this microbial exchange, we surveyed fecal, oral, and skin microbiota from 60 families (spousal units with children, dogs, both, or neither). Household members, particularly couples, shared more of their microbiota than individuals from different households, with stronger effects of co-habitation on skin than oral or fecal microbiota. Dog ownership significantly increased the shared skin microbiota in cohabiting adults, and dog-owning adults shared more 'skin' microbiota with their own dogs than with other dogs. Although the degree to which these shared microbes have a true niche on the human body, vs transient detection after direct contact, is unknown, these results suggest that direct and frequent contact with our cohabitants may significantly shape the composition of our microbial communities. DOI:http://dx.doi.org/10.7554/eLife.00458.001.
Implementation of 3D Obstacle Compliant Mobility Models for UAV Networks in ns-3
UAV networks are envisioned to play a crucial role in future generations of wireless networks. The mechanical degrees of freedom in the movement of UAVs provide various advantages for tactical and civilian applications. Due to the high cost of failures in system-based tests, initial analysis and refinement of designs and algorithms for UAV applications are performed through rigorous simulations. The current trend of UAV-specific simulators is mainly biased towards the mechanical properties of flying. For network-centric simulations, the intended measurements of protocol performance in mobile scenarios are conventionally captured from general-purpose network simulators, which are not natively equipped with comprehensive models for 3D movements of UAVs. To facilitate such simulations for UAV systems, this paper presents different mobility models for emulation of the movement of a UAV. Detailed descriptions of three mobility models (random walk, random direction, and Gauss-Markov) are presented, and their associated movement patterns are characterized. This characterization is further extended by considering the effect of large obstacles on the movement patterns of nodes following the three models. The mobility models are prepared as open-source add-ons for the ns-3 network simulator.
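As a point of reference for one of the three models, here is a sketch of a single step of the 3D Gauss-Markov mobility model: speed, direction and pitch each follow s_n = a·s_{n-1} + (1-a)·mean + sqrt(1-a²)·gaussian. The parameter values below are illustrative; the add-ons described in the paper, like ns-3's existing GaussMarkovMobilityModel, are implemented in C++ rather than Python.

    import numpy as np

    rng = np.random.default_rng(1)

    def gauss_markov_step(state, alpha=0.85, mean_speed=10.0, mean_dir=0.0,
                          mean_pitch=0.0, sigma=1.0, dt=1.0):
        speed, direction, pitch, pos = state
        g = lambda: sigma * rng.standard_normal()
        speed = alpha * speed + (1 - alpha) * mean_speed + np.sqrt(1 - alpha**2) * g()
        direction = alpha * direction + (1 - alpha) * mean_dir + np.sqrt(1 - alpha**2) * g()
        pitch = alpha * pitch + (1 - alpha) * mean_pitch + np.sqrt(1 - alpha**2) * g()
        velocity = speed * np.array([np.cos(direction) * np.cos(pitch),
                                     np.sin(direction) * np.cos(pitch),
                                     np.sin(pitch)])
        return speed, direction, pitch, pos + velocity * dt   # new state and 3D position

    state = (10.0, 0.0, 0.0, np.zeros(3))
    for _ in range(100):
        state = gauss_markov_step(state)     # 3D trajectory of one UAV node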
Morphological Operations and Projection Profiles based Segmentation of Handwritten Kannada Document
Segmentation is an important task of any Optical Character Recognition (OCR) system. It separates image text documents into lines, words and characters. The accuracy of an OCR system mainly depends on the segmentation algorithm being used. Segmentation of handwritten text in some Indian languages like Kannada, Telugu and Assamese is difficult compared with Latin-based languages because of their structural complexity and larger character sets, which contain vowels, consonants and compound characters, some of which may overlap. Despite several successful OCR efforts all over the world, the development of OCR tools for Indian languages is still an ongoing process. Character segmentation plays an important role in character recognition because incorrectly segmented characters are unlikely to be recognized correctly. In this paper, a scheme for segmenting handwritten Kannada scripts into lines, words and characters using morphological operations and projection profiles is proposed. The method was tested on totally unconstrained handwritten Kannada scripts, which pose greater challenges due to the complexity of the script. Use of morphology made extracting text lines efficient, with an average extraction rate of 94.5%. Because of varying inter- and intra-word gaps, average segmentation rates of 82.35% and 73.08% were obtained for words and characters, respectively.
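A minimal sketch of the morphology-plus-projection-profile idea for line extraction: dilate the binarized page so characters within a line merge, then cut wherever the horizontal projection (ink per row) drops to zero. The structuring-element size and gap threshold are assumptions; the same idea applied column-wise yields word and character cuts.

    import numpy as np
    from scipy import ndimage

    def segment_lines(binary_page, min_gap=3):
        """binary_page: 2D array with 1 = ink, 0 = background."""
        smeared = ndimage.binary_dilation(binary_page, structure=np.ones((1, 15)))
        profile = smeared.sum(axis=1)               # ink count per row
        in_line, start, gap, lines = False, 0, 0, []
        for row, value in enumerate(profile):
            if value > 0:
                if not in_line:
                    in_line, start = True, row
                gap = 0
            elif in_line:
                gap += 1
                if gap >= min_gap:                  # a real inter-line gap
                    lines.append((start, row - gap + 1))
                    in_line = False
        if in_line:
            lines.append((start, len(profile)))
        return [binary_page[top:bottom] for top, bottom in lines]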
A 3.4 – 6.2 GHz Continuously tunable electrostatic MEMS resonator with quality factor of 460–530
In this paper we present the first MEMS electrostatically tunable loaded-cavity resonator that simultaneously achieves a very high continuous tuning range of 6.2 GHz:3.4 GHz (1.8:1) and a quality factor of 460-530 in a volume of 18×30×4 mm³ including the actuation scheme and biasing lines. The operating principle relies on tuning the capacitance of the loaded cavity by controlling the gap between an electrostatically actuated membrane and the cavity post underneath it. Particular attention is paid to the fabrication of the tuning mechanism in order to avoid a) quality factor degradation due to the biasing lines and b) hysteresis and creep issues. A single-crystal silicon membrane coated with a thin gold layer is the key to the success of the design.
Chorus: a crowd-powered conversational assistant
Despite decades of research attempting to establish conversational interaction between humans and computers, the capabilities of automated conversational systems are still limited. In this paper, we introduce Chorus, a crowd-powered conversational assistant. When using Chorus, end users converse continuously with what appears to be a single conversational partner. Behind the scenes, Chorus leverages multiple crowd workers to propose and vote on responses. A shared memory space helps the dynamic crowd workforce maintain consistency, and a game-theoretic incentive mechanism helps to balance their efforts between proposing and voting. Studies with 12 end users and 100 crowd workers demonstrate that Chorus can provide accurate, topical responses, answering nearly 93% of user queries appropriately, and staying on-topic in over 95% of responses. We also observed that Chorus has advantages over pairing an end user with a single crowd worker and end users completing their own tasks in terms of speed, quality, and breadth of assistance. Chorus demonstrates a new future in which conversational assistants are made usable in the real world by combining human and machine intelligence, and may enable a useful new way of interacting with the crowds powering other systems.
Education, Signaling and Mismatch
We assess the importance of education as a signal of workers' skills and the effects of poor signaling quality on labor market outcomes. We do so by merging a frictional labor market model with a signaling setup in which there is a privately observed idiosyncratic component in the cost of education. Given that highly skilled workers cannot correctly signal their abilities, their wages will be lower and they will not be matched to the "right" vacancies, or may be unemployed. Skilled workers will then have lower incentives to move to high-productivity markets. Furthermore, fewer vacancies will be created in labor markets where skills matter, and incentives for workers to invest in education will be lower. Overall, an economy where education is a noisier signal generates lower educational attainment, higher unemployment and lower productivity. In addition, we provide evidence suggesting that education plays a poor signaling role in Latin American countries. We then calibrate our model using Peruvian data, and through a quantitative exercise we show that this mechanism could be relevant in explaining the relatively bad performance of labor markets in Latin American countries.
The Theory and Practice of Culturally Relevant Education : A Synthesis of Research Across Content Areas
Many teachers and educational researchers have claimed to adopt tenets of culturally relevant education (CRE). However, recent work describes how standardized curricula and testing have marginalized CRE in educational reform discourses. In this synthesis of research, we sought examples of research connecting CRE to positive student outcomes across content areas. It is our hope that this synthesis will be a reference useful to educational researchers, parents, teachers, and education leaders wanting to reframe public debates in education away from neoliberal individualism, whether in a specific content classroom or in a broader educational community.
Multi-scale volumes for deep object detection and localization
This study aims to analyze the benefits of improved multiscale reasoning for object detection and localization with deep convolutional neural networks. To that end, an efficient and general object detection framework which operates on scale volumes of a deep feature pyramid is proposed. In contrast to the proposed approach, most current state-of-the-art object detectors operate on a single-scale in training, while testing involves independent evaluation across scales. One benefit of the proposed approach is in better capturing of multi-scale contextual information, resulting in significant gains in both detection performance and localization quality of objects on the PASCAL VOC dataset and a multi-view highway vehicles dataset. The joint detection and localization scale-specific models are shown to especially benefit detection of challenging object categories which exhibit large scale variation as well as detection of small objects.
Would You Barter with God? Why Holy Debts and Not Profane Markets Created Money
Attempting to revitalize the substantive approach to economics in the tradition of K. Polanyi, this paper revives the neglected substantive theory of money's origins by Bernhard Laum and thus disputes the formal approaches that see the origins of money in the context of trade. A wide range of evidence, from archeological to etymological, is utilized to demonstrate that relations between men and God, carried out through the intermediary of state-religious authorities, played a causal role in the genesis of the ox-unit of value and account, and, further, in the origins of money, and, subsequently, coinage. The substantive state-religious approach presented in this paper is also compared and contrasted to the Chartalist perspective on money's origins. It is concluded that the substantive approach presented in this paper differs from the more formal approaches (e.g., Metallism) because it does not rely upon a projection of modern institutions and habits of thought (e.g., a medium of exchange; monetary taxation) into ancient societies.
Ensemble of extreme learning machine for landslide displacement prediction based on time series analysis
Landslide hazard is a complex nonlinear dynamical system with uncertainty. The evolution of a landslide is influenced by many factors such as tectonics, rainfall and reservoir level fluctuation. Using a time series model, the total accumulative displacement of a landslide can be divided into a trend component displacement and a periodic component displacement according to the response relation between dynamic changes in landslide displacement and the inducing factors. In this paper, a novel neural network technique called ensemble of extreme learning machines (E-ELM) is proposed to investigate the interactions of the different inducing factors affecting the evolution of a landslide. Grey relational analysis is used to sieve out the more influential inducing factors as the inputs to the E-ELM. The trend component displacement and periodic component displacement are forecasted separately; the total predicted displacement is then obtained by adding the predicted values of the two components. The performance of our model is evaluated using real data from the Baishuihe landslide in the Three Gorges Reservoir area of China, and it provides a good representation of the measured slide displacement behavior.
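A sketch of two building blocks described above: splitting the cumulative displacement into a trend term (here a moving average) and a periodic residual, and a basic extreme learning machine (random hidden weights with a least-squares output layer) combined into an ensemble by averaging. The window size, hidden width and averaging rule are illustrative assumptions rather than the paper's settings.

    import numpy as np

    def decompose(displacement, window=12):
        """Split a cumulative displacement series into trend and periodic parts."""
        trend = np.convolve(displacement, np.ones(window) / window, mode="same")
        return trend, displacement - trend

    class ELM:
        def __init__(self, n_hidden=50, seed=0):
            self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)                     # random feature map
            self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # least-squares output weights
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

    def ensemble_predict(models, X):
        """Simple E-ELM: average the outputs of several independently seeded ELMs."""
        return np.mean([m.predict(X) for m in models], axis=0)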
Pressure recording analytical method and bioreactance for stroke volume index monitoring during pediatric cardiac surgery.
BACKGROUND It is currently uncertain which hemodynamic monitoring device reliably measures stroke volume and tracks cardiac output changes in pediatric cardiac surgery patients. OBJECTIVE To evaluate the difference between stroke volume index (SVI) measured by pressure recording analytical method (PRAM) and bioreactance and their ability to track changes after a therapeutic intervention. METHODS A single-center prospective observational cohort study in children undergoing cardiac surgery with cardiopulmonary bypass (CPB) was conducted. Twenty children below 20 kg with median (interquartile range) weight of 5.3 kg (4.1-7.8) and age of 6 months (3-20) were enrolled. Data were collected after anesthesia induction, at the end of CPB, before fluid administration and after fluid administration. Overall, median-IQR PRAM SVI values (23 ml·m(-2), 19-27) were significantly higher than bioreactance SVI (15 ml·m(-2), 12-25, P = 0.0001). Correlation (r(2) ) between the two methods was 0.15 (P = 0.0003). The mean difference between the measurements (bias) was 5.7 ml·m(-2) with a standard deviation of 9.6 (95% limits of agreement ranged from -13 to 24 ml·m(-2)). Percentage error was 91.7%. Baseline SVI appeared to be similar, but PRAM SVI was systematically greater than bioreactance thereafter, with the highest gap after the fluid loading phase: 13 (12-18) ml·m(-2) vs. 23 (19-25) ml·m(-2), respectively, P = 0.0013. A multivariable regression model showed that a significant independent inverse correlation with patients' body weight predicted the CI difference between the two methods after fluid challenge (β coefficient -0.12, P = 0.013). CONCLUSIONS Pressure recording analytical method and bioreactance provided similar SVI estimation at stable hemodynamic conditions, while bioreactance SVI values appeared significantly lower than PRAM at the end of CPB and after fluid replacement.
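The agreement statistics quoted above (bias, 95% limits of agreement, and the Critchley and Critchley percentage error) can be computed as sketched below; the two SVI arrays are made-up placeholders, not study data.

    import numpy as np

    def agreement_stats(svi_a, svi_b):
        diff = svi_a - svi_b
        bias = diff.mean()                                   # mean difference
        sd = diff.std(ddof=1)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)           # 95% limits of agreement
        pct_error = 100 * 1.96 * sd / np.mean((svi_a + svi_b) / 2)   # percentage error
        return bias, loa, pct_error

    pram = np.array([23.0, 25.0, 19.0, 27.0, 22.0])          # placeholder SVI, ml/m^2
    bior = np.array([15.0, 24.0, 12.0, 25.0, 14.0])
    print(agreement_stats(pram, bior))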
Validation of HOMA-IR in a model of insulin-resistance induced by a high-fat diet in Wistar rats.
Objective The present study aimed to validate the homeostasis model assessment of insulin resistance (HOMA-IR) against the insulin tolerance test (ITT) in a model of insulin resistance in Wistar rats induced by a 19-week high-fat diet. Materials and methods A total of 30 male Wistar rats weighing 200-300 g were allocated to a high-fat diet group (HFD) (55% fat-enriched chow, ad lib, n = 15) and a standard-diet group (CD) (standard chow, ad lib, n = 15) for 19 weeks. ITT was determined at baseline and in the 19th week. HOMA-IR was determined between the 18th and 19th weeks on three different days, and the mean was considered for analysis. The area under the curve (AUC-ITT) of the blood glucose excursion over 120 minutes after intraperitoneal insulin injection was determined and correlated with the corresponding fasting values for HOMA-IR. Results AUC-ITT and HOMA-IR were significantly greater after the 19th week in HFD compared to CD (p < 0.001 for both). AUC-OGTT was also higher in HFD rats (p = 0.003). HOMA-IR was strongly correlated (Pearson's) with AUC-ITT: r = 0.637; p < 0.0001. ROC curves of HOMA-IR and AUC-ITT showed similar sensitivity and specificity. Conclusion HOMA-IR is a valid measure for determining insulin resistance in Wistar rats.
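For reference, the standard HOMA-IR index is fasting insulin (µU/mL) multiplied by fasting glucose (mmol/L), divided by 22.5; the example values below are arbitrary placeholders, not measurements from this study.

    def homa_ir(fasting_insulin_uU_per_mL, fasting_glucose_mmol_per_L):
        """Homeostasis model assessment of insulin resistance."""
        return fasting_insulin_uU_per_mL * fasting_glucose_mmol_per_L / 22.5

    print(homa_ir(12.0, 5.5))   # -> approximately 2.93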
EbbRT: A Framework for Building Per-Application Library Operating Systems
Efficient use of high-speed hardware requires that operating system components be customized to the application workload. Our general-purpose operating systems are ill-suited for this task. We present EbbRT, a framework for constructing per-application library operating systems for cloud applications. The primary objective of EbbRT is to enable high performance in a tractable and maintainable fashion. This paper describes the design and implementation of EbbRT, and evaluates its ability to improve the performance of common cloud applications. The evaluation of the EbbRT prototype demonstrates that memcached, run within a VM, can outperform memcached run on unvirtualized Linux. The prototype evaluation also demonstrates a 14% performance improvement on a V8 JavaScript engine benchmark, and a node.js webserver that achieves a 50% reduction in 99th-percentile latency compared to running it on Linux.
Microsoft Hololens Tutorial
Provides an abstract of the tutorial presentation and a brief professional biography of the presenter. The complete presentation was not made available for publication as part of the conference proceedings.
MOBILE ROBOT TRAJECTORY TRACKING USING MODEL PREDICTIVE CONTROL
This work focuses on the application of model-based predictive control (MPC) to the trajectory tracking problem of nonholonomic wheeled mobile robots (WMR). The main motivation for the use of MPC in this case relies on its ability to consider, in a straightforward way, control and state constraints that naturally arise in practical problems. Furthermore, MPC techniques consider an explicit performance criterion to be minimized during the computation of the control law. The trajectory tracking problem is solved using two approaches: (1) nonlinear MPC and (2) linear MPC. Simulation results are provided in order to show the effectiveness of both schemes. Considerations regarding the computational effort of the MPC are developed with the purpose of analyzing the real-time implementation viability of the proposed techniques.
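A hedged sketch of the linear-MPC variant for a unicycle WMR: linearize the kinematics around the reference trajectory, stack the predictions over a horizon N into one batch linear system, and minimize a quadratic tracking cost. The version below is unconstrained, so the minimization reduces to a linear solve; with the input and state constraints the abstract mentions, the same matrices would be handed to a QP solver instead. Horizon length, weights and sampling time are illustrative assumptions.

    import numpy as np

    def linearized_model(v_ref, theta_ref, T):
        """Discrete-time linearization of the unicycle error dynamics."""
        A = np.eye(3) + T * np.array([[0, 0, -v_ref * np.sin(theta_ref)],
                                      [0, 0,  v_ref * np.cos(theta_ref)],
                                      [0, 0,  0]])
        B = T * np.array([[np.cos(theta_ref), 0],
                          [np.sin(theta_ref), 0],
                          [0,                 1]])
        return A, B

    def mpc_step(err0, v_ref, theta_ref, T=0.1, N=10, q=1.0, r=0.1):
        """err0: current (x, y, theta) error w.r.t. the reference pose."""
        A, B = linearized_model(v_ref, theta_ref, T)
        # Batch prediction over the horizon: E = Sx @ err0 + Su @ U.
        Sx = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
        Su = np.zeros((3 * N, 2 * N))
        for i in range(N):
            for j in range(i + 1):
                Su[3*i:3*i+3, 2*j:2*j+2] = np.linalg.matrix_power(A, i - j) @ B
        Q = q * np.eye(3 * N)
        R = r * np.eye(2 * N)
        # Minimize E'QE + U'RU  ->  (Su'Q Su + R) U = -Su'Q Sx err0
        U = np.linalg.solve(Su.T @ Q @ Su + R, -Su.T @ Q @ Sx @ err0)
        return U[:2]          # receding horizon: apply only the first (v, omega) correction

    print(mpc_step(np.array([0.2, -0.1, 0.05]), v_ref=0.5, theta_ref=0.0))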
Reproducibility and Validity of the 6-Minute Walk Test Using the Gait Real-Time Analysis Interactive Lab in Patients with COPD and Healthy Elderly
BACKGROUND The 6-minute walk test (6MWT) in a regular hallway is commonly used to assess functional exercise capacity in patients with chronic obstructive pulmonary disease (COPD). However, treadmill walking might provide additional advantages over overground walking, especially if virtual reality and self-paced treadmill walking are combined. Therefore, this study aimed to assess the reproducibility and validity of the 6MWT using the Gait Real-time Analysis Interactive Lab (GRAIL) in patients with COPD and healthy elderly. METHODOLOGY/RESULTS Sixty-one patients with COPD and 48 healthy elderly performed two 6MWTs on the GRAIL. Patients performed two overground 6MWTs and healthy elderly performed one overground test. Differences between consecutive 6MWTs and the test conditions (GRAIL vs. overground) were analysed. Patients walked further in the second overground test (24.8 m, 95% CI 15.2-34.4 m, p<0.001) and in the second GRAIL test (26.8 m, 95% CI 13.9-39.6 m). Healthy elderly improved their second GRAIL test (49.6 m, 95% CI 37.0-62.3 m). The GRAIL 6MWT was reproducible (intra-class coefficients = 0.65-0.80). The best GRAIL 6-minute walk distance (6MWD) in patients was shorter than the best overground 6MWD (-27.3 ± 49.1 m, p<0.001). Healthy elderly walked further on the GRAIL than in the overground condition (23.6 ± 41.4 m, p<0.001). Validity of the GRAIL 6MWT was assessed and intra-class coefficient values ranging from 0.74-0.77 were found. CONCLUSION The GRAIL is a promising system to assess the 6MWD in patients with COPD and healthy elderly. The GRAIL 6MWD seems to be more comparable to the 6MWDs assessed overground than previous studies on treadmills have reported. Furthermore, good construct validity and reproducibility were established in assessing the 6MWD using the GRAIL in patients with COPD and healthy elderly.
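A test-retest intra-class correlation coefficient like those reported above can be computed from the two-way ANOVA mean squares, e.g. ICC(2,1) of Shrout and Fleiss as sketched below; the exact ICC form used in the study is not stated in the abstract, and the walk distances here are placeholders, not study data.

    import numpy as np

    def icc_2_1(data):
        """ICC(2,1): data is n subjects x k repeated measurements."""
        n, k = data.shape
        grand = data.mean()
        row_means = data.mean(axis=1)          # per-subject means
        col_means = data.mean(axis=0)          # per-session means
        ss_rows = k * np.sum((row_means - grand) ** 2)
        ss_cols = n * np.sum((col_means - grand) ** 2)
        ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols
        msr = ss_rows / (n - 1)
        msc = ss_cols / (k - 1)
        mse = ss_err / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    walks = np.array([[420, 445], [510, 525], [380, 410], [605, 612], [470, 468]])
    print(icc_2_1(walks))   # test vs. retest 6MWD in metres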
CATENARY CURVE FITTING FOR GEOMETRIC CALIBRATION
In modern road surveys, hanging power cables are among the most commonly-found geometric features. These cables are catenary curves that are conventionally modelled with three parameters in 2D Cartesian space. With the advent and popularity of the mobile mapping system (MMS), the 3D point clouds of hanging power cables can be captured within a short period of time. These point clouds, similarly to those of planar features, can be used for feature-based self-calibration of the system assembly errors of an MMS. However, to achieve this, a well-defined 3D equation for the catenary curve is needed. This paper proposes three 3D catenary curve models, each having different parameters. The models are examined by least squares fitting of simulated data and real data captured with an MMS. The outcome of the fitting is investigated in terms of the residuals and correlation matrices. Among the proposed models, one of them could estimate the parameters accurately and without any extreme correlation between the variables. This model can also be applied to those transmission lines captured by airborne laser scanning or any other hanging cable-like objects.
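As background for the proposed 3D models, the conventional three-parameter 2D catenary can be fitted by nonlinear least squares as sketched below, with y = y0 + a·(cosh((x - x0)/a) - 1); the simulated sag parameters and initial guesses are arbitrary, and the paper's 3D formulations are not reproduced here.

    import numpy as np
    from scipy.optimize import curve_fit

    def catenary(x, a, x0, y0):
        """Classical 2D catenary with shape parameter a and lowest point (x0, y0)."""
        return y0 + a * (np.cosh((x - x0) / a) - 1.0)

    # Simulated noisy points along a hanging cable.
    x = np.linspace(-20, 20, 200)
    y = catenary(x, a=35.0, x0=2.0, y0=10.0) + np.random.default_rng(0).normal(0, 0.02, x.size)

    params, cov = curve_fit(catenary, x, y, p0=[20.0, 0.0, 0.0])
    print(params)                 # recovered (a, x0, y0)
    print(np.sqrt(np.diag(cov)))  # 1-sigma parameter uncertainties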
Arterial blood pressure as a predictor of the response to fluid administration in euvolemic nonhypotensive or hypotensive isoflurane-anesthetized dogs.
OBJECTIVE To determine the effects of rapid small-volume fluid administration on arterial blood pressure measurements and associated hemodynamic variables in isoflurane-anesthetized euvolemic dogs with or without experimentally induced hypotension. DESIGN Prospective, randomized, controlled study. ANIMALS 13 healthy dogs. PROCEDURES Isoflurane-anesthetized dogs were randomly assigned to conditions of nonhypotension or hypotension (mean arterial blood pressure, 45 to 50 mm Hg) and treatment with lactated Ringer's solution (LRS) or hetastarch (3 or 10 mL/kg [1.4 or 4.5 mL/lb] dose in a 5-minute period or 3 mL/kg dose in a 1-minute period [4 or 5 dogs/treatment; ≥ 10-day interval between treatments]). Hemodynamic variables were recorded before and for up to 45 minutes after fluid administration. RESULTS IV administration of 10 mL/kg doses of LRS or hetastarch in a 5-minute period increased right atrial and pulmonary arterial pressures and cardiac output (CO) when dogs were nonhypotensive or hypotensive, compared with findings before fluid administration; durations of these effects were greater after hetastarch administration. Intravenous administration of 3 mL of hetastarch/kg in a 5-minute period resulted in an increase in CO when dogs were nonhypotensive. Intravenous administration of 3 mL/kg doses of LRS or hetastarch in a 1-minute period increased right atrial pressure and CO when dogs were nonhypotensive or hypotensive. CONCLUSIONS AND CLINICAL RELEVANCE Administration of LRS or hetastarch (3 or 10 mL/kg dose in a 5-minute period or 3 mL/kg dose in a 1-minute period) improved CO in isoflurane-anesthetized euvolemic dogs with or without hypotension. Overall, arterial blood pressure measurements were a poor predictor of the hemodynamic response to fluid administration.
Listen and Translate: A Proof of Concept for End-to-End Speech-to-Text Translation
PHOTOPHYSICS OF FULLERENE-DOPED MACROMOLECULES: NONLINEAR ABSORBERS, DIFFRACTIVE ELEMENTS, AND SPATIAL LIGHT MODULATORS
There is an ongoing search for new media which absorb laser irradiation efficiently over a broad spectral range and in a wide incident energy region. π-conjugated organic materials are promising compounds for these purposes because they allow their physical properties to be modified by doping with various impurities, including fullerenes. In the present paper, the effect of fullerene doping on the spectral and nonlinear optical properties and dynamic parameters of conjugated organic systems based on 2-cyclooctylamino-5-nitropyridine, polyimide, polyaniline, etc., has been studied. Fullerenes are introduced into these materials because of their high electron affinity (2.6-2.7 eV), which allows the intermolecular donor-acceptor interaction to be reinforced. The charge transfer complex thus formed opens up new nanostructure potentials. This complex has a higher excited-state absorption cross-section than the ground state; fullerene-doped systems are therefore reverse saturable absorption materials and may be used for sensor and eye protection from intense laser irradiation. Fullerene sensitization and complex formation reveal new spectral and energy ranges: a bathochromic shift is observed, and a new absorption band appears in the IR spectral range of the fullerene-doped materials. A new field gradient is formed as the charge transfer path changes (the charge is transferred from the intramolecular donor fragment not to the intramolecular acceptor but to the fullerene); as a result, a photorefractive effect is possible in these structures. The large laser-induced change in the refractive index predicts large nonlinear refraction and third-order susceptibility. The fullerene-doped complex also influences the rotational ability of liquid crystal dipoles; the temporal characteristics of nematic liquid crystal cells have been improved by at least one order of magnitude. Experimental: 3-5% solutions of the conjugated materials in 1,1,2,2-tetrachloroethane (TClE) were used, doped with fullerene C60 and/or C70 at concentrations from 0.2 wt.% to 5 wt.%; fullerenes have a relatively high solubility in TClE (about 5.3 mg/mL). 2-cyclooctylamino-5-nitropyridine, polyimide, polyaniline, and dispersed nematic liquid crystal compounds based on the materials mentioned above were used. Films 2.5-5 µm thick were spun onto glass or quartz substrates; 10-12 µm thick liquid crystal structures were prepared under conditions in which the refractive index of the ordinary wave coincided with the refractive index of the conjugated materials. A general view of the resulting films was presented in an earlier paper.
Cosmo : A Life-like Animated Pedagogical Agent with Deictic Believability
Life-like animated interface agents for knowledge-based learning environments can provide timely, customized advice to support students' problem solving. Because of their strong visual presence, they hold significant promise for substantially increasing students' enjoyment of their learning experiences. A key problem posed by life-like agents that inhabit artificial worlds is deictic believability. In the same manner that humans refer to objects in their environment through judicious combinations of speech, locomotion, and gesture, animated agents should be able to move through their environment, and point to and refer to objects appropriately as they provide problem-solving advice. In this paper we describe a framework for achieving deictic believability in animated agents. A deictic behavior planner exploits a world model and the evolving explanation plan as it selects and coordinates locomotive, gestural, and speech behaviors. The resulting behaviors and utterances are believable, and the references are unambiguous. This approach to spatial deixis has been implemented in a life-like animated agent, Cosmo, who inhabits a learning environment for the domain of Internet packet routing. The product of a large multidisciplinary team of computer scientists, 3D modelers, graphic artists, and animators, Cosmo provides real-time advice to students that is "deictically believable" as they escort packets through a virtual world of interconnected routers.
Assessment of maturity models for smart cities supported by maturity model design principles
The concept of smart cities has gained relevance over the years. City leaders plan investments with the aim of evolving the city towards a smart city. Several models and frameworks, among them maturity models, provide direction for or support such investment decisions. Nevertheless, it is not always clear whether the maturity models developed so far are able to fulfil their proposed objectives. This paper identifies smart city maturity models and assesses them, taking into account an approach based on the design principles framework proposed by Pöppelbuß & Röglinger [1]. The main objective of this paper is to assess the relevance of current maturity models for smart cities, taking into account their purpose. Furthermore, it aims to create awareness of the need for completeness when developing a maturity model.
The many faces of H89: a review.
H89 is marketed as a selective and potent inhibitor of protein kinase A (PKA). Since its discovery, it has been used extensively to evaluate the role of PKA in the heart, osteoblasts, hepatocytes, smooth muscle cells, neuronal tissue, epithelial cells, etc. Despite the frequent use of H89, its mode of specific inhibition of PKA is still not completely understood. It has also been shown that H89 inhibits at least 8 other kinases, while having a relatively large number of PKA-independent effects, which may seriously compromise interpretation of data. Thus, while recognizing its kinase-inhibiting properties, it is advised that H89 should not be used as the single source of evidence of PKA involvement. H89 should be used in conjunction with other PKA inhibitors, such as Rp-cAMPS or PKA analogs.
A Forensic Analysis of the CASIA-Iris V4 Database
The photo response non-uniformity (PRNU) of a digital image sensor can be useful for enhancing a biometric system's security by ensuring the authenticity and integrity of the images acquired with the biometric sensor, i.e. by identifying the image source or detecting possible tampering with the images presented to the biometric system. Passive image forensic techniques have been shown to be suited to these tasks for digital consumer cameras. Previous studies regarding the feasibility of this application using biometric sensors have been conducted for iris and fingerprint sensors by studying the differentiability of the sensors' PRNU fingerprints. The results obtained on the CASIA-Iris V4 database showed a high variation among the various subsets of the database. The researchers assumed that this high variation could either be caused by the highly correlated data or stem from the use of different sensors for the acquisition of the subsets. To investigate the latter issue, a forensic investigation of the CASIA-Iris V4 database has been performed in this thesis, in which an existing forensic technique has been applied and, additionally, several novel forensic techniques have been proposed to detect the presence of images from multiple sensors in the image data sets. Furthermore, the forensic techniques have first been applied to a test data set with a known number of sensors to evaluate their performance. Since there is no specific documentation on the number of sensors used for the acquisition, the investigation of the CASIA-Iris V4 database has been conducted in a blind manner, without any a priori knowledge about the sensors. In addition, different PRNU enhancement approaches have been applied in the investigation to reduce the contamination of the PRNU fingerprints by the image content, which is an issue in this scenario due to the strong correlation of the content among all images under investigation.
Discrepant screening mammography assessments at blinded and non-blinded double reading: impact of arbitration by a third reader on screening outcome
To determine the value of adding a third reader for arbitration of discrepant screening mammography assessments. We included a consecutive series of 84,927 digital screening mammograms, double read in a blinded or non-blinded fashion. Arbitration was retrospectively performed by a third screening radiologist. Two years’ follow-up was performed. Discrepant readings comprised 57.2 % (830/1452) and 29.1 % (346/1188) of recalls at blinded and non-blinded double readings, respectively. At blinded double reading, arbitration would have decreased recall rate (3.4 to 2.2 %, p < 0.001) and programme sensitivity (83.2 to 76.0 %, p = 0.013), would not have influenced the cancer detection rate (CDR; 7.5 to 6.8 per 1,000 screens, p = 0.258) and would have increased the positive predictive value of recall (PPV; 22.3 to 31.2 %, p < 0.001). At non-blinded double reading, arbitration would have decreased recall rate (2.8 to 2.3 %, p < 0.001) and increased PPV (23.2 to 27.5 %, p = 0.021), but would not have affected CDR (6.6 to 6.3 per 1,000 screens, p = 0.604) and programme sensitivity (76.0 to 72.7 %, p = 0.308). Arbitration of discrepant screening mammography assessments is a good tool to improve recall rate and PPV, but is not desirable as it reduces the programme sensitivity at blinded double reading. • Blinded double reading results in higher programme sensitivity than non-blinded reading. • Discrepant readings occur more often at blinded compared to non-blinded reading. • Arbitration of discrepant readings reduces the recall rate and PPV. • Arbitration would reduce the programme sensitivity at blinded double reading.
The scaled unscented transformation
This paper describes a generalisation of the unscented transformation (UT) which allows sigma points to be scaled to an arbitrary dimension. The UT is a method for predicting means and covariances in nonlinear systems. A set of samples is deterministically chosen to match the mean and covariance of a (not necessarily Gaussian-distributed) probability distribution. These samples can be scaled by an arbitrary constant. The method guarantees second-order accuracy in the mean and covariance, giving the same performance as a second-order truncated filter but without the need to calculate any Jacobians or Hessians. The impact of scaling issues is illustrated by considering conversions from polar to Cartesian coordinates with large angular uncertainties.
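As an informal illustration of the transformation summarized above, the following NumPy sketch builds scaled sigma points and propagates them through a nonlinear function; the parameters alpha, beta, kappa and the polar-to-Cartesian toy example follow common scaled-UT conventions and are assumptions, not code from the paper.

```python
import numpy as np

def scaled_sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate scaled sigma points and weights (common scaled-UT formulation)."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # matrix square root
    pts = np.vstack([mean, mean + S.T, mean - S.T])  # 2n + 1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, wm, wc

def unscented_transform(f, mean, cov, **kw):
    """Propagate (mean, cov) through a nonlinear function f without Jacobians."""
    pts, wm, wc = scaled_sigma_points(mean, cov, **kw)
    ys = np.array([f(p) for p in pts])
    y_mean = wm @ ys
    diff = ys - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Toy example in the spirit of the abstract: polar-to-Cartesian conversion
# with a large angular uncertainty.
polar_mean = np.array([1.0, np.pi / 2])                  # range, bearing
polar_cov = np.diag([0.02**2, np.radians(15)**2])
to_cartesian = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
m, P = unscented_transform(to_cartesian, polar_mean, polar_cov)
```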
Bit-pragmatic deep neural network computing
Deep Neural Networks expose a high degree of parallelism, making them amenable to highly data parallel architectures. However, data-parallel architectures often accept inefficiency in individual computations for the sake of overall efficiency. We show that on average, activation values of convolutional layers during inference in modern Deep Convolutional Neural Networks (CNNs) contain 92% zero bits. Processing these zero bits entails ineffectual computations that could be skipped. We propose Pragmatic (PRA), a massively data-parallel architecture that eliminates most of the ineffectual computations on-the-fly, improving performance and energy efficiency compared to state-of-the-art high-performance accelerators [5]. The idea behind PRA is deceptively simple: use serial-parallel shift-and-add multiplication while skipping the zero bits of the serial input. However, a straightforward implementation based on shift-and-add multiplication yields unacceptable area, power and memory access overheads compared to a conventional bit-parallel design. PRA incorporates a set of design decisions to yield a practical, area and energy efficient design. Measurements demonstrate that for convolutional layers, PRA is 4.31X faster than DaDianNao [5] (DaDN) using a 16-bit fixed-point representation. While PRA requires 1.68X more area than DaDN, the performance gains yield a 1.70X increase in energy efficiency in a 65nm technology. With 8-bit quantized activations, PRA is 2.25X faster and 1.31X more energy efficient than an 8-bit version of DaDN.
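As a rough software analogy (not the PRA hardware) of the zero-bit-skipping idea above, the sketch below multiplies a weight by an activation using shift-and-add over only the activation's set bits; the helper name oneffsets is illustrative.

```python
# Multiply a weight by an activation via shift-and-add over the activation's
# non-zero bits only, so zero bits contribute no work. Unsigned fixed-point assumed.

def oneffsets(activation: int):
    """Yield the positions of the non-zero (effectual) bits of the activation."""
    pos = 0
    while activation:
        if activation & 1:
            yield pos
        activation >>= 1
        pos += 1

def pragmatic_multiply(weight: int, activation: int) -> int:
    """Accumulate weight << n for every set bit n of the activation."""
    acc = 0
    for n in oneffsets(activation):
        acc += weight << n          # one shift-and-add per effectual bit
    return acc

assert pragmatic_multiply(13, 0b01000010) == 13 * 0b01000010
# With ~92% zero bits on average (per the abstract), most shift-and-add steps vanish.
```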
Physician well-being: A powerful way to improve the patient experience.
Improving the patient experience, or the patient-centeredness of care, is a key focus of health care organizations. With a shift to reimbursement models that reward higher patient experience scores, increased competition for market share, and cost constraints emphasizing the importance of patient loyalty, many health care organizations consider optimizing the patient experience to be a key strategy for sustaining financial viability. A survey conducted by The Beryl Institute, which supports research on the patient experience, found that hospital executives ranked patient experience as a clear top priority, falling a close second behind quality and safety [1]. According to the institute, engagement with employees, including physicians, is the most effective way to improve the patient experience, as more engaged, satisfied staff provide better service and care to patients [2]. Data on physician satisfaction support this supposition: research has shown that physician career satisfaction closely correlates with patient satisfaction within a geographic region [3]. Given the importance of physician satisfaction to the patient experience, it is concerning that dissatisfaction and burnout are on the rise among physicians. In a 2012 online poll of more than 24,000 physicians across the country, only 54 percent would choose medicine again as a career, down from 69 percent in 2011 [4]. Almost half of the more than 7,000 physicians responding to a recent online survey reported at least one symptom of burnout [5]. Physician dissatisfaction and burnout have a profound negative effect on the patient experience of care. Physician leaders can take steps within their organizations to foster physician well-being, improving the care experience for physicians and patients while strengthening the sustainability of their organizations.
“One Teabag Is Better than Four”: Participants' Response to the Discontinuation of 2% PRO2000/5 Microbicide Gel in KwaZulu-Natal, South Africa
INTRODUCTION The Microbicides Development Programme evaluated the safety and effectiveness of 0.5% and 2% PRO2000/5 microbicide gels in reducing the risk of vaginally acquired HIV. In February 2008 the Independent Data Monitoring Committee recommended that evaluation of 2% PRO2000/5 gel be discontinued due to futility. The Africa Centre site systematically collected participant responses to this discontinuation. METHODS Clinic and field staff completed field reports using ethnographic participant observation techniques. In-depth-interviews and focus group discussions were conducted with participants discontinued from 2% gel. A total of 72 field reports, 12 in-depth-interviews and 3 focus groups with 250 women were completed for this analysis. Retention of discontinued participants was also analysed. Qualitative data was analysed using NVivo 2 and quantitative data using STATA 10.0. RESULTS Participants responded initially with fear that discontinuation was due to harm, followed by acceptance after effective messaging, and finally with disappointment. Participants reported that their initial fear was exacerbated by being contacted and advised to visit the clinic for information about the closure. Operational changes were subsequently made to the contact procedures. By incorporating feedback from participants, messages were continuously revised to ensure that information was comprehensible and misconceptions were addressed quickly thereby enabling participants to accept the discontinuation. Participants were disappointed that 2% PRO2000/5 was being excluded as a HIV prevention option, but also that they would no longer have access to gel that improved their sexual relationships with their partners and assisted condom negotiations. In total 238 women were discontinued from gel and 185 (78%) went on to complete their scheduled follow-up period. DISCUSSION The use of qualitative social science techniques allowed the site team to amend operational procedures and messaging throughout the discontinuation period. This proved instrumental in ensuring that the discontinuation was successfully completed in a manner that was both understandable and acceptable to participants. TRIAL REGISTRATION Current Controlled Trials. ISRCTN64716212.
iMASKO: A Genetic Algorithm Based Optimization Framework for Wireless Sensor Networks
In this paper we present the design and implementation of a generic GA-based optimization framework, iMASKO (iNL@MATLAB Genetic Algorithm-based Sensor NetworK Optimizer), to optimize the performance metrics of wireless sensor networks. Owing to the global search property of genetic algorithms, the framework is able to automatically and quickly fine-tune hundreds of candidate solutions for a given task to find the most suitable tradeoff. We test and evaluate the framework by using it to explore a SystemC-based simulation process and tune the configuration of the unslotted CSMA/CA algorithm of IEEE 802.15.4, aiming to discover the best available tradeoff solutions for the required performance metrics. In particular, different sensor node platforms are investigated in the test cases. A weighted-sum cost function is used to measure the optimization effectiveness and capability of the framework. In addition, another experiment tests the framework's optimization behaviour under multi-scenario and multi-objective conditions.
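The following Python sketch illustrates, in general terms, a GA loop minimizing a weighted-sum cost over candidate CSMA/CA-style configurations; it is not the iMASKO/MATLAB implementation, and the parameter ranges, weights, and placeholder simulator are assumptions.

```python
# Generic GA minimizing a weighted-sum cost over protocol configurations.
import random

PARAM_RANGES = {"macMinBE": (0, 7), "macMaxBE": (3, 8), "maxCSMABackoffs": (0, 5)}
WEIGHTS = {"latency": 0.4, "energy": 0.4, "loss": 0.2}

def evaluate(cfg):
    """Placeholder for the simulator: return per-metric costs for a configuration."""
    return {"latency": cfg["macMinBE"], "energy": cfg["macMaxBE"], "loss": cfg["maxCSMABackoffs"]}

def cost(cfg):
    metrics = evaluate(cfg)
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)      # weighted-sum objective

def random_cfg():
    return {k: random.randint(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in PARAM_RANGES}

def mutate(cfg, rate=0.2):
    return {k: (random.randint(*PARAM_RANGES[k]) if random.random() < rate else v)
            for k, v in cfg.items()}

def run_ga(pop_size=30, generations=50):
    population = [random_cfg() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        parents = population[: pop_size // 2]                 # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=cost)

best = run_ga()
```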
The Demiurge in Ancient Cosmogony
The article begins with a brief survey of the Early Greek cosmogonies of Pherecydes of Syros and of the Orphics. My major concerns are the figure of Chronos and the demiurgic activity of Zeus. Ancient cosmogony is compared with the contemporary theory of time by I. Prigogine, who, not unlike the Ancients and in contrast with the standard cosmological theory of the Big Bang, thinks that Time did not originate with our world and will not end with it. I then examine the Platonic kybernētēs metaphor and the ideas associated with it in Ancient philosophy against the background of a broader literary tradition. The topic is finally illustrated by an unusual Celtic coin struck in Normandy, France, c. 100 BCE, showing a model ship as the victor's prize in a chariot race. The image can be placed in both mythological and historical context. It is fascinating to observe how an unknown artist independently follows the steps of the Greek philosopher in reinterpreting a complicated mythological image in a political sense.
Dietary total antioxidant capacity and mortality in the PREDIMED study.
PURPOSE The aim of the present study was to assess the association between dietary total antioxidant capacity, the dietary intake of different antioxidants, and mortality in a Mediterranean population at high cardiovascular disease risk. METHODS A total of 7,447 subjects from the PREDIMED study (a multicenter, parallel-group, randomized controlled clinical trial) were analyzed, treating the data as an observational cohort. Intakes of different antioxidant vitamins and total dietary antioxidant capacity were calculated from a validated 137-item food frequency questionnaire at baseline and updated yearly. Dietary total antioxidant capacity was estimated using ferric-reducing antioxidant power assays. Deaths were ascertained through contact with families and general practitioners, review of medical records, and consultation of the National Death Index. Cox regression models were fitted to assess the relationship between dietary total antioxidant capacity and mortality. RESULTS A total of 319 deaths were recorded after a median follow-up of 4.3 years. Subjects belonging to the upper quintile of antioxidant capacity were younger, more often ex-smokers, had a higher educational level, were more active, and had a higher alcohol intake. Multivariable-adjusted models revealed no statistically significant association between total dietary antioxidant capacity and mortality (Q5 vs. Q1 ref, HR 0.85; 95% CI 0.60-1.20), nor for the intake of any of the vitamins studied. CONCLUSIONS No statistically significant association was found between antioxidant capacity and total mortality in elderly subjects at high cardiovascular risk.
Reflecting on reflection: framing a design landscape
Designing for reflection is becoming of increasing interest to HCI researchers, especially as digital technologies move to supporting broader professional and quality-of-life issues. However, the term 'reflection' is being used and designed for in diverse ways, often with little reference to the vast amount of literature on the topic outside of HCI. Here we synthesize this literature into a framework, consisting of aspects such as purposes of reflection, conditions for reflection, and levels of reflection (where the levels capture the behaviours and activities associated with reflection). We then show how technologies can support these different aspects and conclude with open questions that can guide a more systematic approach to how we understand and design for support of reflection.
Tell-and-Answer: Towards Explainable Visual Question Answering using Attributes and Captions
Visual Question Answering (VQA) has attracted attention from both the computer vision and natural language processing communities. Most existing approaches adopt the pipeline of representing an image via pre-trained CNNs and then using the uninterpretable CNN features in conjunction with the question to predict the answer. Although such end-to-end models might report promising performance, they rarely provide any insight, apart from the answer, into the VQA process. In this work, we propose to break up end-to-end VQA into two steps: explaining and reasoning, in an attempt towards a more explainable VQA by shedding light on the intermediate results between these two steps. To that end, we first extract attributes and generate descriptions as explanations for an image using pre-trained attribute detectors and image captioning models, respectively. Next, a reasoning module utilizes these explanations in place of the image to infer an answer to the question. The advantages of such a breakdown include: (1) the attributes and captions can reflect what the system extracts from the image and thus can provide some explanation for the predicted answer; (2) these intermediate results can help us identify the inabilities of both the image understanding part and the answer inference part when the predicted answer is wrong. We conduct extensive experiments on a popular VQA dataset and dissect all results according to several measurements of explanation quality. Our system achieves comparable performance with the state-of-the-art, yet with the added benefits of explainability and the inherent ability to further improve with higher-quality explanations.
Xbase: implementing domain-specific languages for Java
Xtext is an open-source framework for implementing external, textual domain-specific languages (DSLs). So far, most DSLs implemented with Xtext and similar tools focus on structural aspects such as service specifications and entities. Because behavioral aspects are significantly more complicated to implement, they are often delegated to general-purpose programming languages. This approach introduces complex integration patterns and the DSL's high level of abstraction is compromised. We present Xbase as part of Xtext, an expression language that can be reused via language inheritance in any DSL implementation based on Xtext. Xbase expressions provide both control structures and program expressions in a uniform way. Xbase is statically typed and tightly integrated with the Java type system. Languages extending Xbase inherit the syntax of a Java-like expression language as well as language infrastructure components, including a parser, an unparser, a linker, a compiler and an interpreter. Furthermore, the framework provides integration into the Eclipse IDE including debug and refactoring support. The application of Xbase is presented by means of a domain model language which serves as a tutorial example and by the implementation of the programming language Xtend. Xtend is a functional and object-oriented general purpose language for the Java Virtual Machine (JVM). It is built on top of Xbase which is the reusable expression language that is the foundation of Xtend.
A Three-Layer Architecture for E-Contract Enforcement in an E-Service Environment
In an e-service environment, contracts are important for attaining business process interoperability and enforcing their proper enactment. An e-contract is the computerized facilitation or automation of a contract in a cross-organizational business process. We find that e-contract enforcement can be divided into multiple layers and perspectives, which has not been adequately addressed in the literature. This problem is challenging as it involves monitoring the enactment of business processes in counterparties outside an organization's boundary. This paper presents an architecture for e-contract enforcement with three layers, viz., the document layer, the business layer, and the implementation layer. In the document layer, contracts are composed of different types of clauses. In the business layer, e-contract enforcement activities are defined through the realization of contract clauses as business rules in event-condition-action (ECA) form. In the implementation layer, cross-organizational e-contract enforcement interfaces are implemented with contemporary Enterprise JavaBeans and Web services. We present a methodology for the engineering of e-contract enforcement from a high-level document view down to the implementation layer based on this architecture, using a supply-chain example. As a result, e-contracts can be seamlessly defined and enforced. Conceptual models of the various layers are given in the Unified Modeling Language (UML).
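A minimal sketch of how a contract clause in the business layer might be expressed as an event-condition-action rule; it is not the paper's EJB/Web-services implementation, and the event fields and the late-delivery clause are illustrative assumptions.

```python
# Representing a contract clause as an event-condition-action (ECA) rule.
from dataclasses import dataclass
from typing import Callable, Dict, List

Event = Dict[str, object]

@dataclass
class ECARule:
    event_type: str
    condition: Callable[[Event], bool]
    action: Callable[[Event], None]

    def fire(self, event: Event) -> None:
        if event["type"] == self.event_type and self.condition(event):
            self.action(event)

def notify_penalty(event: Event) -> None:
    print(f"Clause violated: delivery {event['days_late']} days late, penalty applies.")

# Hypothetical clause: "Goods shall be delivered on time; otherwise a penalty applies."
late_delivery_rule = ECARule(
    event_type="DeliveryReceived",
    condition=lambda e: e["days_late"] > 0,
    action=notify_penalty,
)

rules: List[ECARule] = [late_delivery_rule]
incoming = {"type": "DeliveryReceived", "order_id": "PO-42", "days_late": 3}
for rule in rules:
    rule.fire(incoming)
```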
Effectiveness of a cognitive behavioural therapy-based rehabilitation programme (Progressive Goal Attainment Program) for patients who are work-disabled due to back pain: study protocol for a multicentre randomised controlled trial
BACKGROUND Psychologically informed rehabilitation programmes such as the Progressive Goal Attainment Program (PGAP) have the potential to address pain-related disability by targeting known psychological factors that inhibit rehabilitation progress. However, no randomised controlled trials of this intervention exist and it has not been evaluated in the Irish health service context. Our objective was to evaluate the clinical efficacy and cost-effectiveness of the PGAP in a multicentre randomised controlled trial with patients who are work-disabled due to back pain. METHODS AND DESIGN Adult patients (aged 18 years and older) with nonmalignant back pain who are work-disabled because of chronic pain and not involved in litigation in relation to their pain were invited to take part. Eligible patients showed at least one elevated psychosocial risk factor (above the 50th percentile) on pain disability, fear-based activity avoidance, fatigue, depression or pain catastrophizing. Following screening, patients are randomised equally to the intervention or control condition within each of the seven trial locations. Patients allocated to the control condition receive usual medical care only. Patients allocated to the PGAP intervention condition attend a maximum of 10 weekly individual sessions of structured active rehabilitation in addition to usual care. Sessions are delivered by a clinical psychologist and focus on graded activity, goal-setting, pacing of activity and cognitive-behavioural therapy techniques to address possible barriers to rehabilitation. The primary analysis will be based on the amount of change on the Roland Morris Disability Questionnaire post-treatment. We will also measure changes in work status, pain intensity, catastrophizing, depression, fear avoidance and fatigue. Outcome measures are collected at baseline, post-treatment and 12-month follow-up. Health-related resource use is also collected pre- and post-treatment and at 12-month follow-up to evaluate cost-effectiveness. DISCUSSION This study will be the first randomised controlled trial of the PGAP in chronic pain patients and will provide important information about the clinical and cost-effectiveness of the programme as well as its feasibility in the context of the Irish health service. TRIAL REGISTRATION Current Controlled Trials: ISRCTN61650533.
Supersonic Flow Separation with Application to Rocket Engine Nozzles
The increasing demand for higher performance in rocket launchers promotes the development of nozzles with higher performance, which basically is achieved by increasing the expansion ratio. However, this may lead to flow separation and ensuing unsteady, asymmetric forces, so-called side-loads, which may present life-limiting constraints on both the nozzle itself and other engine components. Substantial gains can be made in engine performance if this problem can be overcome, and hence different methods of separation control have been suggested. However, none has so far been implemented in full scale, due to the uncertainties involved in modeling and predicting the flow phenomena involved. In the present work the causes of unsteady and asymmetric flow separation and the resulting side-loads in rocket engine nozzles are investigated. This involves a combination of analytical, numerical and experimental methods, all of which are presented in the thesis. A main part of the work is based on sub-scale testing of model nozzles operated with air. Hence, aspects of how to design sub-scale models that are able to capture the relevant physics of full-scale rocket engine nozzles are highlighted. Scaling laws like those presented here are indispensable for extracting side-load correlations from sub-scale tests and applying them to full-scale nozzles. Three main types of side-load mechanisms have been observed in the test campaigns, due to: (i) intermittent and random pressure fluctuations, (ii) transition in the separation pattern, and (iii) aeroelastic coupling. All three types are described and exemplified by test results together with analysis. A comprehensive, up-to-date review of supersonic flow separation and side-loads in internal nozzle flows is given, with an in-depth discussion of different approaches for predicting the phenomena. This includes methods for predicting shock-induced separation, models for predicting side-load levels, and aeroelastic coupling effects. Examples are presented to illustrate the status of the various methods, and their advantages and shortcomings are discussed. A major part of the thesis focuses on the fundamental shock-wave/turbulent boundary layer interaction (SWTBLI), and a physical description of the phenomenon is given. This description is based on theoretical concepts, computational results and experimental observation, with emphasis placed on the rocket-engineering perspective. This work connects the industrial development of rocket engine nozzles to fundamental research on the SWTBLI phenomenon and shows how these research results can be utilized in real applications. The thesis is concluded with remarks on active and passive flow control in rocket nozzles and directions for future research. The present work was performed at VAC's Space Propulsion Division within the framework of European space cooperation.
Accurate Modeling and Robust Hovering Control for a Quad-rotor VTOL Aircraft
Quad-rotor type (QRT) unmanned aerial vehicles (UAVs) have been developed for quick detection and observation of circumstances in calamity environments such as indoor fire spots. The UAV is equipped with four propellers, each driven by an electric motor, an embedded controller, an Inertial Navigation System (INS) using three rate gyros and accelerometers, a CCD (Charge Coupled Device) camera with a wireless communication transmitter for observation, and an ultrasonic range sensor for height control. Accurate modeling and robust flight control of QRT UAVs are mainly discussed in this work. A rigorous dynamic model of a QRT UAV is obtained in both the reference and body-frame coordinate systems. A disturbance observer (DOB) based controller using the derived dynamic models is also proposed for robust hovering control. The control input induced by the DOB makes it possible to use simple equations of motion while satisfying the accurately derived dynamics. The developed hovering robot shows stable flying performance with the adoption of the DOB and the vision-based localization method. Even when the model is inaccurate, the DOB method allows a controller to be designed by regarding the inaccurate part of the model and sensor noises as disturbances. The UAV can also avoid obstacles using eight IR (infrared) and four ultrasonic range sensors. This kind of micro UAV can be widely used in various calamity observation fields without endangering human beings in harmful environments. The experimental results show the performance of the proposed control algorithm.
Entropy Production in a Cell and Reversal of Entropy Flow as an Anticancer Therapy
The entropy production rate of a cancer cell is always higher than that of a healthy cell when no external field is applied. The difference in entropy production between the two kinds of cells determines the direction of entropy flow among cells. The entropy flow is the carrier of information flow. The entropy flow from a cancer cell to a healthy cell carries the harmful information of the cancerous cell, propagating its toxic action to healthy tissues. We demonstrate that a low-frequency, low-intensity electromagnetic field or ultrasound irradiation may raise the entropy production rate of a cell in normal tissue above that of a cancer cell, and consequently reverse the direction of the entropy current between the two kinds of cells. Modification of the pH value of cells may also cause a reversal of the direction of entropy flow between healthy and cancerous cells. Thus, biological tissue under irradiation by an electromagnetic field or ultrasound, or under an appropriate change of cell acidity, can avoid the propagation of harmful information from cancer cells. We suggest that this entropy mechanism possibly provides a basis for a novel approach to anticancer therapy. Thermodynamic entropy is expressed by S = k_B ln W (1), where W is the number of microscopic states related to a given macroscopic thermodynamic state and k_B is the Boltzmann constant. Entropy is a measure of disorder. From general physical principles, Schrödinger first indicated that life should remain in a low-entropy state, or that "an organism feeds with negative entropy" [1]. This means that entropy production in an organism is cancelled by the outward entropy flow so that the system remains in a highly ordered state of low entropy. However, from our point of view, negative entropy (or negentropy) is only the first half of the story. The living organism is a chemical engine in which a series of chemical reactions take place one by one in an appropriate sequence. Accordingly, the energy transfer in an organism in the normal state is so efficient that the entropy production is minimized. Minimal entropy production in a healthy cell is the second half of the story [2]. We shall compare qualitatively and demonstrate that the entropy production rate (or "entropy production" for short) of a healthy cell is lower than that of a cancerous cell when there is no external energy input [3-5]. However, when appropriate external energy is applied to tissues, the rate of entropy production of normal cells may exceed that of cancerous cells. As an example, we shall discuss the entropy production of cells under irradiation by ultrasound and alternating electromagnetic fields.
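For reference, the entropy bookkeeping the abstract relies on can be written as follows; this is the standard decomposition from irreversible thermodynamics, not an equation reproduced from the paper.

```latex
% Boltzmann's formula and the standard split of the total entropy change into an
% exchange term (entropy flow) and an internal production term.
\[
  S = k_B \ln W, \qquad
  \frac{dS}{dt} = \frac{d_e S}{dt} + \frac{d_i S}{dt}, \qquad
  \frac{d_i S}{dt} \ge 0 .
\]
% The abstract compares d_i S/dt (entropy production rate) of healthy and cancerous
% cells and discusses the direction of the exchange term d_e S/dt between them.
```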
Rapid and highly specific screening for NPM1 mutations in acute myeloid leukemia
NPM1 mutations, the most frequent molecular alterations in acute myeloid leukemia (AML), have become important for risk stratification and treatment decisions for patients with normal-karyotype AML. Rapid screening for NPM1 mutations should be available shortly after diagnosis. Several methods for detecting NPM1 mutations have been described, most of which are technically challenging and require additional laboratory equipment. We developed and validated an assay that allows specific, rapid, and simple screening for NPM1 mutations. FAST PCR spanning exons 8 to 12 of the NPM1 gene was performed on 284 diagnostic AML samples. PCR products were visualized on a 2% agarose E-gel and verified by direct sequencing. The FAST PCR screening method showed a specificity and sensitivity of 100%, i.e., all mutated cases were detected and none of the negative cases carried mutations. The limit of detection was 5–10% of mutant alleles. We conclude that the FAST PCR assay is a highly specific, rapid (less than 2 h), and sensitive screening method for the detection of NPM1 mutations. Moreover, this method is inexpensive and can easily be integrated into the routine molecular diagnostic work-up of established risk factors in AML using standard laboratory equipment.
Implicit Age Cues in Resumes: Subtle Effects on Hiring Discrimination
As assumed, anonymous resume screening does not prevent age-discriminatory effects. Building on job market signaling theory, this study investigated whether older applicants may benefit from concealing explicitly mentioned age signals on their resumes (date of birth) or whether more implicit/subtle age cues on resumes (older-sounding names, old-fashioned extracurricular activities) may lower older applicants' hirability ratings. An experimental study among 610 HR professionals using a mixed factorial design showed hiring discrimination against older applicants based on implicit age cues in resumes. This effect was more pronounced for older raters. Concealing one's date of birth led to overall lower ratings. The study findings add to the limited knowledge on the effects of implicit age cues on hiring discrimination in resume screening and on the usefulness of anonymous resume screening in the context of age. Implications for research and practice are discussed.
Introduction to Fillers.
BACKGROUND Over the last few years, injectable soft-tissue fillers have become an integral part of cosmetic therapy, with a wide array of products designed to fill lines and folds and revolumize the face. METHODS This review describes cosmetic fillers currently approved by the Food and Drug Administration and discusses new agents under investigation for use in the United States. RESULTS Because of product refinements over the last few years (greater ease of use and longevity, the flexibility of multiple formulations within one line of products, and the ability to reverse poor clinical outcomes), practitioners have gravitated toward the use of biodegradable agents that stimulate neocollagenesis for sustained aesthetic improvements lasting up to a year or more with minimal side effects. Permanent implants provide long-lasting results but are associated with a greater potential risk of complications and require the skilled hand of an experienced injector. CONCLUSIONS A variety of biodegradable and nonbiodegradable filling agents are available or under investigation in the United States. The choice of product depends on injector preference and the area to be filled. Although permanent agents offer significant clinical benefits, modern biodegradable fillers are durable and often reversible in the event of adverse effects.
An unexpected example of protein-templated click chemistry.
Click chemistry is a popular approach to the synthesis of functionalized molecules and emphasizes the use of practical and reliable reactions. Copper(I)-catalyzed azide–alkyne cycloaddition (CuAAC), which selectively produces anti-(1,4)-triazoles in preference to the syn isomer (1,5-triazole), is regarded as a superlative example of click chemistry. The CuAAC reaction can be accelerated by Cu-stabilizing ligands, such as tris[(1-substituted-1H-1,2,3-triazol-4-yl)methyl]amines and tris(2-benzimidazolylmethyl)amines. The catalytic system has received a great deal of use in various fields such as chemical biology and materials science [5]. The 1,3-dipolar cycloaddition of azides with unactivated alkynes occurs much more slowly but is highly chemoselective. This property stimulated the development of "in situ click chemistry" for the field of drug discovery, in which target enzymes are allowed to assemble new inhibitors by linking azides and alkynes that bind to adjacent sites on the protein surface. The linkage reaction does not employ Cu catalysis but instead relies on acceleration of the otherwise sluggish [3+2] cycloaddition reaction when the reaction partners are held in proximity to each other, often in or near the enzyme active site. In the course of an in situ click chemistry study on histone deacetylase (HDAC), we unexpectedly observed acceleration of the AAC reaction by trace copper associated with the protein in a structurally sensitive manner. Herein we report these findings, which constitute the first example of a Cu-protein complex catalyzing the AAC reaction. HDAC inhibitors are attractive drug candidates for cancer, inflammation, and neurodegenerative disorders. As shown in Figure 1, most HDAC inhibitors consist of a
PixelNN: Example-based Image Synthesis
We present a simple nearest-neighbor (NN) approach that synthesizes high-frequency photorealistic images from an "incomplete" signal such as a low-resolution image, a surface normal map, or edges. Current state-of-the-art deep generative models designed for such conditional image synthesis lack two important things: (1) they are unable to generate a large set of diverse outputs, due to the mode collapse problem; (2) they are not interpretable, making it difficult to control the synthesized output. We demonstrate that NN approaches potentially address such limitations, but suffer in accuracy on small datasets. We design a simple pipeline that combines the best of both worlds: the first stage uses a convolutional neural network (CNN) to map the input to an (overly smoothed) image, and the second stage uses a pixel-wise nearest neighbor method to map the smoothed output to multiple high-quality, high-frequency outputs in a controllable manner. Importantly, pixel-wise matching allows our method to compose novel high-frequency content by cutting-and-pasting pixels from different training exemplars. We demonstrate our approach for various input modalities, and for various domains ranging from human faces, pets, shoes, and handbags. [Figure 1 caption: Our approach generates photorealistic output for various "incomplete" signals such as a low-resolution image, a surface normal map, and edges/boundaries for human faces, cats, dogs, shoes, and handbags. Importantly, our approach can easily generate multiple outputs for a given input, which was not possible in previous approaches (Isola et al., 2016) due to the mode-collapse problem.]
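A minimal sketch of the second-stage idea described above, pixel-wise nearest-neighbour compositing from training exemplars; the patch size and the brute-force search are illustrative assumptions, not the paper's pipeline.

```python
# For each pixel of a smoothed query image, copy the output pixel of the most similar
# patch found among (input, target) exemplar pairs. Brute force and slow: a sketch only.
import numpy as np

def pixelwise_nn(smoothed, exemplars_in, exemplars_out, patch=5):
    """smoothed: HxWxC query; exemplars_in/out: lists of HxWxC (input, target) pairs."""
    r = patch // 2
    H, W, C = smoothed.shape
    out = np.zeros((H, W, exemplars_out[0].shape[2]), dtype=exemplars_out[0].dtype)
    pad_q = np.pad(smoothed, ((r, r), (r, r), (0, 0)), mode="edge")
    pads = [(np.pad(ei, ((r, r), (r, r), (0, 0)), mode="edge"), eo)
            for ei, eo in zip(exemplars_in, exemplars_out)]
    for y in range(H):
        for x in range(W):
            q = pad_q[y:y + patch, x:x + patch].ravel()
            best, best_d = None, np.inf
            for pi, eo in pads:                      # brute-force nearest neighbour
                for yy in range(eo.shape[0]):
                    for xx in range(eo.shape[1]):
                        d = np.sum((pi[yy:yy + patch, xx:xx + patch].ravel() - q) ** 2)
                        if d < best_d:
                            best_d, best = d, eo[yy, xx]
            out[y, x] = best                         # cut-and-paste the matched pixel
    return out
```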
An automated method of penetration testing
An automated method of penetration testing is proposed, which is designed to solve problems such as high cost and low efficiency in traditional penetration testing. The method consists of two parts: the automatic generation of the penetration testing scheme and the automatic execution of the penetration testing scheme. We design and implement an original system, named AEPT (Automatic Executing Penetration Testing), which can execute penetration tests automatically. The system integrates the necessary functions of penetration testing and can execute the penetration testing automatically. The experimental results showed that this method can overcome the problems existing in traditional penetration testing and can dramatically improve the efficiency and accuracy of penetration testing, while greatly reducing its cost.
Psychological and Cardiovascular Effects and Short-Term Sequelae of MDMA (“Ecstasy”) in MDMA-Naïve Healthy Volunteers
3,4-methylenedioxymethamphetamine (MDMA, “Ecstasy”) is a recreational drug reported to produce a different psychological profile than that of classic hallucinogens and stimulants. It has, therefore, been tentatively classified into a novel pharmacological class termed entactogens. This double-blind placebo-controlled study examined the effects of a typical recreational dose of MDMA (1.7 mg/kg) in 13 MDMA-naïve healthy volunteers. MDMA produced an affective state of enhanced mood, well-being, and increased emotional sensitiveness, little anxiety, but no hallucinations or panic reactions. Mild depersonalization and derealization phenomena occurred together with moderate thought disorder, first signs of loss of body control, and alterations in the meaning of percepts. Subjects also displayed changes in the sense of space and time, heightened sensory awareness, and increased psychomotor drive. MDMA did not impair selective attention as measured by the Stroop test. MDMA increased blood pressure moderately, with the exception of one subject who showed a transient hypertensive reaction. This severe increase in blood pressure indicates that the hypertensive effects of MDMA, even at recreational doses, should not be underestimated, particularly in subjects with latent cardiovascular problems. Most frequent acute somatic complaints during the MDMA challenge were jaw clenching, lack of appetite, impaired gait, and restless legs. Adverse sequelae during the following 24 hours included lack of energy and appetite, feelings of restlessness, insomnia, jaw clenching, occasional difficulty concentrating, and brooding. The present findings are consistent with the hypothesis that MDMA produces a different psychological profile than classic hallucinogens or psychostimulants.
PREDICTION OF LIGHT GAS DISTRIBUTION IN CONTAINMENT EXPERIMENTAL FACILITIES USING CFX4 CODE: JOZEF STEFAN INSTITUTE EXPERIENCE
Two and three-dimensional simulations of experiments on atmosphere mixing and stratification in a nuclear power plant containment were performed with the code CFX4.4, with the inclusion of simple models for steam condensation. The purpose was to assess the applicability of the approach to simulate the behaviour of light gases in containments at accident conditions. The comparisons of experimental and simulated results show that, despite a tendency to simulate more intensive mixing, the proposed approach may replicate the non-homogeneous structure of the atmosphere reasonably well. Introduction One of the nuclear reactor safety issues that have lately been considered using Computational Fluid Dynamics (CFD) codes is the problem of predicting the eventual non-homogeneous concentration of light flammable gas (hydrogen) in the containment of a nuclear power plant (NPP) at accident conditions. During a hypothetical severe accident in a Pressurized Water Reactor NPP, hydrogen could be generated due to Zircaloy oxidation in the reactor core. Eventual high concentrations of hydrogen in some parts of the containment could cause hydrogen ignition and combustion, which could threaten the containment integrity. The purpose of theoretical investigations is to predict hydrogen behaviour at accident conditions prior to combustion. In the past few years, many investigations about the possible application of CFD codes for this purpose have been started [1-5]. CFD codes solve the transport mass, momentum and energy equations when a fluid system is modelled using local instantaneous description. Some codes, which also use local instantaneous description, have been developed specifically for nuclear applications [68]. Although many CFD codes are multi-purpose, some of them still lack some models, which are necessary for adequate simulations of containment phenomena. In particular, the modelling of steam condensation often has to be incorporated in the codes by the users. These theoretical investigations are complemented by adequate experiments. Recently, the following novel integral experimental facilities have been set up in Europe: TOSQAN [9,10], at the Institut de Radioprotection et de Sureté Nucléaire (IRSN) in Saclay (France), MISTRA [9,11], at the
A Review And Evaluations Of Shortest Path Algorithms
Nowadays, routing in computer networks is based on the shortest path problem, which helps to minimize the overall cost of setting up computer networks. New technologies such as map-related systems also apply the shortest path problem. This paper's main objective is to evaluate Dijkstra's algorithm, the Floyd-Warshall algorithm, the Bellman-Ford algorithm, and a genetic algorithm (GA) in solving the shortest path problem. A short review is given of the various types of shortest path algorithms. Further explanations and implementations of the algorithms are illustrated in graphical form to show how each of the algorithms works. A framework of the GA for finding optimal solutions to the shortest path problem is presented. The paper concludes with the results of evaluating the Dijkstra, Floyd-Warshall and Bellman-Ford algorithms along with their time complexities.
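As a concrete reference point for one of the algorithms reviewed above, here is a minimal heap-based Dijkstra sketch; the example graph is an illustrative assumption.

```python
# Dijkstra's shortest-path algorithm with a binary heap (non-negative edge weights).
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbour, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist                            # shortest distances from the source

graph = {"A": [("B", 4), ("C", 1)], "C": [("B", 2), ("D", 5)], "B": [("D", 1)], "D": []}
assert dijkstra(graph, "A") == {"A": 0, "C": 1, "B": 3, "D": 4}
```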
Predatory Trading ∗
This paper studies predatory trading: trading that induces and/or exploits other investors’ need to reduce their positions. We show that if one trader needs to sell, others also sell and subsequently buy back the asset. This leads to price overshooting and a reduced liquidation value for the distressed trader. Hence, the market is illiquid when liquidity is most needed. Further, a trader profits from triggering another trader’s crisis, and the crisis can spill over across traders and across markets.
Putting Things in Context: Community-specific Embedding Projections for Sentiment Analysis
Variation in language is ubiquitous, and is particularly evident in newer forms of writing such as social media. Fortunately, variation is not random, but is usually linked to social factors. By exploiting linguistic homophily — the tendency of socially linked individuals to use language similarly — it is possible to build models that are more robust to variation. In this paper, we focus on social network communities, which make it possible to generalize sociolinguistic properties from authors in the training set to authors in the test sets, without requiring demographic author metadata. We detect communities via standard graph clustering algorithms, and then exploit these communities by learning community-specific projections of word embeddings. These projections capture shifts in word meaning in different social groups; by modeling them, we are able to improve the overall accuracy of Twitter sentiment analysis by a significant margin over competitive prior work.
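A minimal sketch of the overall idea above, detecting communities in a social graph and giving each community its own projection of shared word embeddings; the graph, dimensions, and random (rather than learned) projections are illustrative assumptions, not the paper's model.

```python
# Community detection plus per-community linear projections of shared word embeddings.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
G = nx.karate_club_graph()                          # stand-in for a follower graph
communities = list(greedy_modularity_communities(G))
community_of = {u: c for c, members in enumerate(communities) for u in members}

d = 50                                              # embedding dimension
vocab = ["sick", "bad", "great"]
base_emb = {w: rng.normal(size=d) for w in vocab}   # shared pre-trained embeddings

# One projection matrix per community (learned in the paper; random perturbations here).
projections = {c: np.eye(d) + 0.01 * rng.normal(size=(d, d))
               for c in range(len(communities))}

def author_word_vector(word, author):
    """Community-specific view of a word for a given author."""
    P = projections[community_of[author]]
    return P @ base_emb[word]

v = author_word_vector("sick", author=5)  # e.g. "sick" can shift meaning by community
```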
Overview of data quality challenges in the context of Big Data
Data quality management systems are thoroughly researched topics and have resulted in many tools and techniques developed by both academia and industry. However, the advent of Big Data might pose some serious questions about the applicability of existing data quality concepts. There is a debate concerning the importance of data quality for Big Data: one school of thought argues that high data quality methods are essential for deriving higher-level analytics, while another school of thought argues that the data quality level will not be so important, as the volume of Big Data would be used to produce patterns and some amount of dirty data will not mask the analytic results that might be derived. This paper aims to investigate the various components and activities forming part of data quality management, such as dimensions, metrics, data quality rules, data profiling and data cleansing. The results list existing challenges and future research areas associated with data quality management for Big Data.
SlangSD: Building and Using a Sentiment Dictionary of Slang Words for Short-Text Sentiment Classification
Sentiment in social media is increasingly considered an important resource for customer segmentation, market understanding, and tackling other socio-economic issues. However, sentiment in social media is difficult to measure since user-generated content is usually short and informal. Although many traditional sentiment analysis methods have been proposed, identifying slang sentiment words remains untackled. One of the reasons is that slang sentiment words are not available in existing dictionaries or sentiment lexicons. To this end, we propose to build the first sentiment dictionary of slang words to aid sentiment analysis of social media content. It is laborious and time-consuming to collect and label the sentiment polarity of a comprehensive list of slang words. We present an approach that leverages web resources to construct an extensive Slang Sentiment word Dictionary (SlangSD) that is easy to maintain and extend. SlangSD is publicly available for research purposes [1]. We empirically show the advantages of using SlangSD, the newly built slang sentiment word dictionary, for sentiment classification, and provide examples demonstrating its ease of use with an existing sentiment system.
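A minimal sketch of how a slang sentiment lexicon such as SlangSD could be used for short-text scoring; the file name, the assumed tab-separated word/score format, and the example text are assumptions, not the actual SlangSD distribution format.

```python
# Lexicon-based sentiment scoring of short, informal text using a slang dictionary.
from typing import Dict

def load_lexicon(path: str) -> Dict[str, float]:
    lex = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            word, score = line.rstrip("\n").split("\t")
            lex[word] = float(score)               # e.g. polarity strength in [-2, 2]
    return lex

def score_text(text: str, lexicon: Dict[str, float]) -> float:
    tokens = text.lower().split()
    hits = [lexicon[t] for t in tokens if t in lexicon]
    return sum(hits) / len(hits) if hits else 0.0  # mean polarity of matched words

# lex = load_lexicon("slangsd.tsv")                # hypothetical local copy
# print(score_text("that show was lit af", lex))   # > 0 -> positive
```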
Rilpivirine versus efavirenz with tenofovir and emtricitabine in treatment-naive adults infected with HIV-1 (ECHO): a phase 3 randomised double-blind active-controlled trial
BACKGROUND Efavirenz with tenofovir-disoproxil-fumarate and emtricitabine is a preferred antiretroviral regimen for treatment-naive patients infected with HIV-1. Rilpivirine, a new non-nucleoside reverse transcriptase inhibitor, has shown similar antiviral efficacy to efavirenz in a phase 2b trial with two nucleoside/nucleotide reverse transcriptase inhibitors. We aimed to assess the efficacy, safety, and tolerability of rilpivirine versus efavirenz, each combined with tenofovir-disoproxil-fumarate and emtricitabine. METHODS We did a phase 3, randomised, double-blind, double-dummy, active-controlled trial, in patients infected with HIV-1 who were treatment-naive. The patients were aged 18 years or older with a plasma viral load at screening of 5000 copies per mL or greater, and viral sensitivity to all study drugs. Our trial was done at 112 sites across 21 countries. Patients were randomly assigned by a computer-generated interactive web response system to receive either once-daily 25 mg rilpivirine or once-daily 600 mg efavirenz, each with tenofovir-disoproxil-fumarate and emtricitabine. Our primary objective was to show non-inferiority (12% margin) of rilpivirine to efavirenz in terms of the percentage of patients with confirmed response (viral load <50 copies per mL, intention-to-treat time-to-loss-of-virological-response [ITT-TLOVR] algorithm) at week 48. Our primary analysis was by intention-to-treat. We also used logistic regression to adjust for baseline viral load. This trial is registered with ClinicalTrials.gov, number NCT00540449. FINDINGS 346 patients were randomly assigned to receive rilpivirine and 344 to receive efavirenz and received at least one dose of study drug, with 287 (83%) and 285 (83%) in the respective groups having a confirmed response at week 48. The point estimate from a logistic regression model for the percentage difference in response was -0.4 (95% CI -5.9 to 5.2), confirming non-inferiority with a 12% margin (primary endpoint). The incidence of virological failures was 13% (rilpivirine) versus 6% (efavirenz; 11% vs 4% by ITT-TLOVR). Grade 2-4 adverse events (55 [16%] on rilpivirine vs 108 [31%] on efavirenz, p<0.0001), discontinuations due to adverse events (eight [2%] on rilpivirine vs 27 [8%] on efavirenz), rash, dizziness, and abnormal dreams or nightmares were more common with efavirenz. Increases in plasma lipids were significantly lower with rilpivirine. INTERPRETATION Rilpivirine showed non-inferior efficacy compared with efavirenz, with a higher virological-failure rate, but a more favourable safety and tolerability profile. FUNDING Tibotec.
Vertical Mammaplasty for Gigantomastia
A 48-year-old female patient presented with gigantomastia. The sternal notch-nipple distance was 55 cm for the right breast and 50 cm for the left. Vertical mammaplasty based on the superior pedicle was performed. The resected tissue weighed 3400 g for the right breast and 2800 g for the left breast. The outcome was excellent with respect to symmetry, shape, size, residual scars, and sensitivity of the nipple-areola complex. Longer pedicles or larger resections were not found in the literature on vertical mammaplasty applications. In our opinion, by using the vertical mammaplasty technique in gigantomastia it is possible to achieve a well-projecting shape and preserve NAC sensitivity.
Topological Spatial Verification for Instance Search
This paper proposes an elastic spatial verification method for Instance Search, particularly for dealing with non-planar and non-rigid queries exhibiting complex spatial transformations. Different from existing models that map keypoints between images based on a linear transformation (e.g., affine, homography), our model exploits the topological arrangement of keypoints to address the non-linear spatial transformations that are extremely common in real life situations. In particular, we propose a novel technique to elastically verify the topological spatial consistency with the triangulated graph through a “sketch-and-match” scheme. The spatial topology configuration, emphasizing relative positioning rather than absolute coordinates, is first sketched by a triangulated graph, whose edges essentially capture the topological layout of the corresponding keypoints. Next, the spatial consistency is efficiently estimated as the number of common edges between the triangulated graphs. Compared to the existing methods, our technique is much more effective in modeling the complex spatial transformations of non-planar and non-rigid instances, while being compatible to instances with simple linear transformations. Moreover, our method is by nature more robust in spatial verification by considering the locations, rather than the local geometry of keypoints, which are sensitive to motions and viewpoint changes. We evaluate our method extensively on three years of TRECVID datasets, as well as our own dataset MQA, showing large improvement over other methods for the task of Instance Search.
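A minimal sketch of the "sketch-and-match" scheme described above: triangulate the matched keypoint locations in both images and count the common edges, identified by match indices rather than coordinates; the keypoint arrays are illustrative.

```python
# Topological spatial consistency via Delaunay triangulation of matched keypoints.
import numpy as np
from scipy.spatial import Delaunay

def triangulation_edges(points: np.ndarray) -> set:
    """Undirected edge set {(i, j)} of the Delaunay triangulation of Nx2 points."""
    tri = Delaunay(points)
    edges = set()
    for a, b, c in tri.simplices:
        for i, j in ((a, b), (b, c), (a, c)):
            edges.add((min(i, j), max(i, j)))
    return edges

def topological_consistency(query_pts: np.ndarray, ref_pts: np.ndarray) -> int:
    """Number of edges shared by the two triangulations over corresponding matches."""
    return len(triangulation_edges(query_pts) & triangulation_edges(ref_pts))

rng = np.random.default_rng(1)
q = rng.random((20, 2))
r = q * 2.0 + 0.1 * rng.random((20, 2))   # perturbed layout of the same matches
score = topological_consistency(q, r)     # higher score = more consistent topology
```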
ApLeaf: An efficient android-based plant leaf identification system
Automatically identifying plant species is very useful for ecologists, amateur botanists, educators, and others. Leafsnap is the first successful mobile application system that tackles this problem. However, Leafsnap is based on the iOS platform and, to the best of our knowledge, Android is a more popular mobile operating system than iOS. In this paper, an Android-based mobile application designed to automatically identify plant species from photographs of tree leaves is described. In this application, a leaf image can be either a digital image from an existing leaf image database or a picture taken with a camera. The picture should show a single leaf placed on a light, untextured background without other clutter. The identification process consists of three steps: leaf image segmentation, feature extraction, and species identification. The demo system is evaluated on the ImageCLEF2012 Plant Identification database, which contains 126 tree species from the French Mediterranean area. The outputs of the system to users are the top several species that match the query leaf image best, as well as textual descriptions and additional images of plant leaves, flowers, etc. Our system works well, with state-of-the-art identification performance.
Analysis of Obstacle-Climbing Capability of Planetary Exploration Rover with Rocker-Bogie Structure
Wheeled mobile robots are increasingly being utilized in unknown and dangerous situations such as planetary surface exploration. Based on force analysis of the differential joints and of the wheel-ground contacts, this paper establishes a quasi-static mathematical model of the six-wheel mobile system of a planetary exploration rover with a rocker-bogie structure. Taking the constraint conditions into account, the obstacle-climbing capability of the mobile mechanism is analyzed by finding the feasible region of the wheels' friction-force solution space. For the same obstacle heights and wheel-ground contact angles, single-side forward obstacle climbing is simulated for each wheel; the results show that the rear wheel has the best obstacle-climbing capability, the middle wheel the worst, and the front wheel is intermediate.
3D reconstruction based on SIFT and Harris feature points
This paper presents a new 3D reconstruction method using feature points extracted by the SIFT and Harris corner detectors. Since SIFT feature points can be detected stably and relatively accurately, the proposed algorithm first uses the SIFT matches to calculate the fundamental matrix. On the other hand, many of the feature points detected by SIFT are not the ones needed for reconstruction, so by combining the SIFT feature points with Harris corners it is possible to obtain more vivid and detailed 3D information. Experiments have been conducted to validate the proposed method.
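The following is a minimal sketch of the first stage of such a method, assuming OpenCV: SIFT keypoints are matched between two views, the fundamental matrix is estimated with RANSAC, and additional Harris corners are collected to densify the reconstruction. The ratio-test threshold and corner counts are illustrative, not taken from the paper.

```python
# Sketch: SIFT matching + RANSAC fundamental matrix, plus Harris corners
# gathered for a denser subsequent reconstruction.
import cv2
import numpy as np

def estimate_fundamental(img1_gray, img2_gray):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1_gray, None)
    kp2, des2 = sift.detectAndCompute(img2_gray, None)

    # Lowe's ratio test on 2-nearest-neighbour matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    keep = inlier_mask.ravel() == 1
    return F, pts1[keep], pts2[keep]

def harris_corners(img_gray, max_corners=2000):
    """Extra corner points to densify the reconstruction (Harris response via goodFeaturesToTrack)."""
    corners = cv2.goodFeaturesToTrack(img_gray, max_corners, qualityLevel=0.01,
                                      minDistance=5, useHarrisDetector=True)
    return corners.reshape(-1, 2)
```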
Using Emotions as Intrinsic Motivation to Accelerate Classic Reinforcement Learning
To address the need for autonomous learning in reinforcement learning (RL), a quantitative emotion-based motivation model is proposed that introduces psychological emotional factors as intrinsic motivation. Curiosity is used to promote or hold back the agent's exploration of unknown states, the happiness index determines the happiness level of the current state-action pair, and control power indicates the agent's ability to control its surrounding environment; together they adjust the agent's learning preferences and behavioral patterns. Two methods are proposed to combine intrinsic emotional motivations with classic RL. The first uses the intrinsic emotional motivations to explore the unknown environment and learn its transition model ahead of time, while the second combines the intrinsic emotional motivations with external rewards into a joint reward function that directly drives the agent's learning. In simulation experiments on a rat-foraging-in-a-maze scenario, both methods achieved relatively good performance compared with classic RL driven purely by external rewards.
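As a hedged illustration of the second method (a joint reward combining intrinsic emotional terms with the external reward), the sketch below adds a curiosity bonus and a simple happiness term to tabular Q-learning; the specific functional forms and weights are assumptions, not the paper's quantitative model.

```python
# Sketch of the joint-reward idea: Q-learning driven by external reward plus
# intrinsic "emotional" terms. The forms of curiosity and happiness and the
# weights below are illustrative assumptions, not the paper's exact model.
import collections
import random

class EmotionalQAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1,
                 w_curiosity=0.5, w_happiness=0.3):
        self.q = collections.defaultdict(float)        # Q[(state, action)]
        self.visits = collections.defaultdict(int)     # visit counts for curiosity
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.w_curiosity, self.w_happiness = w_curiosity, w_happiness

    def intrinsic_reward(self, state, action, external_reward):
        # Curiosity: novelty bonus that decays with visits to (state, action).
        curiosity = 1.0 / (1.0 + self.visits[(state, action)])
        # Happiness: positive when the external outcome is rewarding, mildly negative otherwise.
        happiness = 1.0 if external_reward > 0 else -0.1
        return self.w_curiosity * curiosity + self.w_happiness * happiness

    def choose(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, external_reward, next_state):
        self.visits[(state, action)] += 1
        joint = external_reward + self.intrinsic_reward(state, action, external_reward)
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = joint + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```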
A Comprehensive Study Into Intrabody Communication Measurements
One of the main objectives of research into intrabody communication (IBC) is the characterization of the human body as a transmission medium for electrical signals. However, such characterization is strongly influenced by the conditions under which the experiments are performed. In addition, the outcomes reported in the literature vary according to the measurement method used, frequently making comparisons among them unfeasible. Further studies are still required in order to establish a methodology for IBC characterization and design. In this paper, both galvanic and capacitive coupling setups have been implemented and a comprehensive set of measurements has been carried out by analyzing fundamental IBC parameters such as optimum frequency range, maximum channel length, and type of electrodes, among others. Consequently, practical conclusions regarding the experimental conditions that optimize IBC performance for each coupling technique have been obtained.
IoT based monitoring and control system for home automation
This project proposes an efficient IoT (Internet of Things) implementation for monitoring and controlling home appliances via the World Wide Web. The home automation system uses portable devices as the user interface; they communicate with the home automation network through an Internet gateway, by means of low-power communication protocols such as ZigBee and Wi-Fi. This project aims at controlling home appliances from a smartphone, using Wi-Fi as the communication protocol and a Raspberry Pi as the server. The user interacts directly with the system through a web-based interface, and home appliances such as lights, a fan, and a door lock are remotely controlled through a simple website. An additional feature that improves protection against fire accidents is the ability to detect smoke, so that in the event of a fire an alert message and an image are sent to the smartphone. The server is interfaced with relay hardware circuits that control the appliances in the home; communication with the server allows the user to select the appropriate device, and the server then drives the corresponding relays. If the Internet connection is down or the server is not running, the embedded system board can still manage and operate the appliances locally. In this way we provide a scalable and cost-effective home automation system.
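A minimal sketch of the web-to-relay control path on the Raspberry Pi server is shown below, assuming Flask and the RPi.GPIO library; the GPIO pin numbers, appliance names, and URL routes are illustrative, and the smoke-detection and camera features are not shown.

```python
# Minimal sketch of the web-to-relay control path on the Raspberry Pi server.
# GPIO pins, appliance names, and routes are assumptions for illustration.
from flask import Flask, jsonify
import RPi.GPIO as GPIO

APPLIANCE_PINS = {"light": 17, "fan": 27, "door_lock": 22}   # assumed BCM pins

GPIO.setmode(GPIO.BCM)
for pin in APPLIANCE_PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

app = Flask(__name__)

@app.route("/appliance/<name>/<state>")
def switch(name, state):
    """Drive the relay for the named appliance: /appliance/light/on, /appliance/fan/off, ..."""
    pin = APPLIANCE_PINS.get(name)
    if pin is None:
        return jsonify(error="unknown appliance"), 404
    GPIO.output(pin, GPIO.HIGH if state == "on" else GPIO.LOW)
    return jsonify(appliance=name, state=state)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```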
FeedMe: a collaborative alert filtering system
As the number of alerts generated by collaborative applications grows, users receive more unwanted alerts. FeedMe is a general alert management system based on XML feed protocols such as RSS and ATOM. In addition to traditional rule-based alert filtering, FeedMe uses techniques from machine-learning to infer alert preferences based on user feedback. In this paper, we present and evaluate a new collaborative naïve Bayes filtering algorithm. Using FeedMe, we collected alert ratings from 33 users over 29 days. We used the data to design and verify the accuracy of the filtering algorithm and provide insights into alert prediction.
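The paper's collaborative naïve Bayes algorithm is not reproduced here; the following is only a rough sketch of the general idea, under the assumption that each user's word counts are smoothed with counts pooled from other users. The class names, pooling weight, and bag-of-words features are all illustrative.

```python
# Rough sketch of a collaborative naive Bayes alert filter: each user's model
# is trained on their own ratings, smoothed with ratings pooled from others.
from collections import Counter
import math

class CollaborativeAlertFilter:
    def __init__(self, pool_weight=0.3):
        self.pool_weight = pool_weight
        self.user_counts = {}    # user -> {"wanted": Counter, "unwanted": Counter}
        self.pool_counts = {"wanted": Counter(), "unwanted": Counter()}

    def rate(self, user, alert_text, wanted):
        """Record one user's feedback on an alert."""
        label = "wanted" if wanted else "unwanted"
        words = alert_text.lower().split()
        counts = self.user_counts.setdefault(user, {"wanted": Counter(), "unwanted": Counter()})
        counts[label].update(words)
        self.pool_counts[label].update(words)

    def _word_score(self, user, word, label):
        own = self.user_counts.get(user, {"wanted": Counter(), "unwanted": Counter()})
        count = own[label][word] + self.pool_weight * self.pool_counts[label][word]
        total = sum(own[label].values()) + self.pool_weight * sum(self.pool_counts[label].values())
        return math.log((count + 1.0) / (total + 2.0))     # Laplace smoothing

    def predict_wanted(self, user, alert_text):
        """Return True if the alert is predicted to be wanted by this user."""
        words = alert_text.lower().split()
        score = {label: sum(self._word_score(user, w, label) for w in words)
                 for label in ("wanted", "unwanted")}
        return score["wanted"] > score["unwanted"]
```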
Divergence measures and message passing
This paper presents a unifying view of message-passing algorithms as methods to approximate a complex Bayesian network by a simpler network with minimum information divergence. In this view, the difference between mean-field methods and belief propagation is not the amount of structure they model, but only the measure of loss they minimize ('exclusive' versus 'inclusive' Kullback-Leibler divergence). In each case, message passing arises by minimizing a localized version of the divergence, local to each factor. By examining these divergence measures, we can intuit the types of solution they prefer (symmetry breaking, for example) and their suitability for different tasks. Furthermore, by considering a wider variety of divergence measures (such as alpha-divergences), we can achieve different complexity and performance goals.
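For reference, the two Kullback-Leibler directions contrasted above, and the alpha-divergence family that interpolates between them, can be written as follows (in their standard normalized forms, which may differ in detail from the paper's own conventions for unnormalized distributions):

```latex
% Exclusive vs. inclusive KL, and the alpha-divergence family between them.
\[
  \mathrm{KL}(q \,\|\, p) = \int q(x) \log \frac{q(x)}{p(x)}\, dx
  \qquad \text{(exclusive: mean-field methods)}
\]
\[
  \mathrm{KL}(p \,\|\, q) = \int p(x) \log \frac{p(x)}{q(x)}\, dx
  \qquad \text{(inclusive: belief propagation)}
\]
\[
  D_\alpha(p \,\|\, q) =
  \frac{\int \alpha\, p(x) + (1-\alpha)\, q(x) - p(x)^{\alpha} q(x)^{1-\alpha}\, dx}
       {\alpha (1-\alpha)},
  \qquad
  \lim_{\alpha \to 0} D_\alpha = \mathrm{KL}(q \,\|\, p),
  \quad
  \lim_{\alpha \to 1} D_\alpha = \mathrm{KL}(p \,\|\, q).
\]
```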
Narcissism and Romantic Attraction
A model of narcissism and romantic attraction predicts that narcissists will be attracted to admiring individuals and highly positive individuals and relatively less attracted to individuals who offer the potential for emotional intimacy. Five studies supported this model. Narcissists, compared with nonnarcissists, preferred more self-oriented (i.e., highly positive) and less other-oriented (i.e., caring) qualities in an ideal romantic partner (Study 1). Narcissists were also relatively more attracted to admiring and highly positive hypothetical targets and less attracted to caring targets (Studies 2 and 3). Indeed, narcissists displayed a preference for highly positive-noncaring targets compared with caring but not highly positive targets (Study 4). Finally, mediational analyses demonstrated that narcissists' romantic attraction is, in part, the result of a strategy for enhancing self-esteem (Study 5).
Multigranulation decision-theoretic rough sets
Merkel cell carcinoma.
Merkel cell carcinoma (MCC) is a highly aggressive neuroendocrine carcinoma of the skin. The incidence of this rare tumor is increasing rapidly; the American Cancer Society estimates almost 1500 new cases in the USA for 2008. Thus, the incidence of MCC will exceed that of cutaneous T-cell lymphoma. Moreover, the mortality rate of MCC, at 33%, is considerably higher than that of cutaneous melanoma. These clinical observations are especially disturbing as we are only recently beginning to understand the pathogenesis of MCC. For the same reason, the therapeutic approach is often unclear; reliable data are available only for the therapy of locoregional disease.
Anastomotic leak in colorectal cancer surgery. Development of a diagnostic index (DIACOLE).
BACKGROUND We have developed a diagnostic score (DIACOLE) to detect anastomotic leakage in the postoperative period after colorectal cancer surgery. METHODS A systematic review was conducted to identify symptoms and clinical or analytical signs associated with anastomotic leakage after colorectal cancer surgery, followed by a meta-analysis of each of these factors. The DIACOLE score encompasses all factors that reached statistical significance in their respective meta-analyses. The value of each factor in the score was determined from the Napierian (natural) logarithm of its odds ratio. The index was validated using data collected at our institution. RESULTS We identified 13 potential signs and symptoms of anastomotic leakage for the DIACOLE score. The predictive power of DIACOLE was validated in a case-control study, resulting in an area under the curve (AUC) of 0.911 and its 95% confidence interval; these values were considered indicative of a very good diagnostic score. CONCLUSIONS If the DIACOLE score is >3.065, a blood count and daily re-evaluation of the score are recommended; if DIACOLE is >5.436, a radiological test is advised. We have developed free software to compute the DIACOLE value.
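The 13 factors and their odds ratios are not listed in the abstract, so the sketch below uses placeholder factors and values purely to illustrate how such a score is assembled: each sign present contributes the natural (Napierian) logarithm of its odds ratio, and the total is compared against the reported thresholds of 3.065 and 5.436.

```python
# Sketch of how a DIACOLE-style score could be computed. The factor names and
# odds-ratio values below are placeholders, NOT the 13 factors from the study;
# only the thresholds (3.065 and 5.436) come from the abstract.
import math

EXAMPLE_FACTOR_ODDS_RATIOS = {      # hypothetical values for illustration only
    "fever": 2.5,
    "tachycardia": 3.1,
    "abdominal_pain": 2.0,
    "elevated_crp": 4.2,
}

def diacole_score(present_factors):
    """Sum of ln(OR) over the signs/symptoms observed in the patient."""
    return sum(math.log(EXAMPLE_FACTOR_ODDS_RATIOS[f]) for f in present_factors)

def recommendation(score):
    if score > 5.436:
        return "radiological test advised"
    if score > 3.065:
        return "blood count and daily re-evaluation of the score recommended"
    return "continue routine postoperative monitoring"

score = diacole_score(["fever", "tachycardia", "elevated_crp"])
print(round(score, 3), "->", recommendation(score))
```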
Current-induced polarization and the spin Hall effect at room temperature.
Electrically induced electron spin polarization is imaged in n-type ZnSe epilayers using Kerr rotation spectroscopy. Despite no evidence for an electrically induced internal magnetic field, current-induced in-plane spin polarization is observed with characteristic spin lifetimes that decrease with doping density. The spin Hall effect is also observed, indicated by an electrically induced out-of-plane spin polarization with opposite sign for spins accumulating on opposite edges of the sample. The spin Hall conductivity is estimated as 3 ± 1.5 Ω⁻¹ m⁻¹/|e| at 20 K, which is consistent with the extrinsic mechanism. Both the current-induced spin polarization and the spin Hall effect are observed at temperatures from 10 to 295 K.
Trimming and Improving Skip-thought Vectors
The skip-thought model has been proven effective at learning sentence representations and capturing sentence semantics. In this paper, we propose a suite of techniques to trim and improve it. First, we validate the hypothesis that, given a current sentence, inferring the previous sentence and inferring the next sentence provide similar supervision power; therefore, only one decoder, which predicts the next sentence, is preserved in our trimmed skip-thought model. Second, we present a connection layer between the encoder and the decoder that helps the model generalize better on semantic relatedness tasks. Third, we find that a good word embedding initialization is also essential for learning better sentence representations. We train our model unsupervised on a large corpus of contiguous sentences and then evaluate it on 7 supervised tasks, which include semantic relatedness, paraphrase detection, and text classification benchmarks. We empirically show that our proposed model is a faster, lighter-weight, and equally powerful alternative to the original skip-thought model.
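A minimal PyTorch sketch of the trimmed architecture described above follows: a single GRU encoder, a connection layer, and one GRU decoder that predicts only the next sentence. Layer sizes, the choice of GRU, and the tanh nonlinearity are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of a trimmed skip-thought model: one encoder, a connection
# layer, and a single decoder that predicts only the next sentence.
import torch
import torch.nn as nn

class TrimmedSkipThought(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=1200):
        super().__init__()
        # Word embeddings; per the abstract, a good initialization matters,
        # e.g. copying pretrained vectors into this table before training.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Connection layer between encoder and decoder.
        self.connection = nn.Linear(hidden_dim, hidden_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, current_sentence, next_sentence):
        # Encode the current sentence into a fixed-length vector.
        _, h = self.encoder(self.embedding(current_sentence))          # (1, B, H)
        h = torch.tanh(self.connection(h))
        # Single decoder: predict the tokens of the *next* sentence only.
        dec_out, _ = self.decoder(self.embedding(next_sentence), h)    # (B, T, H)
        return self.output(dec_out)                                    # logits over vocab

# Usage sketch: train with cross-entropy between the logits and the
# next-sentence tokens shifted by one position.
model = TrimmedSkipThought(vocab_size=20000)
cur = torch.randint(0, 20000, (8, 30))     # batch of current sentences (token ids)
nxt = torch.randint(0, 20000, (8, 30))     # corresponding next sentences
logits = model(cur, nxt)                   # shape (8, 30, 20000)
```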