A Double-Blind, Placebo-Controlled Trial of Modafinil for Cocaine Dependence
Despite years of active research, there are still no approved medications for the treatment of cocaine dependence. Modafinil is a glutamate-enhancing agent that blunts cocaine euphoria under controlled conditions, and the current study assessed whether modafinil would improve clinical outcome in cocaine-dependent patients receiving standardized psychosocial treatment. This was a randomized, double-blind, placebo-controlled trial conducted at a university outpatient center (from 2002 to 2003) on a consecutive sample of 62 (predominantly African American) cocaine-dependent patients (aged 25–63) free of significant medical and psychiatric conditions. After screening, eligible patients were randomized to a single morning dose of modafinil (400 mg), or matching placebo tablets, for 8 weeks while receiving manual-guided, twice-weekly cognitive behavioral therapy. The primary efficacy measure was cocaine abstinence based on urine benzoylecgonine (BE) levels. Secondary measures were craving, cocaine withdrawal, retention, and adverse events. Modafinil-treated patients provided significantly more BE-negative urine samples (p=0.03) over the 8-week trial when compared with placebo-treated patients, and were more likely to achieve a protracted period (≥3 weeks) of cocaine abstinence (p=0.05). There were no serious adverse events, and none of the patients failed to complete the study as a result of adverse events. This study provides preliminary evidence, which should be confirmed by a larger study, that modafinil improves clinical outcome when combined with psychosocial treatment for cocaine dependence.
Task-Based Authorization Controls (TBAC): A Family of Models for Active and Enterprise-Oriented Authorization Management
In this paper, we develop a new paradigm for access control and authorization management, called task-based authorization controls (TBAC). TBAC models access controls from a task-oriented perspective rather than the traditional subject-object one. Access mediation now involves authorizations at various points during the completion of tasks in accordance with some application logic. By taking a task-oriented view of access control and authorizations, TBAC lays the foundation for research into a new breed of “active” security models that are required for agent-based distributed computing and workflow management.
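The contrast with subject-object access control can be illustrated with a toy sketch: in the TBAC view, an authorization is bound to a task step and is checked against the workflow state, so it lapses when the step completes. All names and the minimal check below are illustrative assumptions, not the paper's formal model.

```python
from dataclasses import dataclass

# Toy contrast with subject-object ACLs: an authorization is bound to a task
# step and is valid only while that step is in progress, so it expires
# automatically when the workflow advances. All names are illustrative.
@dataclass
class Authorization:
    subject: str
    obj: str
    right: str
    task_step: str

def check(auth: Authorization, active_steps: set) -> bool:
    # access is mediated by task state, not by a static permission matrix
    return auth.task_step in active_steps

active = {"approve_invoice"}
a = Authorization("clerk", "invoice-42", "sign", "approve_invoice")
granted_now = check(a, active)        # valid while the step is active
active.discard("approve_invoice")     # workflow step completes
granted_later = check(a, active)      # the authorization has lapsed
```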
Unicondylar and bicondylar surface replacement of the knee
The objective of patient-specific instrumentation (PSI Zimmer®) technology is to optimize positioning and selection of components as well as the surgical procedure in uni- and bicompartmental knee replacement.
The article contains a description of the planning and surgical technique and evaluates the method based on our own results and the literature. Using MRI or CT scans, a virtual 3D model of the joint is created in order to simulate and plan the implant positioning. From these data, pin placement and/or cutting guides are produced, which enable the surgeon to transfer the planning to the surgical procedure. In a prospective comparative study, 88 patients (44 in each of the two groups) were operated on by one surgeon and received the same TKA using either MRI-based PSI or a conventional technique. The number of surgical trays, operating time, intraoperative changes, and frontal alignment on full-leg x-rays (70 cases) were compared. In 17 patients the method was applied to unicondylar knee replacement. Anatomical abnormalities could be detected preoperatively and taken into account during the operation. With PSI the number of trays could be reduced, and prediction of the component size was more precise. Intraoperative changes became necessary only for the distal femoral (25 %) and proximal tibial (36 %) resections and the tibial rotation (40 %). Alignment was more precise in the PSI cases. PSI using the applied technique proved to be practicable and reliable, and the advantages of precise planning became obvious. Results concerning alignment are inconsistent in the literature, and soft tissue balancing has so far been included in the technique only to a limited degree. PSI is still at an early stage of development, and further development opportunities should be exploited before a final assessment.
Functional magnetic resonance imaging (fMRI) “brain reading”: detecting and classifying distributed patterns of fMRI activity in human visual cortex
Traditional (univariate) analysis of functional MRI (fMRI) data relies exclusively on the information contained in the time course of individual voxels. Multivariate analyses can take advantage of the information contained in activity patterns across space, from multiple voxels. Such analyses have the potential to greatly expand the amount of information extracted from fMRI data sets. In the present study, multivariate statistical pattern recognition methods, including linear discriminant analysis and support vector machines, were used to classify patterns of fMRI activation evoked by the visual presentation of various categories of objects. Classifiers were trained using data from voxels in predefined regions of interest during a subset of trials for each subject individually. Classification of subsequently collected fMRI data was attempted according to the similarity of activation patterns to prior training examples. Classification was done using only small amounts of data (20 s worth) at a time, so such a technique could, in principle, be used to extract information about a subject's percept on a near real-time basis. Classifiers trained on data acquired during one session were equally accurate in classifying data collected within the same session and across sessions separated by more than a week, in the same subject. Although the highest classification accuracies were obtained using patterns of activity including lower visual areas as input, classification accuracies well above chance were achieved using regions of interest restricted to higher-order object-selective visual areas. In contrast to typical fMRI data analysis, in which hours of data across many subjects are averaged to reveal slight differences in activation, the use of pattern recognition methods allows a subtle 10-way discrimination to be performed on an essentially trial-by-trial basis within individuals, demonstrating that fMRI data contain far more information than is typically appreciated.
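The train-then-classify pipeline described above can be sketched on synthetic data. As a stand-in for the study's linear discriminant analysis and support vector machines, the sketch uses a simple nearest-class-mean classifier on made-up "voxel patterns"; the dimensions, noise levels, and category structure are all assumptions for illustration.

```python
import numpy as np

# Toy illustration of multivariate pattern classification: learn one mean
# activation pattern per category from training trials, then assign new
# patterns to the nearest category mean. All data here are synthetic.
rng = np.random.default_rng(0)
n_categories, n_voxels, n_train, n_test = 10, 50, 20, 5

prototypes = rng.normal(0, 1, (n_categories, n_voxels))  # per-category pattern

def sample(n_per_class):
    X = np.vstack([p + rng.normal(0, 0.5, (n_per_class, n_voxels))
                   for p in prototypes])
    y = np.repeat(np.arange(n_categories), n_per_class)
    return X, y

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)

# "training": estimate one mean activation pattern per category
means = np.stack([X_tr[y_tr == c].mean(axis=0) for c in range(n_categories)])
# "testing": classify each new pattern by similarity to the training means
pred = np.argmin(((X_te[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y_te).mean()      # chance level for 10 classes is 0.10
```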
Ipriflavone in the treatment of postmenopausal osteoporosis: a randomized controlled trial.
CONTEXT Data on the efficacy and safety of ipriflavone for prevention of postmenopausal bone loss are conflicting. OBJECTIVES To investigate the effect of oral ipriflavone on prevention of postmenopausal bone loss and to assess the safety profile of long-term treatment with ipriflavone in postmenopausal osteoporotic women. DESIGN AND SETTING Prospective, randomized, double-blind, placebo-controlled, 4-year study conducted in 4 centers in Belgium, Denmark, and Italy from August 1994 to July 1998. PARTICIPANTS Four hundred seventy-four postmenopausal white women, aged 45 to 75 years, with bone mineral densities (BMDs) of less than 0.86 g/cm². INTERVENTIONS Patients were randomly assigned to receive ipriflavone, 200 mg 3 times per day (n = 234), or placebo (n = 240); all received 500 mg/d of calcium. MAIN OUTCOME MEASURES Efficacy measures included spine, hip, and forearm BMD and biochemical markers of bone resorption (urinary hydroxyproline corrected for creatinine and urinary CrossLaps [Osteometer Biotech, Herlev, Denmark] corrected for creatinine), assessed every 6 months. Laboratory safety measures and adverse events were recorded every 3 months. RESULTS Based on intent-to-treat analysis, after 36 months of treatment, the annual percentage change from baseline in BMD of the lumbar spine for ipriflavone vs placebo (0.1% [95% confidence interval (CI), -7.9% to 8.1%] vs 0.8% [95% CI, -9.1% to 10.7%]; P = .14), or in any of the other sites measured, did not differ significantly between groups. The response in biochemical markers was also similar between groups (eg, for hydroxyproline corrected for creatinine, 20.13 mg/g [95% CI, 18.85-21.41 mg/g] vs 20.67 mg/g [95% CI, 19.41-21.92 mg/g], P = .96; for urinary CrossLaps corrected for creatinine, 268 mg/mol [95% CI, 249-288 mg/mol] vs 268 mg/mol [95% CI, 254-282 mg/mol], P = .81). The number of women with new vertebral fractures was identical or nearly so in the 2 groups at all time points.
Lymphocyte concentrations decreased significantly (500/μL [0.5 × 10⁹/L]) in women treated with ipriflavone. Thirty-one women (13.2%) in the ipriflavone group developed subclinical lymphocytopenia, of whom 29 developed it during ipriflavone treatment. Of these, 15 (52%) of 29 had recovered spontaneously by 1 year and 22 (81%) of 29 by 2 years. CONCLUSIONS Our data indicate that ipriflavone does not prevent bone loss or affect biochemical markers of bone metabolism. Additionally, ipriflavone induces lymphocytopenia in a significant number of women.
Matisse: An Architectural Design Tool for Commodity ICs
To accelerate industrial adoption of behavioral synthesis, we have developed Matisse, an architectural design tool that increases productivity without sacrificing area, performance, or power. Matisse's main difference from traditional behavioral synthesis tools is that it lets the designer play a key role. It allows the designer to make major decisions about styles, protocols, parallelism, delays, and partial or even complete architectures before the behavioral synthesis phase starts. Then it enables the designer to incorporate these decisions into the architecture using behavioral synthesis. Matisse supports the diverse design practices required for commodity IC design by giving the designer fine-grain control of behavioral synthesis tasks.
An interior-point stochastic approximation method and an L1-regularized delta rule
The stochastic approximation method is behind the solution to many important, actively studied problems in machine learning. Despite its far-reaching application, there is almost no work on applying stochastic approximation to learning problems with general constraints. The reason for this, we hypothesize, is that no robust, widely applicable stochastic approximation method exists for handling such problems. We propose that interior-point methods are a natural solution. We establish the stability of a stochastic interior-point approximation method both analytically and empirically, and demonstrate its utility by deriving an on-line learning algorithm that also performs feature selection via L1 regularization.
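The flavor of an on-line L1-regularized delta rule can be sketched on synthetic sparse-regression data. Note the substitution: the paper handles the constraint with an interior-point scheme, whereas the sketch below uses plain proximal (soft-thresholding) stochastic gradient steps; the data, step size, and regularization weight are all assumptions for illustration.

```python
import numpy as np

# Simplified stand-in for the paper's method: the on-line LMS/delta-rule
# gradient combined with an L1 proximal (soft-thresholding) step, which drives
# weights on irrelevant features toward exactly zero. Synthetic data.
def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

rng = np.random.default_rng(1)
n, d = 500, 20
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]                 # only 3 relevant features
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
lam, lr = 0.05, 0.01                          # illustrative hyperparameters
for epoch in range(30):
    for i in rng.permutation(n):
        grad = (X[i] @ w - y[i]) * X[i]       # delta-rule gradient, one sample
        w = soft_threshold(w - lr * grad, lr * lam)   # proximal L1 step
# w now approximates w_true, with the 17 irrelevant weights shrunk to ~0
```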
Patterned genital injury in cases of rape--a case-control study.
A pattern of genital injury that separates trauma seen in sexual assault cases from trauma seen following consensual sexual intercourse has been a matter of debate. This study aimed at clarifying the question by eliminating as many confounders as possible in a prospective, case-control setup. A total of 98 controls and 39 cases were examined using the naked eye, the colposcope and toluidine blue dye followed by colposcopy. The overall frequency of having at least one lesion was strikingly similar in the two groups, but cases had significantly more abrasions, a trend towards more haematomas and a higher frequency of multiple lesions. Cases had a higher frequency of lesions in locations other than the 6 o'clock position. Our data suggest that cases have larger, more complex lesions. In conclusion, this study has confirmed the existence of different patterns of genital lesions. Background data for detection of genital lesions using the three most commonly used techniques is provided. These results will aid in the interpretation of findings seen when examining sexual assault victims.
Neural EEG-Speech Models
In this paper, we describe three neural network (NN) based EEG-Speech (NES) models that map unspoken EEG signals to the corresponding phonemes. Instead of using conventional feature extraction techniques, the proposed NES models rely on graphic learning to project both EEG and speech signals into deep representation feature spaces. This NN-based linear projection helps to realize multimodal data fusion (i.e., of EEG and acoustic signals) and makes it convenient to construct the mapping between unspoken EEG signals and phonemes. Specifically, among the three NES models, the two augmented models (IANES-B and IANES-G) include spoken EEG signals as either bias or gate information to strengthen the feature learning and translation of unspoken EEG signals. A combined unsupervised and supervised training is implemented stepwise to learn the mapping for all three NES models. To enhance computational performance, a three-way factored NN training technique is applied to the IANES-G model. Unlike many existing methods, our augmented NES models incorporate spoken-EEG signals, which can efficiently suppress the artifacts in unspoken-EEG signals. Experimental results reveal that all three proposed NES models outperform the baseline SVM method, with IANES-G demonstrating the best performance on the speech recovery and classification tasks.
Exposure to virtual social interactions in the treatment of social anxiety disorder: A randomized controlled trial.
This randomized controlled trial investigated the efficacy of a stand-alone virtual reality exposure intervention comprising verbal interaction with virtual humans to target heterogeneous social fears in participants with social anxiety disorder. Sixty participants (Mage = 36.9 years; 63.3% women) diagnosed with social anxiety disorder were randomly assigned to individual virtual reality exposure therapy (VRET), individual in vivo exposure therapy (iVET), or a waiting list. Multilevel regression analyses revealed that both treatment groups improved from pre- to post-assessment on social anxiety symptoms, speech duration, perceived stress, and avoidant personality disorder related beliefs when compared with the waiting list. Participants receiving iVET, but not VRET, improved on fear of negative evaluation, speech performance, general anxiety, depression, and quality of life relative to those on the waiting list. The iVET condition was further superior to the VRET condition regarding decreases in social anxiety symptoms at post- and follow-up assessments, and avoidant personality disorder related beliefs at follow-up. At follow-up, all improvements were significant for iVET. For VRET, only the effect for perceived stress was significant. VRET containing extensive verbal interaction without any cognitive components can effectively reduce complaints of generalized social anxiety disorder. Future technological and psychological improvements of virtual social interactions might further enhance the efficacy of VRET for social anxiety disorder.
Brain Storm Optimization Algorithm
Humans are the most intelligent animals in this world. Intuitively, an optimization algorithm inspired by the human creative problem-solving process should be superior to optimization algorithms inspired by the collective behavior of insects such as ants and bees. In this paper, we introduce a novel brain storm optimization algorithm, inspired by the human brainstorming process. Two benchmark functions were tested to validate the effectiveness and usefulness of the proposed algorithm.
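The brainstorming metaphor can be sketched as a population of candidate "ideas" that are perturbed and kept when they improve. This is a much-simplified illustration: the full algorithm also clusters ideas and recombines cluster centers, which is omitted here, and all constants and the benchmark function are assumptions.

```python
import numpy as np

# Much-simplified sketch of the brainstorming loop: perturb a randomly chosen
# idea with Gaussian noise whose scale decays over time, and keep the new idea
# if it scores better. (Clustering of ideas is omitted; constants illustrative.)
rng = np.random.default_rng(2)

def sphere(x):                        # benchmark objective, minimum 0 at origin
    return float(np.sum(x ** 2))

pop = rng.uniform(-5, 5, (20, 3))     # 20 ideas in 3 dimensions
fitness = np.array([sphere(p) for p in pop])
init_best = float(fitness.min())

n_iter = 2000
for it in range(n_iter):
    i = rng.integers(len(pop))
    step = rng.normal(0.0, 1.0 * (1 - it / n_iter), 3)  # shrinking noise
    cand = pop[i] + step
    f = sphere(cand)
    if f < fitness[i]:                # greedy replacement of the parent idea
        pop[i], fitness[i] = cand, f

best = float(fitness.min())           # should improve on init_best
```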
On winding design of a high performance ferrite motor for traction application
The design of low-cost traction motors with ferrite magnets must meet challenges such as minimizing the risk of demagnetization and maximizing torque and power density via a suitable choice of rotor and stator winding topology and parameters. With regard to the stator, distributed and concentrated windings each have advantages and disadvantages when considering manufacturing cost, slot fill factor, the contribution of reluctance torque, and parasitic effects. Furthermore, the trend toward high-speed operation of traction motors may increase AC loss effects in the windings, reducing motor efficiency and raising the risk of thermal failure. In this paper, the performance of a high-speed ferrite motor with a distributed and with a concentrated wound stator is assessed with regard to torque and power performance as well as AC loss effects. The dynamic performance of a full-scale prototype design based on a distributed aluminum wound stator is presented.
Lifetime Estimation of the Nanophosphate LiFePO4/C Battery Chemistry Used in Fully Electric Vehicles
There are currently many different lithium ion (Li-ion) chemistries available on the market, and several new players are in the research and development process; however, none of them is superior to the other chemistries in all aspects. Relatively low price, long cycle and calendar lifetime, and intrinsic safety of the nanophosphate LiFePO4/C Li-ion chemistry make it possible to consider this chemistry for electric vehicle (EV) applications. This paper investigates the lifetime of the nanophosphate LiFePO4/C battery chemistry when it is used in fully electric vehicles. The investigation is performed considering a semiempirical calendar and cycle lifetime model, which was developed based on extended accelerated lifetime tests. Both capacity and power capability degradations during calendar and cycle life aging are considered and quantified. Finally, the developed battery cell lifetime model is used to study the capacity and power capability degradation behavior of the tested nanophosphate LiFePO4/C battery for two EV operational scenarios.
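Semiempirical lifetime models of this kind typically express capacity loss as a function of charge throughput and temperature. The sketch below shows one generic functional form often fitted from accelerated tests (power law in throughput, Arrhenius factor in temperature); the coefficients are illustrative placeholders, not the parameters identified in the paper.

```python
import math

# Generic semi-empirical capacity-fade law of the kind fitted from accelerated
# lifetime tests: loss grows with charge throughput (Ah) via a power law and
# with cell temperature via an Arrhenius factor. All coefficients below are
# illustrative placeholders, not the paper's identified model parameters.
def capacity_loss_pct(ah_throughput, temp_k, B=31000.0, Ea=31500.0, z=0.55):
    R = 8.314                         # gas constant, J/(mol*K)
    return B * math.exp(-Ea / (R * temp_k)) * ah_throughput ** z

loss_25c = capacity_loss_pct(10_000, 298.15)   # fade after 10,000 Ah at 25 C
loss_45c = capacity_loss_pct(10_000, 318.15)   # same throughput, hotter cell
```

A form like this makes the two accelerating stresses explicit: the same throughput costs more capacity at higher temperature, which is why accelerated tests sweep temperature to identify `Ea`.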
Integrating Natural Language Processing and Software Engineering
This paper surveys the various ways in which Natural Language Processing (NLP) and Software Engineering (SE) can be seen as interdisciplinary research areas. We review the current literature with the aim of assessing the use of Software Engineering and Natural Language Processing tools in the research undertaken. An assessment of how the various phases of the SDLC can employ NLP techniques is presented. The paper also provides justification for the use of text in automating or combining these two areas. A short discussion of research directions for undertaking multidisciplinary research is also provided.
Home automation using GSM
In recent years, the home environment has seen a rapid introduction of network-enabled digital technology. This technology offers new and exciting opportunities to increase the connectivity of devices within the home for the purpose of home automation. Moreover, with the rapid expansion of the Internet, there is the added potential for the remote control and monitoring of such network-enabled devices. However, the adoption of home automation systems has been slow. This paper identifies the reasons for this slow adoption and evaluates the potential of ZigBee for addressing these problems through the design and implementation of a flexible home automation architecture. Device control is part of everyday life: a home usually contains a number of devices, and efficient control of these systems is a tedious task. Rapidly advancing mobile communication technology and decreasing costs make it possible to incorporate mobile technology into home automation systems.
Propositional Representation of Arithmetic Proofs (Preliminary Version)
Equations f ≡ g between polynomial-time computable functions can be represented by sets of propositional formulas. If f ≡ g is provable in certain arithmetic systems, then polynomial-length proofs of the representing formulas exist in certain propositional systems. Two cases of this phenomenon and a general theory are given.
LoSt: location based storage
For certain types of sensitive data (such as health records) it is important to know the geographic location of the file, e.g. that it is stored on servers within the USA. This is particularly important for determining applicable laws and regulations. In this paper we discuss the problem of verifying the location of files within distributed file storage systems such as the cloud. We consider a general setup for a distributed storage system and show that verifying location is impossible when such a system is fully malicious. We then make plausible assumptions about the behavior of the system and provide a formal definition for Proofs of Location (PoL) in our setting. We show that secure and efficient PoL schemes can be constructed by using a geolocation scheme together with a Proof of Retrievability (PoR) scheme with a new added property that we call re-coding, which is of independent interest.
Metabolic profile of high intensity intermittent exercises.
To evaluate the magnitude of the stress on the aerobic and the anaerobic energy release systems during high intensity bicycle training, two commonly used protocols (IE1 and IE2) were examined during bicycling. IE1 consisted of one set of 6-7 bouts of 20-s exercise at an intensity of approximately 170% of the subject's maximal oxygen uptake (VO2max) with a 10-s rest between each bout. IE2 involved one set of 4-5 bouts of 30-s exercise at an intensity of approximately 200% of the subject's VO2max and a 2-min rest between each bout. The accumulated oxygen deficit of IE1 (69 +/- 8 ml.kg-1, mean +/- SD) was significantly higher than that of IE2 (46 +/- 12 ml.kg-1, N = 9, p < 0.01). The accumulated oxygen deficit of IE1 was not significantly different from the maximal accumulated oxygen deficit (the anaerobic capacity) of the subjects (69 +/- 10 ml.kg-1), whereas the corresponding value for IE2 was less than the subjects' maximal accumulated oxygen deficit (P < 0.01). The peak oxygen uptake during the last 10 s of the IE1 (55 +/- 6 ml.kg-1.min-1) was not significantly less than the VO2max of the subjects (57 +/- 6 ml.kg-1.min-1). The peak oxygen uptake during the last 10 s of IE2 (47 +/- 8 ml.kg-1.min-1) was lower than the VO2max (P < 0.01). In conclusion, this study showed that intermittent exercise defined by the IE1 protocol may tax both the anaerobic and aerobic energy releasing systems almost maximally.
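The accumulated oxygen deficit reported above is, in essence, accumulated O2 demand minus accumulated measured O2 uptake, with demand extrapolated from the supra-maximal work rate. The arithmetic can be sketched as follows; every value below is an illustrative assumption, not one of the study's measurements.

```python
# Accumulated oxygen deficit = accumulated O2 demand (extrapolated from the
# supra-maximal work rate) minus accumulated measured O2 uptake.
# All values below are illustrative assumptions, not the study's data.
vo2max = 57.0                      # ml/kg/min, an example subject
demand_rate = 1.70 * vo2max        # IE1 is performed at ~170% of VO2max
measured_uptake = 40.0             # assumed mean uptake during the bouts
bouts, bout_s = 7, 20              # IE1: up to 7 bouts of 20 s each
exercise_min = bouts * bout_s / 60 # total supra-maximal exercise time
deficit = exercise_min * (demand_rate - measured_uptake)   # ml/kg
```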
Multicenter randomized controlled trial of a home walking intervention after outpatient cardiac rehabilitation on health-related quality of life in women.
BACKGROUND Poor health-related quality of life (HRQL) has been shown to be predictive of adverse outcomes in cardiac patients. As women with coronary heart disease have been shown to have lower HRQL than men with coronary heart disease, women are at greater risk of a poor clinical outcome. This study tested the effect of a 12-week home walking intervention after completion of outpatient cardiac rehabilitation (OCR) on HRQL and maintenance of physical activity among women. DESIGN Multicenter two-group randomized trial. METHODS After completion of OCR, participants were randomly allocated to the intervention or usual care groups. The outcomes were HRQL (assessed using the MacNew Heart Disease HRQL instrument) and self-reported physical activity (assessed using the Stages of Change model of exercise behavior) at 3, 6, and 12 months after OCR. RESULTS Seventy-two women were randomized to the intervention and 81 to usual care. Attrition was greater in the treatment group (13% vs. 1%). HRQL scores increased relative to baseline in both arms and were significantly higher in the intervention group at 6 months, but not at 3 or 12 months. Maintenance of physical activity declined over time in both groups; however, this decline was significantly reduced among women in the intervention group. CONCLUSION HRQL improved in both groups, but seemed to increase earlier among women in the intervention group. As maintenance of physical activity was higher among women in the intervention group, this minimal intervention could be used to facilitate women's progression from supervised to independent exercise.
The Tri-Wheel: A novel wheel-leg mobility concept
The Tri-Wheel is a novel wheel-leg locomotion concept inspired by work with first responders. Through its two modes of operation, Driving Mode and Tumbling Mode, this mechanism is able both to drive quickly on smooth surfaces at roughly 1.7 times the desired speed and to climb objects as tall as 67% of the diameter of the mechanism. The unique gearing configuration that facilitates these dual capabilities is described, and testing quantifies that this nonprecision gearing system is roughly 81% efficient in a worst-case loading scenario. This work introduces the Tri-Wheel concept and provides preliminary testing to validate its predicted operating characteristics.
RPL under mobility
This paper focuses on routing for vehicles getting access to infrastructure either directly or via multiple hops through other vehicles. We study the Routing Protocol for Low power and lossy networks (RPL), a tree-based routing protocol designed for sensor networks. Many design elements from RPL are transferable to the vehicular environment. We provide a simulation performance study of RPL and RPL tuning in VANETs. More specifically, we seek to study the impact of RPL's various parameters and external factors (e.g., various timers and speeds) on its performance and obtain insights on RPL tuning for its use in VANETs.
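Among the timers such tuning studies vary is the Trickle timer (RFC 6206) that paces RPL's DIO control messages: the quiet-period interval doubles up to a cap and resets on inconsistency. A minimal sketch of the doubling behavior, with illustrative constants rather than any values from this study:

```python
# Sketch of Trickle-style interval doubling (RFC 6206), which paces RPL's DIO
# control messages: each quiet period doubles the interval up to a cap, and an
# inconsistency resets it to Imin. Constants here are illustrative.
Imin, max_doublings = 1.0, 4
cap = Imin * 2 ** max_doublings       # longest allowed interval
I, intervals = Imin, []
for _ in range(6):                    # six consecutive quiet periods
    intervals.append(I)
    I = min(I * 2, cap)
# on detecting an inconsistency the protocol would reset: I = Imin
```

In a mobile VANET, frequent topology changes trigger these resets, which is one reason timer settings interact strongly with vehicle speed.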
Coda: Postnarrativist Philosophy of Historiography
The perception that it is necessary to choose between a nihilistic ‘anything goes’ postmodernism and an absolutist objectivism has bewitched much of the contemporary philosophical discussion on historiography and beyond. This book has tried to show the way in which historiography, and specifically its main cognitive products, can be evaluated and ranked rationally, but without a commitment to the correspondence theory of truth. Postnarrativist philosophy of historiography endorses the initial insight of narrativism that the texts and entire books of history are the main knowledge contributions of historiography and must be the subjects of philosophical analysis, but it understands them as exemplifying historiographical reasoning for theses of history.
Enhancing Reliability of Workflow Execution Using Task Replication and Spot Instances
Cloud environments offer low-cost computing resources as a subscription-based service. These resources are elastically scalable and dynamically provisioned. Furthermore, cloud providers have pioneered new pricing models, such as spot instances, that are cost-effective. As a result, scientific workflows are increasingly adopting cloud computing. However, spot instances are terminated when the market price exceeds the user's bid price. Moreover, the cloud is not a utopian environment: failures are inevitable in such large, complex distributed systems, and it is well documented that cloud resources experience fluctuations in delivered performance. These challenges make fault tolerance an important criterion in workflow scheduling. This article presents an adaptive, just-in-time scheduling algorithm for scientific workflows. The algorithm judiciously uses both spot and on-demand instances to reduce cost and provide fault tolerance, and it consolidates resources to further minimize execution time and cost. Extensive simulations show that the proposed heuristics are fault tolerant and effective, especially under short deadlines, providing robust schedules with minimal makespan and cost.
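The core trade-off behind mixing spot and on-demand instances can be reduced to a toy decision rule: a spot instance is only worth its discount if a termination plus restart still fits within the task's deadline slack. This is an illustrative simplification, not the article's full algorithm; the function name and parameters are assumptions.

```python
# Toy version of the spot/on-demand choice behind such scheduling heuristics:
# accept a spot instance only if a spot termination followed by a restart
# elsewhere would still fit within the task's deadline slack. Illustrative
# simplification, not the article's algorithm.
def choose_instance(task_runtime: float, deadline_slack: float,
                    respawn_overhead: float) -> str:
    # worst case on spot: the instance is killed and the task reruns in full
    worst_case = task_runtime + respawn_overhead
    return "spot" if deadline_slack >= worst_case else "on-demand"

tight = choose_instance(10, 12, 5)    # too little slack: pay for reliability
loose = choose_instance(10, 30, 5)    # enough slack to absorb a spot kill
```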
Recasting Residual-based Local Descriptors as Convolutional Neural Networks: an Application to Image Forgery Detection
Local descriptors based on the image noise residual have proven extremely effective for a number of forensic applications, such as forgery detection and localization. Nonetheless, motivated by promising results in computer vision, the focus of the research community is now shifting to deep learning. In this paper we show that a class of residual-based descriptors can actually be regarded as a simple constrained convolutional neural network (CNN). Then, by relaxing the constraints and fine-tuning the net on a relatively small training set, we obtain a significant performance improvement with respect to the conventional detector.
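The "constrained CNN" view amounts to treating a fixed high-pass residual filter as the first convolutional layer. The sketch below applies one classic 3x3 high-pass kernel (a discrete Laplacian of the kind used in SRM-style residual extraction) to a synthetic image; the image and the choice of this particular kernel are illustrative assumptions.

```python
import numpy as np

# A fixed 3x3 high-pass kernel (discrete Laplacian), of the kind used in
# SRM-style residual extraction; applying it as a frozen convolution is the
# "constrained first layer" view. The input image below is synthetic.
kernel = np.array([[0.,  1., 0.],
                   [1., -4., 1.],
                   [0.,  1., 0.]])

def conv2d_valid(img, k):
    # plain "valid" 2D correlation, written out for clarity
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

img = np.outer(np.arange(8.0), np.ones(8))   # smooth linear ramp image
residual = conv2d_valid(img, kernel)
# a linear ramp has zero second derivative, so the residual is all zeros:
# the filter suppresses image content and keeps only noise-like deviations
```

Relaxing the constraint then means making `kernel` a trainable parameter initialized at this value, which is what fine-tuning on a small forensic training set exploits.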
Leonhard Euler and the Theory of Ships
On April 15, 2007, the scientific world commemorated Leonhard Euler's 300th birthday. Euler's eminent work has become famous in many fields: mathematics, mechanics, optics, acoustics, astronomy and geodesy, even the theory of music. This article recalls his no less distinguished contributions to the founding of the modern theory of ships, which are not so widely known to the general professional public. In laying these foundations in ship theory, as in other fields, Euler was seeking "first principles, generality, order and above all clarity". This article highlights those achievements for which we owe him our gratitude. There is no doubt that Leonhard Euler was one of the founders of the modern theory of ships. He raised many fundamental questions for the first time and, through all phases of his professional lifetime, devoted himself to subjects of ship theory, thereby giving a unique profile to this still nascent scientific discipline. Many of his approaches have had a lasting, incisive influence on the structure of this field, and some of his ideas have become so much a matter of routine today that we have forgotten their descent from Euler. This article synoptically reviews Euler's contributions to the foundation of this discipline, correlates them with the stages of Euler's own scientific development, embedded in the rich environment of scientific enlightenment in the 18th century, and appreciates the value of his lasting aftereffects until today. The same example serves to show the fertile field of tension that always existed between Euler's fundamental orientation and his desire to contribute to practical applications, which has remained characteristic of ship theory to the present day. Without claiming completeness in detail, this article aims at giving a coherent overview of Euler's approaches and objectives in this discipline.
This synopsis will be presented primarily from the viewpoint of engineering science in its current stage of development.
Feedback on household electricity consumption: a tool for saving energy?
Abstract Improved feedback on electricity consumption may provide a tool for customers to better control their consumption and ultimately save energy. This paper asks which kind of feedback is most successful. For this purpose, a psychological model is presented that illustrates how and why feedback works. Relevant features of feedback are identified that may determine its effectiveness: frequency, duration, content, breakdown, medium and way of presentation, comparisons, and combination with other instruments. The paper continues with an analysis of international experience in order to find empirical evidence for which kinds of feedback work best. In spite of considerable data constraints and research gaps, there is some indication that the most successful feedback combines the following features: it is given frequently and over a long time, provides an appliance-specific breakdown, is presented in a clear and appealing way, and uses computerized and interactive tools.
Knowledge management in small and medium-sized companies: knowledge management for entrepreneurs
Society has changed drastically over the last few years. But this is nothing new, or so it appears. Societies are always changing, just as people are always changing. And since it is people who form societies, a constantly changing society is only natural. However, something more seems to have happened over the last few years. Without wanting to frighten off the reader straight away, we can point to a diversity of social developments indicating that the changes seem to be following each other faster, especially over the last few decades. We can, for instance, point to the pluralisation (or growing versatility), differentiation and specialisation of society as a whole. On a more personal note, we see the diversification of communities, an emphasis on emancipation, individualisation and post-materialism, and an increasing wish to live one's life as one wishes, free from social, religious or ideological contexts.
Improving the Fisher Kernel for Large-Scale Image Classification
The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets.
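Two of the well-motivated modifications reported for Fisher vectors, signed power normalization followed by L2 normalization, can be sketched in a few lines of NumPy (a minimal illustration assuming the Fisher vector is already computed; the function name and the default exponent of 0.5 are my own choices):

```python
import numpy as np

def normalize_fisher_vector(fv, alpha=0.5):
    # Signed power ("square-rooting") normalization: dampens bursty,
    # large-magnitude components while keeping their sign.
    fv = np.sign(fv) * np.abs(fv) ** alpha
    # L2 normalization: makes vectors comparable across images,
    # which suits the costless linear classifiers mentioned above.
    norm = np.linalg.norm(fv)
    return fv / norm if norm > 0 else fv
```

The normalized vectors can then be fed directly to a linear SVM.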
Rasch analysis of the Patient Rated Elbow Evaluation questionnaire
BACKGROUND The Patient Rated Elbow Evaluation (PREE) was developed as an elbow joint specific measure of pain and disability and validated with classical psychometric methods. More recently, Rasch analysis has contributed new methods for analyzing the clinical measurement properties of self-report outcome measures. The objective of the study was to determine aspects of validity of the PREE using the Rasch model to assess the overall fit of the PREE data, the response scaling, individual item fit, differential item functioning (DIF), local dependency, unidimensionality and person separation index (PSI). METHODS A convenience sample of 236 patients (age range 21-79 years; M:F = 97:139) with elbow disorders was recruited from the Roth|McFarlane Hand and Upper Limb Centre, London, Ontario, Canada. The baseline scores of the PREE were used. Rasch analysis was conducted using RUMM 2030 software on the 3 subscales of the PREE separately. RESULTS The 3 subscales initially showed misfit, with disordered thresholds on 17 out of 20 items, uniform DIF for two items ("Carrying a 10 lbs object" from the specific activities subscale for age group, and "household work" from the usual activities subscale for gender), multidimensionality, and local dependency. The Pain subscale satisfied Rasch expectations when item 2 "Pain - At rest" was split for age group, while the usual activities subscale readily stood up to Rasch requirements when item 2 "household work" was split for gender. The specific activities subscale demonstrated fit to the Rasch model when subtest analysis accounted for local dependency. All three subscales of the PREE were well targeted and had high reliability (PSI >0.80). CONCLUSION The three subscales of the PREE appear to be robust when tested against the Rasch model, subject to a few alterations.
The value of changing the 0-10 format is questionable given its widespread use; further Rasch-based analysis of whether these findings are stable in other samples is warranted.
Attosecond physics at the nanoscale.
Recently two emerging areas of research, attosecond and nanoscale physics, have started to come together. Attosecond physics deals with phenomena occurring when ultrashort laser pulses, with duration on the femto- and sub-femtosecond time scales, interact with atoms, molecules or solids. The laser-induced electron dynamics occurs natively on a timescale down to a few hundred or even tens of attoseconds (1 attosecond = 1 as = 10⁻¹⁸ s), which is comparable with the cycle of the optical field. For comparison, the revolution of an electron on a 1s orbital of a hydrogen atom is ∼152 as. On the other hand, the second branch involves the manipulation and engineering of mesoscopic systems, such as solids, metals and dielectrics, with nanometric precision. Although nano-engineering is a vast and well-established research field on its own, the merger with intense laser physics is relatively recent. In this report on progress we present a comprehensive experimental and theoretical overview of the physics that takes place when short and intense laser pulses interact with nanosystems, such as metallic and dielectric nanostructures. In particular we elucidate how the spatially inhomogeneous laser-induced fields at the nanometer scale modify the laser-driven electron dynamics. Consequently, this has an important impact on pivotal processes such as above-threshold ionization and high-order harmonic generation. A deep understanding of the coupled dynamics between these spatially inhomogeneous fields and matter opens a promising way to new avenues of research and applications. Thanks to the maturity that attosecond physics has reached, together with the tremendous advances in material engineering and manipulation techniques, the age of atto-nanophysics has begun, though it is still in its initial stage.
We present thus some of the open questions, challenges and prospects for experimental confirmation of theoretical predictions, as well as experiments aimed at characterizing the induced fields and the unique electron dynamics initiated by them with high temporal and spatial resolution.
Clozapine's Effect on Recidivism Among Offenders with Mental Disorders.
Mental disorder is associated with criminal reoffending, especially violent offending. Features of mental disorder, psychosocial stresses, substance use disorder, and personality disorder combine to increase the risk of criminal recidivism. Clozapine, an atypical antipsychotic, is indicated in the treatment of patients with psychotic disorders. This article reports a matched-control community follow-up study of offenders treated with clozapine (n = 41) and offenders treated with other antipsychotics (n = 21). Rates of reoffending behavior in the general, nonviolent, violent, and sexual categories were calculated after two years of follow-up. Although the difference was not statistically significant, the two-year criminal conviction rates of those treated with other antipsychotics were two-fold higher than in those treated with clozapine in all offense categories except sexual reoffending. The time from release to first offense and the crime-free time in the community were significantly longer in the clozapine group. By prolonging the time from release to first offense, clozapine confers additional crime-reduction advantages.
Key success factors for implementing Business Intelligence in South African public sector organisations
Business Intelligence (BI) has been rated as a key application and technology investment which provides organisations with great value by improving their decision making processes. The public sector provides a case for implementing BI for improved decision making processes as a way of enhancing its service delivery. However, the implementation of BI in these organisations has proved to be quite a complex task to undertake. This research paper sets out to explore the implementation of BI in the public sector in South Africa. The research was conducted through two case studies, and data was collected through semi-structured interviews and document collection with organisations that are implementing BI. A qualitative thematic analysis method was then used to construct the major themes that emerged from the data. The study revealed that BI can be used as an enabler of change and improvement in public sector activities. Consolidating structures, systems and processes was identified as a precursor to implementing it, while using the BI initiative to support organisational strategic objectives was seen as ensuring executive buy-in. However, the level of skills needed to use BI tools was highlighted as a key factor hindering its use in these organisations. ICT was furthermore identified as an important factor for the promotion of development and equitable access to public services.
Pharmacokinetics and antiviral activity of PHX1766, a novel HCV protease inhibitor, using an accelerated Phase I study design.
BACKGROUND PHX1766 is a novel HCV NS3/4 protease inhibitor with robust potency and high selectivity in replicon studies (50% maximal effective concentration 8 nM). Two clinical trials investigated the safety, tolerability, pharmacokinetics and antiviral activity of PHX1766 in healthy volunteers (HV) and chronic hepatitis C patients, using a dose-adaptive overlapping clinical trial design. METHODS Two randomized, double-blind, placebo-controlled clinical trials were conducted. Single doses of PHX1766 or placebo were administered to 25 HV and six HCV genotype 1-infected patients (50 mg once daily to 1,000 mg once daily, 250 mg twice daily, and 100 mg of a new formulation of PHX1766 once daily). Multiple doses of PHX1766 or placebo were administered to 32 HV and seven HCV genotype 1-infected patients (50 mg once daily to 800 mg twice daily). RESULTS Oral administration of PHX1766 was safe and well tolerated at all dose levels, with rapid absorption (time to maximum plasma concentration 1-4 h) and mean terminal half-lives of 4-23 h. Multiple doses of PHX1766 800 mg twice daily in HCV patients produced an accumulation ratio of 2.3 for the area under the plasma concentration-time curve from the time of drug administration to the last measurable concentration. The mean maximal observed HCV RNA decline was 0.6 log(10) IU/ml in the first 24 h in the single-dose protocol and 1.5 log(10) IU/ml after 6 days of PHX1766 dosing. CONCLUSIONS An overlapping, dose-adaptive single-dose and multiple-dose escalating design in HV and HCV-infected patients proved highly efficient in identifying a therapeutic dose. Although in vitro replicon studies indicated that PHX1766 would produce a robust decline in HCV RNA, the study in HCV patients demonstrated only a modest viral load reduction.
Blur detection for digital images using wavelet transform
With the prevalence of digital cameras, the number of digital images is increasing quickly, which raises the demand for image quality assessment in terms of blur. Based on edge type and sharpness analysis using the Haar wavelet transform, a new blur detection scheme is proposed in this paper, which can determine whether an image is blurred and, if so, to what extent. Experimental results demonstrate the effectiveness of the proposed scheme.
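One level of the Haar decomposition underlying such edge-sharpness analysis can be sketched in plain NumPy (an illustrative sketch only: the function names and the max-of-detail sharpness score are my own simplifications, not the paper's exact metric):

```python
import numpy as np

def haar2d(img):
    # One level of the 2D Haar transform: average (LL) and
    # horizontal/vertical/diagonal detail bands (LH, HL, HH).
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # column averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # column details
    LL = (a[0::2] + a[1::2]) / 2.0
    LH = (a[0::2] - a[1::2]) / 2.0
    HL = (d[0::2] + d[1::2]) / 2.0
    HH = (d[0::2] - d[1::2]) / 2.0
    return LL, LH, HL, HH

def edge_sharpness(img):
    # Strong, sharp edges produce large detail coefficients;
    # blurring spreads an edge out and shrinks them.
    _, LH, HL, HH = haar2d(img)
    emap = np.sqrt(LH**2 + HL**2 + HH**2)
    return emap.max()
```

A sharp step edge yields a larger score than a gradual ramp, which is the intuition the blur classifier builds on.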
Online Large-Margin Training of Dependency Parsers
We present an effective training algorithm for linearly-scored dependency parsers that implements online large-margin multi-class training (Crammer and Singer, 2003; Crammer et al., 2003) on top of efficient parsing techniques for dependency trees (Eisner, 1996). The trained parsers achieve competitive dependency accuracy for both English and Czech with no language-specific enhancements.
A New Heuristic Reduct Algorithm Based on Rough Sets Theory
Real-world data sets usually have many features, which increases the complexity of the data mining task. Feature selection, as a preprocessing step for data mining, has been shown to be very effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving comprehensibility. The aim of feature selection is to find optimal feature subsets. Rough sets theory provides a mathematical approach to finding an optimal feature subset, but this approach is time consuming. In this paper, we propose a novel heuristic algorithm based on rough sets theory to find such a feature subset. The algorithm employs the appearance frequency of attributes as heuristic information. Experimental results show that in most cases our algorithm finds an optimal feature subset quickly and efficiently.
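A frequency-based heuristic of this kind can be sketched via the classical discernibility-matrix formulation from rough sets (a minimal illustration: the helper names are mine, and the tie-breaking need not match the paper's algorithm):

```python
from itertools import combinations

def discernibility_entries(objects, decisions):
    # For each pair of objects with different decisions, record the
    # set of attributes on which their values differ.
    entries = []
    for i, j in combinations(range(len(objects)), 2):
        if decisions[i] != decisions[j]:
            diff = {a for a, (u, v) in enumerate(zip(objects[i], objects[j]))
                    if u != v}
            if diff:
                entries.append(diff)
    return entries

def frequency_reduct(objects, decisions):
    # Greedily pick the attribute that appears most frequently among the
    # still-unresolved discernibility entries, until all pairs are resolved.
    entries = discernibility_entries(objects, decisions)
    reduct = set()
    while entries:
        freq = {}
        for e in entries:
            for a in e:
                freq[a] = freq.get(a, 0) + 1
        best = max(freq, key=freq.get)
        reduct.add(best)
        entries = [e for e in entries if best not in e]
    return reduct
```

The returned attribute set discerns every pair of objects with different decision values, which is the defining property of a reduct candidate.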
Wideband Microstrip Patch Antenna With U-Shaped Parasitic Elements
A wideband U-shaped parasitic patch antenna is proposed. Two parasitic elements are incorporated into the radiating edges of a rectangular patch whose length and width are λg/2 and λg/4, respectively, in order to achieve wide bandwidth with relatively small size. Coupling between the main patch and the U-shaped parasitic patches is realized by either horizontal or vertical gaps. These gaps are found to be the main factors in the wideband impedance matching. The proposed antenna is designed and fabricated on a small ground plane (25 mm × 30 mm) for application in compact transceivers. The fabricated antenna on an FR4 substrate shows an impedance bandwidth of 27.3% (1.5 GHz) at a 5.5 GHz center frequency. The measured radiation patterns are similar to those of a conventional patch antenna, with slightly higher gains of 6.4 dB and 5.2 dB at the two resonant frequencies.
Discriminative Multi-instance Multitask Learning for 3D Action Recognition
With the proliferation of low-cost, easy-to-operate depth cameras, skeleton-based human action recognition has been extensively studied recently. However, most existing methods treat all 3D joints of a human skeleton as identical. Actually, these 3D joints exhibit diverse responses to different action classes, and some joint configurations are more discriminative for distinguishing a certain action. In this paper, we propose a discriminative multi-instance multitask learning (MIMTL) framework to discover the intrinsic relationship between joint configurations and action classes. First, a set of discriminative and informative joint configurations for the corresponding action class is captured in a multi-instance learning model by regarding the action and the joint configurations as a bag and its instances, respectively. Then, a multitask learning model with group structure constraints is exploited to further reveal the intrinsic relationship between joint configurations and different action classes. We conduct extensive evaluations of MIMTL using three benchmark 3D action recognition datasets. Experimental results show that our proposed MIMTL framework performs favorably compared with several state-of-the-art approaches.
Rules and Patterns for Website Orchestration
Information-intensive websites are constantly changing. This concerns the enhancement as well as the maintenance of existing content, and layout changes are also common practice. Changes often imply the need to reorganise some content parts to keep an adequate representation. At present, such changes are performed statically, by changing the presentation rules for each content object. However, it is possible to generalise content presentation so that layout definitions and restrictions determine the representation of content objects or their types. This paper therefore proposes dynamic placement and representation of content objects to avoid manual customisations and grave errors in layout development. The approach is based on patterns that allow the layout to be changed and adapted, and that help to reuse components and concepts.
Differentiating ambiguity and ambiguity attitude
The objective of this paper is to show how ambiguity, and a decision maker (DM)’s response to it, can be modelled formally in the context of a very general decision model. We introduce a relation derived from the DM’s preferences, called “unambiguous preference”, and show that it can be represented by a set of probability measures. We provide this set with a simple differential characterization, and argue that it represents the DM’s perception of the “ambiguity” present in the decision problem. Given this notion of ambiguity, we show that preferences can be represented so as to provide an intuitive representation of ambiguity attitudes. We then argue that these ideas can be applied, e.g., to obtain a behavioral foundation for the “generalized Bayesian” updating rule and for the “α-maxmin” expected utility model.
Exploring the Application of Blockchain Technology to Combat the Effects of Social Loafing in Cross Functional Group Projects
Today, many multi-national organisations operate in a geographically dispersed environment. Teams consisting of members from around the globe can be assembled on an as-needed basis. However, this can prove to be a complex managerial task. Individuals who believe that their efforts are not being effectively monitored by upper management lose their motivation to contribute to the best of their abilities, as they do not believe there is any correlation between the effort they exert and the reward they receive. With low levels of intrinsic involvement among employees, a lack of task visibility from upper management and limited social interaction among group members, many organisations struggle to combat the issue of social loafing in cross functional working groups. Blockchain technology, widely acknowledged as enabling openness, can facilitate the development of an immutable, transparent, secure and verifiable application for capturing individuals' intellectual property as they work. This would motivate employees to contribute more openly to group work, safe in the knowledge that their contribution will be recognised, and would enable management to maintain a high level of task visibility over their employees' work without requiring their physical presence.
Changed nursing scheduling for improved safety culture and working conditions - patients' and nurses' perspectives.
AIM To evaluate fixed scheduling compared with self-scheduling for nursing staff in oncological inpatient care with regard to patient and staff outcomes. BACKGROUND Various scheduling models have been tested to attract and retain nursing staff. Little is known about how these schedules affect staff and patients. Fixed scheduling and self-scheduling have been studied to a small extent, solely from a staff perspective. METHOD We implemented fixed scheduling on two of four oncological inpatient wards. Two wards kept self-scheduling. Through a quasi-experimental design, baseline and follow-up measurements were collected among staff and patients. The Safety Attitudes Questionnaire was used among staff, as well as study-specific questions for patients and staff. RESULTS Fixed scheduling was associated with less overtime and fewer possibilities to change shifts. Self-scheduling was associated with more requests from management for short notice shift changes. The type of scheduling did not affect patient-reported outcomes. CONCLUSIONS Fixed scheduling should be considered in order to lower overtime. Further research is necessary and should explore patient outcomes to a greater extent. IMPLICATIONS FOR NURSING MANAGEMENT Scheduling is a core task for nurse managers. Our study suggests fixed scheduling as a strategy for managers to improve the effective use of resources and safety.
Mixed Methods Research : A Research Paradigm Whose Time Has Come
The purposes of this article are to position mixed methods research (mixed research is a synonym) as the natural complement to traditional qualitative and quantitative research, to present pragmatism as offering an attractive philosophical partner for mixed methods research, and to provide a framework for designing and conducting mixed methods research. In doing this, we briefly review the paradigm “wars” and incompatibility thesis, we show some commonalities between quantitative and qualitative research, we explain the tenets of pragmatism, we explain the fundamental principle of mixed research and how to apply it, we provide specific sets of designs for the two major types of mixed methods research (mixed-model designs and mixed-method designs), and, finally, we explain mixed methods research as following (recursively) an eight-step process. A key feature of mixed methods research is its methodological pluralism or eclecticism, which frequently results in superior research (compared to monomethod research). Mixed methods research will be successful as more investigators study and help advance its concepts and as they regularly practice it.
A method for studying the generalized slowing hypothesis in children with specific language impairment.
The present work was conducted to demonstrate a method that could be used to assess the hypothesis that children with specific language impairment (SLI) often respond more slowly than unimpaired children on a range of tasks. The data consisted of 22 pairs of mean response times (RTs) obtained from previously published studies; each pair consisted of a mean RT for a group of children with SLI for an experimental condition and the corresponding mean RT for a group of children without SLI. If children with SLI always respond more slowly than unimpaired children and by an amount that does not vary across tasks, then RTs for children with SLI should increase linearly as a function of RTs for age-matched control children without SLI. This result was obtained and is consistent with the view that differences in processing speed between children with and without SLI reflect some general (i.e., non-task specific) component of cognitive processing. Future applications of the method are suggested.
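The linearity test at the heart of the method can be sketched with a simple least-squares fit (the RT values below are invented for illustration and are not data from the reviewed studies):

```python
import numpy as np

# Hypothetical mean RTs (ms) for control children across several tasks,
# with the SLI group responding about 30% more slowly plus small noise.
rt_control = np.array([400.0, 550.0, 700.0, 900.0, 1200.0])
rt_sli = 1.3 * rt_control + np.array([5.0, -8.0, 12.0, -4.0, 6.0])

# Under the generalized slowing hypothesis, SLI RTs increase linearly
# with control RTs; the fitted slope estimates the slowing factor.
slope, intercept = np.polyfit(rt_control, rt_sli, 1)
```

A slope reliably above 1 across tasks is what supports a general, non-task-specific slowing account.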
A Proposed Digital Forensics Business Model to Support Cybercrime Investigation in Indonesia
Digital forensics always involves at least a human who performs the activities, digital evidence as the main object, and a process as a reference for those activities. Existing frameworks have not described the interactions between humans, between humans and digital evidence, or between humans and the process itself. A business model approach can be taken to capture the interactions in question. In this case, the business model of the digital chain of custody generated by the author in a previous study becomes the first step in constructing a business model of digital forensics. In principle, the proposed business model accommodates the major components of digital forensics (human, digital evidence, process) and also considers the interactions among these components. The suggested business model incorporates several basic principles as described in The Regulation of the Chief of the Indonesian National Police (Perkap) No. 10/2010. This will support law enforcement in dealing with cybercrime cases that are becoming more frequent and more sophisticated, and can serve as a reference for institutions and organizations implementing digital forensics activities.
BOEMIE: Reasoning-based Information Extraction
This paper presents a novel approach for exploiting an ontology in an ontology-based information extraction system, which substitutes part of the extraction process with reasoning, guided by a set of automatically acquired rules.
Introducing The Neuroscience Gateway
The last few decades have seen the emergence of computational neuroscience as a mature field in which researchers model complex and large neuronal systems and require access to high performance computing machines and associated cyberinfrastructure to manage computational workflows and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and are suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge, owing to the difficulty of acquiring computer time on machines located at national supercomputer centers, dealing with their complex user interfaces, and managing data storage and retrieval. The Neuroscience Gateway is being developed to remove these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers, makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines, and handles job submission and data management and retrieval. This paper describes the architecture the gateway is based on, how it is implemented, and how users can employ it for computational neuroscience research with high performance computing at the back end. Keywords—computational neuroscience, science gateway, high performance computing
A preliminary framework for description, analysis and comparison of creative systems
I summarise and attempt to clarify some concepts presented in and arising from Margaret Boden’s (1990) descriptive hierarchy of creativity, by beginning to formalise the ideas she proposes. The aim is to move towards a model which allows detailed comparison, and hence better understanding, of systems which exhibit behaviour which would be called ‘‘creative’’ in humans. The work paves the way for the description of naturalistic, multi-agent creative AI systems, which create in a societal context. I demonstrate some simple reasoning about creative behaviour based on the new framework, to show how it might be useful for the analysis and study of creative systems. In particular, I identify some crucial properties of creative systems, in terms of the framework components, some of which may usefully be proven a priori of a given system. I suggest that Boden’s descriptive framework, once elaborated in detail, is more uniform and more powerful than it first appears.
Facebook wall posts: a model of user behaviors
How do people interact with their Facebook wall? At a high level, this question captures the essence of our work. While most prior efforts focus on Twitter, the far fewer Facebook studies focus on the friendship graph or are limited by the number of users or the duration of the study. In this work, we model Facebook user behavior: we analyze the wall activities of users, focusing on identifying common patterns and surprising phenomena. We conduct an extensive study of roughly 7k users over 3 years, during 4-month intervals each year. We propose PowerWall, a lesser known heavy-tailed distribution, to fit our data. Our key results can be summarized in the following points. First, we find that many wall activities, including the number of posts, number of likes, and number of posts of type photo, can be described by the PowerWall distribution. What is more surprising is that most of these distributions have similar slope, with a value close to 1! Second, we show how our patterns and metrics can help us spot surprising behaviors and anomalies. For example, we find a user posting every two days, with exactly the same count of posts; another user posts at midnight, with no other activity before or after. Our work provides a solid step toward a systematic and quantitative wall-centric profiling of Facebook user activity.
Design and simulation of a microfluidic device for acoustic cell separation.
Experimental acoustic cell separation methods have been widely used to separate different types of blood cells. However, numerical simulation of acoustic cell separation has not gained enough attention and needs further investigation: with numerical methods it is possible to optimize the parameters involved in the design of an acoustic device and to calculate particle trajectories simply and at low cost, before spending time and effort fabricating the device. In this study, we present a comprehensive finite element-based simulation of the acoustic separation of platelets, red blood cells and white blood cells, using standing surface acoustic waves (SSAWs). A microfluidic channel with three inlets, including a middle inlet for sheath flow and two symmetrical tilted-angle inlets for the cells, was used to drive the cells through the channel. Two interdigital transducers were also considered in this device; by applying an alternating voltage to the transducers, an acoustic field was created that exerts the acoustic radiation force on the cells. Since this force depends on the size of the cells, the cells are pushed towards the midline of the channel along different path lines. Particle trajectories for different cells were obtained and compared with a theoretical equation. Two types of separation were observed as a result of varying the amplitude of the acoustic field. In the first mode of separation, white blood cells were sorted out through the middle outlet; in the second, platelets were sorted out through the side outlets. Depending on the clinical needs, each of these modes can be applied with the studied microfluidic device to separate the desired cells.
Improved Texture Networks: Maximizing Quality and Diversity in Feed-Forward Stylization and Texture Synthesis
The recent work of Gatys et al., who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, several authors have recently proposed to learn generator neural networks that can produce similar outputs in one quick forward pass. While generator networks are promising, they are still inferior in visual quality and diversity compared to generation-by-optimization. In this work, we advance them in two significant ways. First, we introduce an instance normalization module to replace batch normalization, with significant improvements to the quality of image stylization. Second, we improve diversity by introducing a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble, which is the equivalence class of all images characterized by certain filter responses. Together, these two improvements take feed-forward texture synthesis and image stylization much closer to the quality of generation-by-optimization, while retaining the speed advantage.
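The instance normalization module can be sketched in a few lines of NumPy (omitting the learned affine scale and shift parameters a full layer would carry):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # x: (N, C, H, W). Normalize each (sample, channel) feature map
    # independently, unlike batch normalization, which pools statistics
    # across the whole batch.
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

Because each image is normalized on its own, the stylization result no longer depends on the contrast of the other images in the batch.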
Change Detection in Time Series Data Using Wavelet Footprints
Detecting changes in time series data is an important data analysis task with applications in various scientific domains. In this paper, we propose a novel approach to the problem of change detection in time series data that can find both the amplitude and the degree of changes. Our approach is based on wavelet footprints, proposed originally by the signal processing community for signal compression. We, however, exploit the properties of footprints to efficiently capture discontinuities in a signal. We show that transforming time series data using the footprint basis up to degree D generates nonzero coefficients only at the change points, with degree up to D. Exploiting this property, we propose a novel change detection query processing scheme that employs footprint-transformed data to identify change points, their amplitudes, and their degrees of change efficiently and accurately. We also present two methods for exact and approximate transformation of the data. Our analytical and empirical results with both synthetic and real-world data show that our approach outperforms the best known change detection approach in terms of both performance and accuracy. Furthermore, unlike state-of-the-art approaches, our query response time is independent of the number of change points in the data and the user-defined change threshold.
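Footprints require a dedicated basis construction, but the property they exploit (a degree-D footprint coefficient is nonzero only at a breakpoint of a piecewise polynomial of degree up to D) can be illustrated with a crude stand-in: score each index by the mismatch between low-degree polynomial fits over its left and right windows. This is an illustrative sketch, not the paper's algorithm; the window size is an arbitrary choice.

```python
import numpy as np

def change_scores(x, degree=1, w=8):
    """Score each index t by the jump between degree-`degree` polynomial
    fits over the w samples to its left and to its right. A genuine
    footprint transform concentrates exactly this breakpoint information
    in a few coefficients instead of scanning."""
    scores = np.zeros(len(x))
    for t in range(w, len(x) - w):
        left = np.polyfit(np.arange(t - w, t), x[t - w:t], degree)
        right = np.polyfit(np.arange(t, t + w), x[t:t + w], degree)
        scores[t] = abs(np.polyval(left, t) - np.polyval(right, t))
    return scores

# A signal with a single amplitude-5 step at index 50:
sig = np.r_[np.zeros(50), 5.0 * np.ones(50)]
scores = change_scores(sig)
```

The peak score lands on the change point and its height recovers the jump amplitude; the footprint transform delivers the same information, for all degrees up to D, directly from its coefficients.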
Feature-Fused SSD: Fast Detection for Small Objects
Small object detection is a challenging task in computer vision due to small objects' limited resolution and information content. To address this problem, most existing methods sacrifice speed for improvement in accuracy. In this paper, we aim to detect small objects at high speed, using the Single Shot MultiBox Detector (SSD), the best object detector with respect to the accuracy-vs-speed trade-off, as the base architecture. We propose a multi-level feature fusion method that introduces contextual information into SSD in order to improve accuracy on small objects. For the fusion operation, we design two feature fusion modules, a concatenation module and an element-sum module, which differ in how they add contextual information. Experimental results show that the two fusion modules achieve higher mAP on PASCAL VOC2007 than baseline SSD by 1.6 and 1.7 points respectively, with improvements of 2-3 points on some small-object categories. Their testing speeds are 43 and 40 FPS respectively, exceeding the state-of-the-art Deconvolutional Single Shot Detector (DSSD) by 29.4 and 26.4 FPS.
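The data flow of the two fusion designs can be sketched as follows. Real modules use learned convolutions, normalization, and activations; here the "projection" is a bare channel-mixing matrix and the upsampling is nearest-neighbour, so this only illustrates how concatenation and element-sum differ in combining a shallow, high-resolution map with a deeper, contextual one. All shapes and weights are invented for illustration.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse_concat(shallow, deep):
    """Concatenation module: stack the shallow map and the upsampled
    deeper map along the channel axis."""
    return np.concatenate([shallow, upsample2x(deep)], axis=0)

def fuse_sum(shallow, deep, w):
    """Element-sum module: project deep channels down to the shallow
    channel count (stand-in for a 1x1 conv), then add element-wise."""
    projected = np.tensordot(w, upsample2x(deep), axes=(1, 0))
    return shallow + projected

shallow = np.ones((4, 8, 8))   # high-resolution map (e.g. conv4_3-like)
deep = np.ones((8, 4, 4))      # deeper, lower-resolution contextual map
w = np.full((4, 8), 0.125)     # hypothetical projection weights
```

Concatenation preserves both feature sets and lets later layers weigh them, at the cost of more channels; element-sum keeps the channel count fixed but forces the two sources into a shared representation.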
Inter-observer agreement on the interpretation of capsule endoscopy findings based on capsule endoscopy structured terminology: a multicenter study by the Korean Gut Image Study Group.
OBJECTIVE Capsule endoscopy (CE) is a novel investigation for the diagnosis of small-bowel disease but its interpretation is highly subjective. We studied the inter-observer agreement and accuracy of the interpretation of CE findings based on capsule endoscopy structured terminology (CEST). MATERIAL AND METHODS Fifty-six CE video clips were collected from eight university hospitals in South Korea and were independently reviewed by 13 gastroenterology experts and 10 trainees. All investigators recorded their findings based on CEST. To determine the accuracy of individual viewers, we defined the 'gold standard' as a joint review by four experts. RESULTS The 56 CE video clips included five normal cases, 19 cases of protruding lesions, 21 cases of depressed lesions, three cases of flat lesions, one case of abnormal mucosa, six cases with blood in the lumen, and one case of stenotic lumen. The overall mean accuracies for the experts and trainees were 74.3% +/- 22.6% and 61.7% +/- 25.4%, respectively. The overall accuracy for the trainee group was significantly lower than that for the expert group (P < 0.001), especially in normal, tumor, venous structure, and ulcer cases. The accuracies of the two groups varied with the CE findings. The accuracies were higher in cases with more prominent intraluminal changes (e.g. active small-bowel bleeding, ulcer, tumor, stenotic lumen). In contrast, subtle mucosal lesions, such as erosion, angioectasia, and diverticulum, had lower accuracies. The mean kappa values for the experts and trainees were 0.61 (range 0.39-0.97) and 0.46 (range 0.17-0.66), respectively. CONCLUSIONS Our results showed that there was substantial agreement between experts and moderate agreement between trainees. In order to achieve higher accuracies and better inter-observer agreement, we need not only more experience with CE but also consensus regarding CEST terminology.
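The kappa values quoted above correct raw agreement for the agreement expected by chance alone. A minimal sketch for two raters (multi-rater kappa, as needed for 13 experts, generalizes this pairwise form):

```python
def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters labelling the same items."""
    assert len(a) == len(b)
    n = len(a)
    categories = sorted(set(a) | set(b))
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    # chance agreement: product of each rater's marginal category frequencies
    p_chance = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)
```

On the usual Landis-Koch scale, the experts' mean kappa of 0.61 falls in the "substantial" band and the trainees' 0.46 in the "moderate" band, matching the paper's conclusions.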
Phase I and Pharmacokinetic Study of Hepsulfam (NSC 329680)
Hepsulfam (NSC 329680), a bifunctional alkylating agent structurally related to busulfan, has entered clinical trial based on its broader preclinical antitumor activity compared with that of busulfan and its i.v. formulation, which may circumvent the many problems arising from the p.o. administration of busulfan, such as significant individual differences in bioavailability. In this Phase I study, 53 patients received 95 courses of hepsulfam at doses ranging from 30 to 480 mg/m2 administered i.v. over 30 min every 28 days. Hematological toxicity was dose limiting. Leukopenia and thrombocytopenia were dose related, delayed in onset, and sustained for long durations. Toxicity was cumulative in most patients receiving more than one course. This pattern of myelosuppression suggests that hepsulfam is cytotoxic to hematopoietic stem cells. Although hematological toxicity was not particularly severe during most courses, its lengthy duration precluded the prompt administration of subsequent courses. Minimal nonhematological effects were observed. Pharmacokinetic studies revealed that the clearance rate of hepsulfam is linear over the dose range studied and that its plasma disposition is biphasic, with mean α and β half-lives of 19 ± 18 (SE) min and 337 ± 248 (SE) min, respectively. The area under the plasma clearance curve correlated with the percentage of change in WBC using a sigmoidal Emax model and with the duration of thrombocytopenia in patients with hematological toxicity. Based on the protracted duration of the toxicity of multiple doses that were >210 mg/m2, the recommended starting dose for Phase II trials is 210 mg/m2. However, these trials should be pursued with caution because of the protracted nature of hepsulfam's myelosuppression. Because hepsulfam produced minimal nonhematological toxicity, substantial dose escalation above 480 mg/m2 may be possible with hematopoietic stem
Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks
Offline handwriting recognition—the automatic transcription of images of handwritten text—is a challenging task that combines computer vision with sequence learning. In most systems the two elements are handled separately, with sophisticated preprocessing techniques used to extract the image features and sequential models such as HMMs used to provide the transcriptions. By combining two recent innovations in neural networks—multidimensional recurrent neural networks and connectionist temporal classification—this paper introduces a globally trained offline handwriting recogniser that takes raw pixel data as input. Unlike competing systems, it does not require any alphabet specific preprocessing, and can therefore be used unchanged for any language. Evidence of its generality and power is provided by data from a recent international Arabic recognition competition, where it outperformed all entries (91.4% accuracy compared to 87.2% for the competition winner) despite the fact that neither author understands a word of Arabic.
Workflow Scheduling in Multi-Tenant Cloud Computing Environments
Multi-tenancy is one of the key features of cloud computing; it provides scalability and economic benefits to end-users and service providers by sharing the same cloud platform and its underlying infrastructure, with isolation of the shared network and compute resources. However, resource management in multi-tenant cloud computing is becoming one of the most complex tasks due to inherent heterogeneity and resource isolation. This paper proposes a novel cloud-based workflow scheduling (CWSA) policy for compute-intensive workflow applications in multi-tenant cloud computing environments, which helps minimize the overall workflow completion time, tardiness, and cost of execution of the workflows, and utilizes idle cloud resources effectively. The proposed algorithm is compared with state-of-the-art scheduling policies, i.e., First Come First Served (FCFS), EASY Backfilling, and Minimum Completion Time (MCT), to evaluate its performance. Further, a proof-of-concept experiment with real-world scientific workflow applications is performed to demonstrate the scalability of CWSA, which verifies the effectiveness of the proposed solution. The simulation results show that the proposed scheduling policy improves workflow performance and outperforms the aforementioned alternative scheduling policies under typical deployment scenarios.
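Of the baselines named above, Minimum Completion Time (MCT) is the easiest to state: each arriving task is dispatched to whichever resource would finish it earliest given that resource's current backlog. A schematic sketch; the task runtimes and machine speeds are invented, and the real CWSA policy additionally accounts for tenancy, tardiness, and cost.

```python
def mct_schedule(tasks, speeds):
    """tasks: list of (name, work units); speeds: per-machine speed factors.
    Greedily assign each task to the machine with minimum completion time."""
    ready = [0.0] * len(speeds)          # time at which each machine becomes free
    assignment = {}
    for name, work in tasks:
        finish = [ready[m] + work / speeds[m] for m in range(len(speeds))]
        best = min(range(len(speeds)), key=finish.__getitem__)
        ready[best] = finish[best]
        assignment[name] = best
    return assignment, max(ready)        # placement and overall makespan

tasks = [("t1", 4.0), ("t2", 4.0), ("t3", 2.0)]
assignment, makespan = mct_schedule(tasks, speeds=[1.0, 2.0])
```

MCT is myopic per task, which is exactly the weakness workflow-aware schedulers such as CWSA try to exploit: knowing a workflow's remaining structure lets the scheduler trade a locally later finish for a globally shorter makespan.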
Experimental characterization and analysis of the BITalino platforms against a reference device
Low-cost hardware platforms for biomedical engineering are becoming increasingly available, empowering the research community to develop new projects in a wide range of areas related to physiological data acquisition. Building upon previous work by our group, this work compares the quality of the data acquired by two different versions of the multimodal physiological computing platform BITalino against a device that can be considered a reference. We acquired data from 5 sensors, namely Accelerometry (ACC), Electrocardiography (ECG), Electroencephalography (EEG), Electrodermal Activity (EDA), and Electromyography (EMG). Experimental evaluation shows that ACC, ECG, and EDA data are highly correlated with the reference in terms of the raw waveforms. When compared by means of their commonly used features, EEG and EMG data are also quite similar across the different devices.
Word-Formation in English
Published by the Press Syndicate of the University of Cambridge. A catalogue record for this book is available from the British Library; Library of Congress cataloguing-in-publication data: Plag, Ingo. Word-formation in English (Cambridge Textbooks in Linguistics); includes bibliographical references and indexes. Contents: Preface; Abbreviations and notational conventions; Introduction; 1. Basic concepts (What is a word?; Studying word-formation; Inflection and derivation; Summary); 2. Studying complex words (Identifying morphemes: the morpheme as the minimal linguistic sign, problems with the morpheme: the mapping of form and meaning; Allomorphy; Establishing word-formation rules; Multiple affixation; Summary); each chapter closes with further reading and exercises.
Real-time Stereo Visual SLAM
Simultaneous localisation and mapping (SLAM) has been the focus of intensive research in the last decade due to the potential benefits it offers to the field of autonomous mobile robotics. SLAM is concerned with the ability of an autonomous vehicle to navigate through an unexplored environment, incrementally construct a map of the environment, and localise itself within this map. This thesis describes an entirely vision-based, large-area, 6DoF SLAM system that was developed specifically for real-time deployment on an autonomous underwater vehicle (AUV) equipped with a calibrated stereo system. This SLAM system is based on the extended Kalman filter (EKF) and incorporates a novel approach to landmark description and data association in which landmarks are essentially local submaps consisting of a cloud of 3D points and their associated SIFT or SURF descriptors. Furthermore, landmarks are sparsely distributed in the constructed map, which greatly simplifies and accelerates data association and map updates. In addition to performing localisation based on landmark observations, the system also performs visual odometry and predicts vehicle motion using a constant-velocity model. For a simulated 87 m long 3D loop trajectory, the mean squared localisation error of the system was 3.16, and the maximum absolute errors in roll, pitch, and yaw angles were 11.6, 24.3, and 24.4 respectively, when the stereo and landmark correspondences contained Gaussian noise with a standard deviation of 0.1 pixels and 10% of correspondences were outliers. This thesis represents an important contribution to entirely vision-based 6DoF SLAM, as very few implementations currently exist, and the approach utilised in this thesis achieves comparable results and has the potential to operate in real-time.
Knowledge Representation in pAninI Framework Using Neural Network Model
Knowledge representation is the basis for expressing the semantic content of input in intelligent information retrieval systems. Identifying semantics requires processing the input language at various levels. Making a system understand text or speech is challenging, as it involves extracting the semantics of the language, which is itself a complex problem. At the same time, languages possess multiple ambiguities and uncertainties that need to be resolved at various phases of language processing. The level of understanding depends on the grammar, the syntactic and semantic representation of the language, and the methods employed for their analysis. Processing depends on the type of language, its grammar, the ambiguities present, and the size of the available corpus. Order-free languages possess different features compared with rigid-order languages; hence, mechanisms for such languages need to be formulated. One of the ancient Indian Sanskrit grammarians, pAninI, defined the grammar of the Sanskrit language in such a way that it is suitable for computational analysis. The six main semantic classes identified under this theory form a baseline model for knowledge representation. This paper exploits the features of the language, the applicability of rules, and the resolution of ambiguities using a neural network model. A hybrid model incorporating the features of rule-based methods and neural networks is designed and implemented for pAninI-based semantic analysis, generating case frames as output. Index Terms: pAninI grammar framework, knowledge representation, case frame, natural language processing, semantics.
Combining Pyramid Pooling and Attention Mechanism for Pelvic MR Image Semantic Segmentation
One of the time-consuming routine tasks for a radiologist is discerning anatomical structures in tomographic images. To assist radiologists, this paper develops an automatic segmentation method for pelvic magnetic resonance (MR) images. The task has three major challenges: 1) a pelvic organ can have various sizes and shapes depending on the axial image, which requires local context to segment correctly; 2) different organs often have quite similar appearance in MR images, which requires global context to segment; 3) the number of available annotated images is too small to use the latest segmentation algorithms. To address these challenges, we propose a novel convolutional neural network called Attention-Pyramid Network (APNet) that effectively exploits both local and global contexts, together with a data-augmentation technique that is particularly effective for MR images. To evaluate our method, we construct a fine-grained (50 pelvic organs) MR image segmentation dataset and experimentally confirm the superior performance of our techniques over state-of-the-art image segmentation methods.
Development of the Perceived Stress Questionnaire: a new tool for psychosomatic research.
A 30-question Perceived Stress Questionnaire (PSQ) was validated, in Italian and English, among 230 subjects. Test-retest reliability was 0.82 for the General (past year or two) PSQ, while monthly Recent (past month) PSQs varied by a mean factor of 1.9 over 6 months; coefficient alpha > 0.9. General and/or Recent PSQ scores were associated with trait anxiety (r = 0.75), Cohen's Perceived Stress Scale (r = 0.73), depression (r = 0.56), self-rated stress (r = 0.56), and stressful life events (p < 0.05). The General PSQ was higher in in-patients than in out-patients (p < 0.05); both forms were correlated with a somatic complaints scale in a non-patient population (r > 0.5), and were higher, among 27 asymptomatic ulcerative colitis patients, in the seven who had rectal inflammation than in those with normal proctoscopy (p = 0.03). Factor analysis yielded seven factors, of which those reflecting interpersonal conflict and tension were significantly associated with health outcomes. The Perceived Stress Questionnaire may be a valuable addition to the armamentarium of psychosomatic researchers.
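The "coefficient alpha > 0.9" reported above is Cronbach's alpha, an internal-consistency estimate computable directly from the per-item scores. A minimal sketch; the toy scores in the test are invented for illustration:

```python
def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, with respondents
    in the same order for every item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(sample_var(it) for it in items) / sample_var(totals))
```

When items covary strongly (respondents who score high on one item score high on the others), the variance of the totals dwarfs the summed item variances and alpha approaches 1, which is the regime the PSQ's alpha > 0.9 indicates.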
Towards Contactless, Low-Cost and Accurate 3D Fingerprint Identification
Human identification using fingerprint impressions has been widely studied and employed for more than 2000 years. Despite new advancements in 3D imaging technologies, a widely accepted representation of 3D fingerprint features and a matching methodology are yet to emerge. This paper investigates the 3D representation of widely employed 2D minutiae features by recovering and incorporating (i) minutiae height z and (ii) its 3D orientation φ, and illustrates an effective strategy for matching popular minutiae features extended in 3D space. One of the obstacles preventing emerging 3D fingerprint identification systems from replacing conventional 2D fingerprint systems lies in their bulk and high cost, which stem mainly from the use of structured lighting systems or multiple cameras. This paper attempts to address such key limitations of current 3D fingerprint technologies by developing a single-camera-based 3D fingerprint identification system. We develop a generalized 3D minutiae matching model and recover extended 3D fingerprint features from the reconstructed 3D fingerprints. The 2D fingerprint images acquired for the 3D fingerprint reconstruction can themselves be employed for performance improvement, as illustrated in the work detailed in this paper. This paper also attempts to answer one of the most fundamental questions on the availability of inherent discriminable information from 3D fingerprints. The experimental results are presented on a database of 240 clients' 3D fingerprints, which is made publicly available to further research efforts in this area, and illustrate the discriminant power of 3D minutiae representation and matching in achieving performance improvement.
Impact of genetic variability and treatment-related factors on outcome in early breast cancer patients receiving (neo-) adjuvant chemotherapy with 5-fluorouracil, epirubicin and cyclophosphamide, and docetaxel
To assess the impact of patient-related factors, including genetic variability in genes involved in the metabolism of chemotherapeutic agents, on breast cancer-specific survival (BCSS) and recurrence-free interval (RFI). We selected early breast cancer patients treated between 2000 and 2010 with 4–6 cycles of (neo-)adjuvant 5-fluorouracil, epirubicin, and cyclophosphamide (FEC) or 3 cycles of FEC followed by 3 cycles of docetaxel. Tumor stage/subtype, febrile neutropenia, and patient-related factors such as selected single nucleotide polymorphisms and baseline laboratory parameters were evaluated. Multivariable Cox regression was performed. Of 991 patients with a mean follow-up of 5.2 years, 152 (15.3 %) patients relapsed and 63 (6.4 %) patients died. Advanced stage and more aggressive subtype were associated with poorer BCSS and RFI in multivariable analysis (p < 0.0001). Associations with worse BCSS in multivariable analysis were: homozygous carriers of the rs1057910 variant C-allele in CYP2C9 (hazard ratio [HR] 30.4; 95 % confidence interval [CI] 6.1–151.5; p < 0.001) and higher white blood cell count (WBC) (HR 1.2; 95 % CI 1.0–1.3; p = 0.014). The GT genotype of the ABCB1 variant rs2032582 was associated with better BCSS (HR 0.5; 95 % CI 0.3–0.9, p = 0.021). The following associations with worse RFI were observed: higher WBC (HR 1.1; 95 % CI 1.0–1.2; p = 0.026), homozygous carriers of the rs1057910 variant C-allele in CYP2C9 (HR 10.9; 95 % CI 2.5–47.9; p = 0.002), CT genotype of the CYBA variant rs4673 (HR 1.8; 95 % CI 1.2–2.7; p = 0.006), and G-allele homozygosity for the UGT2B7 variant rs3924194 (HR 3.4; 95 % CI 1.2–9.7, p = 0.023). Patient-related factors, including genetic variability and baseline white blood cell count, impacted outcome in early breast cancer.
Soil health and sustainability: managing the biotic component of soil quality
Soil health is the capacity of soil to function as a vital living system, within ecosystem and land-use boundaries, to sustain plant and animal productivity, maintain or enhance water and air quality, and promote plant and animal health. Anthropogenic reductions in soil health, and of individual components of soil quality, are a pressing ecological concern. A conference entitled ‘Soil Health: Managing the Biological Component of Soil Quality’ was held in the USA in November 1998 to help increase awareness of the importance and utility of soil organisms as indicators of soil quality and determinants of soil health. To evaluate sustainability of agricultural practices, assessment of soil health using various indicators of soil quality is needed. Soil organism and biotic parameters (e.g. abundance, diversity, food web structure, or community stability) meet most of the five criteria for useful indicators of soil quality. Soil organisms respond sensitively to land management practices and climate. They are well correlated with beneficial soil and ecosystem functions including water storage, decomposition and nutrient cycling, detoxification of toxicants, and suppression of noxious and pathogenic organisms. Soil organisms also illustrate the chain of cause and effect that links land management decisions to ultimate productivity and health of plants and animals. Indicators must be comprehensible and useful to land managers, who are the ultimate stewards of soil quality and soil health. Visible organisms such as earthworms, insects, and molds have historically met this criterion. Finally, indicators must be easy and inexpensive to measure, but the need for knowledge of taxonomy complicates the measurement of soil organisms. Several farmer-participatory programs for managing soil quality and health have incorporated abiotic and simple biotic indicators.
The challenge for the future is to develop sustainable management systems which are the vanguard of soil health; soil quality indicators are merely a means towards this end.
Bone strength and density via pQCT in post-menopausal osteopenic women after 9 months resistive exercise with whole body vibration or proprioceptive exercise.
OBJECTIVES In order to better understand which training approaches are more effective for preventing bone loss in post-menopausal women with low bone mass, we examined the effect of a nine-month resistive exercise program with either an additional whole body vibration exercise (VIB) or balance training (BAL). METHODS 68 post-menopausal women with osteopenia were recruited for the study and were randomised to either the VIB or BAL group. Two training sessions per week were performed. 57 subjects completed the study (VIB n=26; BAL n=31). Peripheral quantitative computed tomography (pQCT) measurements of the tibia, fibula, radius and ulna were performed at baseline and at the end of the intervention period at the epiphysis (4% site) and diaphysis (66% site). Analysis was done on an intent-to-treat approach. RESULTS Significant increases in bone density and strength were seen at a number of measurement sites after the intervention period. No significant differences were seen in the response of the two groups at the lower-leg. CONCLUSIONS This study provided evidence that a twice weekly resistive exercise program with either additional balance or vibration training could increase bone density at the distal tibia after a nine-month intervention period in post-menopausal women with low bone mass.
Heart rate variability (HRV) in deep breathing tests and 5-min short-term recordings: agreement of ear photoplethysmography with ECG measurements, in 343 subjects
We analyzed heart rate variability (HRV) taken by ECG and photoplethysmography (PPG) to assess their agreement. We also analyzed the sensitivity and specificity of PPG to identify subjects with low HRV as an example of its potential use for clinical applications. The HRV parameters: mean heart rate (HR), amplitude, and ratio of heart rate oscillation (E–I difference, E/I ratio), RMSSD, SDNN, and Power LF, were measured during 1-min deep breathing tests (DBT) in 343 individuals, followed by a 5-min short-term HRV (s-HRV), where the HRV parameters: HR, SD1, SD2, SDNN, Stress Index, Power HF, Power LF, Power VLF, and Total Power, were determined as well. Parameters were compared through correlation analysis and agreement analysis by Bland–Altman plots. PPG derived parameters HR and SD2 in s-HRV showed better agreement than SD1, Power HF, and stress index, whereas in DBT HR, E/I ratio and SDNN were superior to Power LF and RMSSD. DBT yielded stronger agreement than s-HRV. A slight overestimation of PPG HRV over ECG HRV was found. HR, Total Power, and SD2 in the s-HRV, HR, Power LF, and SDNN in the DBT showed high sensitivity and specificity to detect individuals with poor HRV. Cutoff percentiles are given for the future development of PPG-based devices. HRV measured by PPG shows good agreement with ECG HRV when appropriate parameters are used, and PPG-based devices can be employed as an easy screening tool to detect individuals with poor HRV, especially in the 1-min DBT test.
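The time-domain quantities compared above (mean HR, SDNN, RMSSD) are simple functions of the interbeat-interval series, which is why the ECG/PPG comparison reduces to how accurately each device recovers those intervals. A minimal sketch; the RR series below is invented:

```python
import math

def hrv_time_domain(rr_ms):
    """Classic time-domain HRV from RR (or PPG pulse-to-pulse) intervals in ms."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    hr = 60000.0 / mean_rr                          # mean heart rate, bpm
    # SDNN: overall variability of the intervals
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
    # RMSSD: beat-to-beat (short-term) variability
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"HR": hr, "SDNN": sdnn, "RMSSD": rmssd}

metrics = hrv_time_domain([800.0, 810.0, 790.0, 805.0])
```

RMSSD depends on successive differences, so it is the parameter most sensitive to small beat-timing errors in PPG, consistent with the weaker PPG/ECG agreement reported for RMSSD above.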
Biometrics in banking security: a case study
Purpose To identify and discuss the issues and success factors surrounding biometrics, especially in the context of user authentication and controls in the banking sector, using a case study. Design/methodology/approach The literature survey and analysis of the security models of the present information systems and biometric technologies in the banking sector provide the theoretical and practical background for this work. The impact of adopting biometric solutions in banks was analysed by considering the various issues and challenges from technological, managerial, social and ethical angles. These explorations led to identifying the success factors that serve as possible guidelines for a viable implementation of a biometric enabled authentication system in banking organisations, in particular for a major bank in New Zealand. Findings As the level of security breaches and transaction frauds increase day by day, the need for highly secure identification and personal verification information systems is becoming extremely important especially in the banking and finance sector. Biometric technology appeals to many banking organisations as a near perfect solution to such security threats. Though biometric technology has gained traction in areas like healthcare and criminology, its application in banking security is still in its infancy. Due to the close association of biometrics to human, physical and behavioural aspects, such technologies pose a multitude of social, ethical and managerial challenges. The key success factors proposed through the case study served as a guideline for a biometric enabled security project called Bio Sec, which is envisaged in a large banking organisation in New Zealand. 
This pilot study reveals that, more than coping with the technological issues of integrating biometrics into existing information systems, formulating a viable security plan that addresses user privacy fears, human tolerance levels, organisational change, and legal issues is of prime importance. Originality/value Though biometric systems are successfully adopted in areas such as immigration control and criminology, there is a paucity of implementations and research pertaining to banking environments. Not all banks venture into biometric solutions to enhance their security systems, due to the socio-technological issues involved. This paper fulfils the need for a guideline to identify the various issues and success factors for a viable biometric implementation in a bank's access control system. This work is only a starting point for academics to conduct more research into the application of biometrics in the various facets of banking businesses.
The Use of Control Charts in Health-Care and Public-Health Surveillance
Standard control charts are often recommended for use in the monitoring and improvement of hospital performance. For example, one might monitor infection rates, rates of patient falls, or waiting times of various sorts. See, for example, Benneyan (1998a,b), Lee and McGreevey (2002a), or Benneyan et al. (2003). There are several books on this topic, including Carey (2003), reviewed by Woodall (2004), Hart and Hart (2002), and Morton (2005). The more standard uses of control charts in hospital applications are not reviewed here even though improvements are widely needed, as discussed by Millenson (1999), the Institute of Medicine (2000), and others. We also do not discuss the monitoring of health-related variables for individual patients, as recommended, for example, by Alemi and Neuhauser (2004).
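For the rate data typical of these applications (e.g. infections per 1000 patient-days), the standard Shewhart chart is the u-chart, whose 3-sigma limits widen in periods with less exposure. A minimal sketch; the counts and exposures below are invented:

```python
import math

def u_chart(counts, exposures):
    """u-chart for event counts with per-period exposure. Returns the
    centre line and per-period 3-sigma control limits; a point above its
    UCL signals special-cause variation worth investigating."""
    u_bar = sum(counts) / sum(exposures)            # centre line
    limits = []
    for n in exposures:
        sigma = math.sqrt(u_bar / n)                # Poisson-based sigma per period
        limits.append((max(0.0, u_bar - 3 * sigma), u_bar + 3 * sigma))
    return u_bar, limits

u_bar, limits = u_chart(counts=[5, 7, 4], exposures=[100.0, 100.0, 100.0])
```

The lower limit is clipped at zero because a negative rate is meaningless, a detail that matters for low-rate hospital indicators where the Poisson approximation is already strained.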
Chicken Farm Monitoring System
In this paper, a chicken farm monitoring system is proposed and developed based on a wireless communication unit that transfers data using a wireless module combined with sensors that detect temperature, humidity, light, and water level. The system focuses on collecting, storing, and controlling information about the chicken farm so that high quality and quantity of meat production can be achieved. It is developed to solve several problems in chicken farms: the many human workers needed to control the farm, high maintenance costs, and inaccurate data collected at a single point. The proposed methodology helped in finishing this project within the period given. Based on the research that has been carried out, a system that can monitor and control environmental conditions (temperature, humidity, and light) has been developed using an Arduino microcontroller. The system is also able to collect data and operate autonomously.
Efficient driver drowsiness detection at moderate levels of drowsiness.
Previous research on driver drowsiness detection has focused primarily on lane deviation metrics and high levels of fatigue. The present research sought to develop a method for detecting driver drowsiness at more moderate levels of fatigue, well before accident risk is imminent. Eighty-seven different driver drowsiness detection metrics proposed in the literature were evaluated in two simulated shift work studies with high-fidelity simulator driving in a controlled laboratory environment. Twenty-nine participants were subjected to a night shift condition, which resulted in moderate levels of fatigue; 12 participants were in a day shift condition, which served as control. Ten simulated work days in the study design each included four 30-min driving sessions, during which participants drove a standardized scenario of rural highways. Ten straight and uneventful road segments in each driving session were designated to extract the 87 different driving metrics being evaluated. The dimensionality of the overall data set across all participants, all driving sessions and all road segments was reduced with principal component analysis, which revealed that there were two dominant dimensions: measures of steering wheel variability and measures of lateral lane position variability. The latter correlated most with an independent measure of fatigue, namely performance on a psychomotor vigilance test administered prior to each drive. We replicated our findings across eight curved road segments used for validation in each driving session. Furthermore, we showed that lateral lane position variability could be derived from measured changes in steering wheel angle through a transfer function, reflecting how steering wheel movements change vehicle heading in accordance with the forces acting on the vehicle and the road. 
This is important given that traditional video-based lane tracking technology is prone to data loss when lane markers are missing, when weather conditions are bad, or in darkness. Our research findings indicated that steering wheel variability provides a basis for developing a cost-effective and easy-to-install alternative technology for in-vehicle driver drowsiness detection at moderate levels of fatigue.
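The dimensionality reduction step described above can be sketched with a plain SVD-based principal component analysis over a segments-by-metrics matrix. Everything below is synthetic: two latent factors stand in for steering-wheel and lane-position variability, and 87 correlated columns stand in for the evaluated metrics; none of the numbers come from the study.

```python
import numpy as np

def pca(X, k=2):
    """Principal component analysis via SVD of the standardized data.

    X: (n_samples, n_metrics) matrix, e.g. one row per road segment.
    Returns scores (projections), components, and explained-variance ratios.
    """
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)          # z-score each metric
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    var = S**2 / np.sum(S**2)                          # explained-variance ratio
    return Xs @ Vt[:k].T, Vt[:k], var[:k]

# Synthetic example: two latent factors drive 87 correlated metrics,
# mirroring the two dominant dimensions the study reports.
rng = np.random.default_rng(0)
steering = rng.normal(size=(500, 1))
lane = rng.normal(size=(500, 1))
X = np.hstack([steering, lane]) @ rng.normal(size=(2, 87))
X += 0.1 * rng.normal(size=(500, 87))

scores, components, var_ratio = pca(X, k=2)
```

With rank-2 structure plus small noise, the top two components capture nearly all the variance, which is the pattern the abstract describes for the real metrics.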
Interpretive front-of-pack nutrition labels. Comparing competing recommendations
Many stakeholders support introducing an interpretive front-of-pack (FOP) nutrition label, but disagree over the form it should take. In late 2012, an expert working group established by the New Zealand government recommended the adoption of an untested summary rating system: a Star label. This study used a best-worst scaling choice experiment to estimate how labels featuring the new Star rating, the Multiple Traffic Light (MTL), the Daily Intake Guide (DIG), and a no-FOP control affected consumers' choice behaviours and product perceptions. Nutrient-content and health claims were included in the design. We also assessed whether respondents who used more or less information during the choice tasks differed in their selection patterns. Overall, while respondents made broadly similar choices with respect to the MTL and Star labels, the MTL format had a significantly greater impact on depressing preference as a food's nutritional profile became less healthy. Health claims increased rankings of less nutritious options, though this effect was less pronounced when the products featured an MTL. Further, respondents were best able to differentiate products' healthiness with MTL labels. These results suggest that the proposed summary Stars system will have less effect on shifting food choice patterns than an interpretive FOP nutrition label featuring traffic light ratings, and our findings highlight the need for policy makers to ensure that decisions to introduce FOP labels are underpinned by robust research evidence.
Urban health and society : interdisciplinary approaches to research and practice
Preface. The Contributors. PART ONE: INTRODUCTION. One: Frameworks for Interdisciplinary Urban Health Research and Practice (Nicholas Freudenberg, Susan Klitzman, and Susan Saegert). Introduction. The Implications of Urban Life for Health. Levels and Types of Interdisciplinarity. Conundrums in Interdisciplinarity. Interdisciplinarity and Theories of Knowledge. Methodological Challenges and Approaches to Interdisciplinarity. Interdisciplinarity: Which Disciplines When?. Role Definitions in Interdisciplinary Research and Practice. Multiple Levels of Intervention. Summary. Two: Environmental Justice Praxis: Implications for Interdisciplinary Urban Public Health (Tom Angotti, and Julie Sze). Environmental Justice and Public Health. The Built Environment, Urban Planning, and Urban Public Health. Environmental and Social Justice, Interdisciplinarity, and the Politics of Knowledge. Asthma and the Environmental Justice Campaign for a Solid Waste Plan in New York City. Asian Immigrant and Refugee Organizing for Environmental Health and Housing in the Bay Area. Conclusion. Summary. PART TWO: INTERDISCIPLINARY APPROACHES TO STUDYING CAUSES OF URBAN HEALTH PROBLEMS. Three: Interdisciplinary, Participatory Research on Urban Retail Food Environments and Dietary Behaviors (Shannon N. Zenk, Amy J. Schulz, Angela M. Odoms-Young, and Murlisa Lockett). Introduction. Determinants of Retail Food Environments in Cities. Using CBPR to Understand Health Implications of Detroit's Food Environment. Directions for Future Research. Summary. Four: An Ecological Model of Urban Child Health (Kim T. Ferguson, Pilyoung Kim, James R. Dunn, and Gary W. Evans). Introduction. An Ecological Model. Bronfenbrenner's Bioecological Model. Influences on Children's Health in the Urban Context. Research Across Multiple Levels. Agenda for Future Research and Practice. Toward a Holistic Understanding of Urban Child Health. Summary.
Five: Geographic Information Systems, Environmental Justice, and Health Disparities (Juliana Maantay, Andrew R. Maroko, Carlos Alicea, and A. H. Strelnick). Introduction. Community-Based Participatory Research. Multilevel Models of Causation. Role of Geographic Information Systems. Environmental Justice and Health in the Bronx. Methods. Findings. Implications of Findings. Lessons on Interdisciplinary Approaches to Urban Health Research. Conclusion. Summary. Six: Racial Inequality in Health and Policy-Induced Breakdown of African American Communities (Arline T. Geronimus, and J. Phillip Thompson). Introduction. Racialized Ideologies: Developmentalism, Economism, and the American Creed. Implications for Public Policy. Building a Movement for Policy Reform. Summary. Seven: An Interdisciplinary and Social-Ecological Analysis of the U.S. Foreclosure Crisis as it Relates to Health (Susan Saegert, Kimberly Libman, Desiree Fields). Housing and Health: What's the Connection?. The Social Ecology of Foreclosure. The Research and Its Context. Focus Group Analysis and the Emergence of Health as an Issue. Foreclosure and Public Health. Neoliberalism, the Foreclosure Crisis, and Health Consequences. Summary. PART THREE: INTERDISCIPLINARY APPROACHES TO INTERVENTIONS TO PROMOTE URBAN HEALTH. Eight: Transdisciplinary Action Research on Teen Smoking Prevention (Juliana Fuqua, Daniel Stokols, Richard Harvey, Atusa Baghery, and Larry Jamner). Introduction. Review of Transdisciplinary Action Research. Transdisciplinary Action Research Cycle. Translating Transdisciplinary Research into Community Intervention and Policy. Factors Facilitating or Impeding Collaboration Among TPC Members. Implications and Lessons Learned from the TPC Study. Future Directions. Summary. Nine: How Vulnerabilities and Capacities Shape Population Health after Disasters (Craig Hadley, Sasha Rudenstine, and Sandro Galea). Social and Economic Determinants of Health After Disasters.
Humanitarian Crises in Angola and the Balkans. Hurricane Katrina. September 11, 2001, Terrorist Attacks on New York City. Implications for Prevention and Intervention. Summary. Ten: Immigrants and Urban Aging: Toward a Policy Framework (Marianne Fahs, Anahi Viladrich, and Nina S. Parikh). The New Urban Demography: Baby Boomers and Immigrants. Economic and Social Influences on Aging and Health Policy. Social and Environmental Considerations. Toward a Conceptual Framework. A Public Health Research and Policy Agenda. Summary. Eleven: Reversing the Tide of Type 2 Diabetes Among African Americans Through Interdisciplinary Research (Hollie Jones, and Leandris C. Liburd). A Dialogue Between Two Disciplines: Psychology and Medical Anthropology. Ethnic Identity and the Experience of Being African American with Type 2 Diabetes. Interdisciplinary Research Methods. Integrating Social Psychology and Medical Anthropology to Reduce the Burden of Diabetes. Summary. PART FOUR: PUTTING INTERDISCIPLINARY APPROACHES INTO PRACTICE. Twelve: Using Interdisciplinary Approaches To Strengthen Urban Health Research And Practice (Nicholas Freudenberg, Susan Klitzman, and Susan Saegert). Doing Interdisciplinary Research and Practice. Defining the Problem. Creating a Process for Interdisciplinary Work. Choosing Institutional and Community Partners. Influencing Policy and Practice. Evaluating Impact. Wanted: Interdisciplinary Researchers and Practitioners. Summary. Glossary. Index.
Bringing Portability to the Software Process
Portability is recognized as a desirable attribute for the vast majority of software products. Yet the literature on portability techniques is sparse and largely anecdotal, and portability is typically achieved by ad hoc methods. This paper proposes a framework for incorporating portability considerations into the software process. Unlike reuse, portability can be effectively attained for individual projects, both large and small. Maximizing portability, however, is more than an implementation detail; it requires reexamination of every phase of the software lifecycle. Here we identify issues and propose guidelines for increasing and exploiting portability during each of the key activities of software development and maintenance.
Effects of NaCl salinity on seedling growth, senescence, catalase and protease activities in two wheat genotypes differing in salt tolerance
Changes in seedling growth, senescence, protease activities and the possible involvement of a hydrogen peroxide-scavenging enzyme, catalase, in relation to salt tolerance were investigated in two wheat genotypes differing in salt tolerance. Three-day-old wheat seedlings were subjected to 5, 10 and 15 dS/m NaCl salinity for 6 days. The data showed that salt stress reduced growth and protein content, particularly at 15 dS/m NaCl salinity. Low salinity (5 dS/m) did not show marked effects, but under high NaCl stress growth was suppressed even in the tolerant genotype. The overall good growth of wheat cultivar Lu-26 at the seedling stage might be due to osmotic adjustment. Leaf senescence, seen as a decrease in leaf protein content, was only observed at the higher (15 dS/m) NaCl stress in the salt-sensitive wheat cultivar Pak-81. The magnitude of salt-induced proteolysis was manyfold higher in the sensitive genotype Pak-81 at 15 dS/m NaCl salinity. The prominent salt-induced senescence in leaves of cultivar Pak-81 was associated with higher salt sensitivity in terms of extensive proteolysis. Severe salt stress inhibited the antioxidative enzyme catalase, as revealed by spectrophotometric assay. Catalase activity decreased at all salinity levels in both wheat cultivars, signifying that high salinity generally reduced catalase activity irrespective of genotype. The results suggest that cv. Lu-26 exhibits a better protection mechanism against salinity, as indicated by lower salt-induced proteolysis and higher biomass accumulation and protein content than the relatively sensitive cv. Pak-81.
Quality measurement at intensive care units: which indicators should we use?
OBJECTIVE This study was conducted to develop a set of indicators that measure the quality of care in intensive care units (ICU) in Dutch hospitals and to evaluate the feasibility of the registration of these indicators. METHODS To define potential indicators for measuring quality, three steps were taken. First, a literature search was carried out to obtain peer-reviewed articles from 2000 to 2005 describing process or structure indicators in intensive care that are associated with patient outcome. Additional indicators were suggested by a panel of experts. Second, a selection of indicators was made by a panel of experts using a questionnaire and ranking in a consensus procedure. Third, a study was done for 6 months in 18 ICUs to evaluate the feasibility of using the identified quality indicators. Site visits, interviews, and written questionnaires were used to evaluate the use of indicators. RESULTS Sixty-two indicators were initially found, either in the literature or suggested by the members of the expert panel. From these, 12 indicators were selected by the expert panel by consensus. After the feasibility study, 11 indicators were eventually selected. "Interclinical transport," referring to a change of hospital, was dropped because of lack of reliability and support for further implementation by the participating hospitals in the study. The following structure indicators were selected: availability of an intensivist (hours per day), patient-to-nurse ratio, strategy to prevent medication errors, and measurement of patient/family satisfaction. Four process indicators were selected: length of ICU stay, duration of mechanical ventilation, proportion of days with all ICU beds occupied, and proportion of glucose measurements exceeding 8.0 mmol/L or lower than 2.2 mmol/L. The selected outcome indicators were as follows: standardized mortality (APACHE II), incidence of decubitus, and number of unplanned extubations.
The time needed to register the items varied from less than 30 minutes to more than 1 hour per day. Among other factors, this variation in workload was related to the availability of computerized systems to collect the data. CONCLUSION In this study, a set of 11 quality indicators for intensive care was defined based on literature research, expert opinion, and testing. The set gives a quick view of the quality of care in individual ICUs. The availability of a computerized data collection system is important for an acceptable workload.
Bone union rate with autologous iliac bone versus local bone graft in posterior lumbar interbody fusion (PLIF): a multicenter study
The purpose of this study is to compare the bone union rate between autologous iliac bone and local bone graft in patients treated by posterior lumbar interbody fusion (PLIF) using a carbon cage for single-level interbody fusion. The subjects were 106 patients who could be followed for at least 2 years. The diagnosis was lumbar spinal canal stenosis in 46 patients, herniated lumbar disk in 12 patients and degenerative spondylolisthesis in 51 patients. Single-level PLIF was done using iliac bone graft in 53 patients and local bone graft in 56 patients. The existence of pseudarthrosis on X-ray (AP and lateral views) was investigated over the same follow-up period. No significant differences were found in operation time or blood loss. Significant differences were also not observed in fusion grade at any follow-up point or in fusion progression between the two groups. Donor site pain continued for more than 3 months in five cases (9 %). The final fusion rate was 96.3 versus 98.3 %. Almost the same fusion results were obtained from both the local bone group and the autologous iliac bone group, and fusion progression was almost the same. Complications at donor sites were seen in 19 % of the cases. From these results, it was concluded that local bone graft is as beneficial as autologous iliac bone graft for single-level PLIF.
Reconfigurable Skyrmion Logic Gates.
The magnetic skyrmion, a nanosized spin texture with topological properties, has become an area of significant interest due to the scientific insight it can provide and its potential impact on applications such as ultra-low-energy and ultra-high-density logic gates. In the quest for the reconfigurability of a single logic device and the implementation of complete logic functions, a novel reconfigurable skyrmion logic (RSL) is proposed and verified by micromagnetic simulations. Logic functions including AND, OR, NOT, NAND, NOR, XOR, and XNOR are implemented in a ferromagnetic (FM) nanotrack by virtue of various effects including spin-orbit torque, the skyrmion Hall effect, skyrmion-edge repulsion, and skyrmion-skyrmion collision. Different logic functions can be selected in an RSL by applying voltage to specific region(s) of the device, changing the local anisotropy energy of the FM film. Material-property and geometrical-scaling studies suggest RSL gates are fit for energy-efficient computing and provide guidelines for the design and optimization of this new logic family.
Bimodal biometric system for hand shape and palmprint recognition based on SIFT sparse representation
The hand is considered one of the most popular biometric modalities, especially in forensic applications. In this paper, a bimodal hand identification system is proposed based on Scale Invariant Feature Transform (SIFT) descriptors extracted from the hand shape and palmprint modalities. A local sparse representation method is adopted in order to represent images with high discrimination. Moreover, fusion is performed at the feature and decision levels using a cascade fusion scheme in order to generate the final identification rate of the bimodal system. Our experiments were applied on two hand databases: the Indian Institute of Technology Delhi (IITD) hand database and the Bosphorus hand database, containing 230 and 615 subjects, respectively. The results show that the proposed method offers high accuracy compared with other popular bimodal hand biometric methods on the two hand databases. The correct identification rate reaches 99.57 %, which is competitive with systems existing in the literature.
Learning Multiagent Communication with Backpropagation
Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNet, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents, and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.
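The communication mechanism the abstract describes can be sketched in a few lines: each agent's hidden state is updated from its own state and the mean of the other agents' states, and because that averaging is differentiable, the protocol can be trained by backpropagation along with the policy. This is a minimal forward-pass sketch with invented weights and sizes, not the paper's full architecture.

```python
import numpy as np

def comm_step(h, W_h, W_c):
    """One communication step: agent i's new state is a function of its own
    hidden state and the mean of the other agents' states (the comm vector)."""
    n = h.shape[0]
    # c[i] = mean of all other agents' states; fully differentiable
    c = (h.sum(axis=0, keepdims=True) - h) / (n - 1)
    return np.tanh(h @ W_h + c @ W_c)

rng = np.random.default_rng(1)
n_agents, d = 4, 8
h0 = rng.normal(size=(n_agents, d))       # initial hidden states
W_h = rng.normal(size=(d, d)) * 0.1       # self-transition weights (invented)
W_c = rng.normal(size=(d, d)) * 0.1       # communication weights (invented)
h1 = comm_step(h0, W_h, W_c)
```

Stacking several such steps, with the weights shared across agents, gives the flavor of how a learned protocol can emerge purely from gradients on the task reward.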
LSTM CCG Parsing
We demonstrate that a state-of-the-art parser can be built using only a lexical tagging model and a deterministic grammar, with no explicit model of bi-lexical dependencies. Instead, all dependencies are implicitly encoded in an LSTM supertagger that assigns CCG lexical categories. The parser significantly outperforms all previously published CCG results, supports efficient and optimal A∗ decoding, and benefits substantially from semisupervised tri-training. We give a detailed analysis, demonstrating that the parser can recover long-range dependencies with high accuracy and that the semi-supervised learning enables significant accuracy gains. By running the LSTM on a GPU, we are able to parse over 2600 sentences per second while improving state-of-the-art accuracy by 1.1 F1 in domain and up to 4.5 F1 out of domain.
Stratigraphy and Succession of the Rocks of the Sierra Nevada of California
Introduction. General Character of the Sierra Rocks .—The great mass of the Sierra Nevada consists of crystalline rocks (granites) and highly metamorphosed, tilted and dislocated sedimentary and eruptive rocks. There are less metamorphosed strata of later age (Cretaceous and Tertiary) on the western flank at and near the foot of the range, and Tertiary and Quaternary lavas and sediments deposited by streams occur on the slopes and even on crests and peaks, especially of the northern half of the range. But the great mass of the range is made up of granites and of sedimentary and eruptive rocks so highly metamorphosed as to be quite generally designated as the metamorphic rocks of the Sierra. J. D. Whitney showed in his report on the geology of California, and added confirmation in his work on the auriferous gravels of the Sierra Nevada, that a portion of these metamorphic rocks are of Mesozoic . . .
DuraCap: A supercapacitor-based, power-bootstrapping, maximum power point tracking energy-harvesting system
DuraCap is a solar-powered energy harvesting system that stores harvested energy in supercapacitors and is voltage-compatible with lithium-ion batteries. The use of supercapacitors instead of batteries enables DuraCap to extend the operational lifetime from tens of months to tens of years. DuraCap addresses two additional problems with micro-solar systems: inefficient operation of supercapacitors during cold booting, and maximum power point tracking (MPPT) over a variety of solar panels. Our approach is to dedicate a smaller supercapacitor to cold booting before handing over to the array of larger-value supercapacitors. For MPPT, we designed a bound-control circuit for PFM regulator switching and an I-V tracer to enable self-configuration over the panel's aging process and replacement. Experimental results show the DuraCap system to achieve high conversion efficiency and minimal downtime.
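Maximum power point tracking itself can be illustrated with the classic perturb-and-observe hill climb, run here against an invented panel power curve; DuraCap's bound-control circuit and I-V tracer are more involved than this sketch.

```python
def perturb_and_observe(power, v0=5.0, step=0.1, iters=500):
    """Classic MPPT hill-climb: perturb the operating voltage and keep
    moving in the same direction while the measured power increases."""
    v, direction = v0, 1.0
    p_prev = power(v)
    for _ in range(iters):
        v += direction * step
        p = power(v)
        if p < p_prev:              # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy panel power curve with its maximum power point at 14 V (invented numbers)
toy_curve = lambda v: 50.0 - (v - 14.0) ** 2
v_mpp = perturb_and_observe(toy_curve)
```

In steady state the algorithm oscillates around the maximum power point with an amplitude set by the perturbation step, which is why real trackers tune or adapt that step.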
Evaluation of the desensitizing effect of Gluma Dentin Bond on teeth prepared for complete-coverage restorations.
This clinical trial assessed the ability of Gluma Dentin Bond to inhibit dentinal sensitivity in teeth prepared to receive complete cast restorations. Twenty patients provided 76 teeth for the study. Following tooth preparation, dentinal surfaces were coated with either sterile water (control) or two 30-second applications of Gluma Dentin Bond (test) on either intact or removed smear layers. Patients were recalled after 14 days for a test of sensitivity of the prepared dentin to compressed air, osmotic stimulus (saturated CaCl2 solution), and tactile stimulation via a scratch test under controlled loads. A significantly lower number of teeth responded to the test stimuli for both Gluma groups when compared to the controls (P less than .01). No difference was noted between teeth with smear layers intact or removed prior to treatment with Gluma.
Tutorial: Point Cloud Library: Three-Dimensional Object Recognition and 6 DOF Pose Estimation
With the advent of new-generation depth sensors, the use of three-dimensional (3-D) data is becoming increasingly popular. As these sensors are commodity hardware and sold at low cost, a rapidly growing group of people can acquire 3-D data cheaply and in real time.
Zipf’s Law in Passwords
Despite three decades of intensive research efforts, it remains an open question what the underlying distribution of user-generated passwords is. In this paper, we make a substantial step forward toward understanding this foundational question. By introducing a number of computational statistical techniques and based on 14 large-scale data sets, which consist of 113.3 million real-world passwords, we, for the first time, propose two Zipf-like models (i.e., PDF-Zipf and CDF-Zipf) to characterize the distribution of passwords. More specifically, our PDF-Zipf model can well fit the popular passwords and obtain a coefficient of determination larger than 0.97; our CDF-Zipf model can well fit the entire password data set, with the maximum cumulative distribution function (CDF) deviation between the empirical distribution and the fitted theoretical model being 0.49%-4.59% (1.85% on average). With this concrete knowledge of password distributions, we suggest a new metric for measuring the strength of password data sets. Extensive experimental results show the effectiveness and general applicability of the proposed Zipf-like models and security metric.
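The CDF-Zipf idea, that the cumulative frequency of the r-th most popular password follows C*r**s and is therefore a straight line on log-log axes, can be sketched with a least-squares fit in log space. The counts below are constructed to follow the model exactly (s = 0.2 is an invented exponent, not a value from the paper), so the fit should recover it, with the paper's max-CDF-deviation metric near zero.

```python
import numpy as np

# Synthetic password-frequency data constructed so the empirical CDF follows
# the CDF-Zipf model F(r) = C * r**s exactly (here s = 0.2 over N = 10^4 ranks).
N, s_true = 10_000, 0.2
ranks = np.arange(1, N + 1)
F = (ranks / N) ** s_true                 # target CDF at each rank
counts = np.diff(F, prepend=0.0)          # per-rank frequencies

# CDF-Zipf fitting: straight-line least squares in log-log space
cdf = np.cumsum(counts) / counts.sum()
s_fit, logC = np.polyfit(np.log(ranks), np.log(cdf), 1)
fitted = np.exp(logC) * ranks ** s_fit
max_dev = np.max(np.abs(fitted - cdf))    # the paper's max-CDF-deviation metric
```

On real password corpora the empirical CDF is only approximately log-log linear, so the fitted exponent and the deviation metric quantify how Zipf-like a given data set actually is.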
Ventilator associated pneumonia: evolving definitions and preventive strategies.
Ventilator-associated pneumonia (VAP) is one of the most frequent hospital-acquired infections in intubated patients. Because VAP is associated with higher mortality, morbidity, and costs, there is a need for further research into effective preventive measures. VAP has been proposed as an indicator of quality of care, but clinical diagnosis has been criticized for poor accuracy and reliability. The Centers for Disease Control and Prevention has therefore introduced a new definition based upon objective and recordable data. Institutions nowadays report zero VAP rates in surveillance programs, which is at odds with clinical data. This reduction has been highlighted in epidemiological studies, but it can only be attributed to a difference in patient selection, since no additional intervention was taken in these studies to modify pathogenic mechanisms. The principal determinant of VAP development is the presence of the endotracheal tube (ETT). Contaminated oropharyngeal secretions pool above the ETT cuff and subsequently leak down to the lungs along a hydrostatic gradient. Impaired mucociliary motility and cough reflex cannot provide proper clearance of secretions. Lastly, biofilm develops on the inner ETT surface and acts as a reservoir inoculating microorganisms into the lungs. New preventive strategies focus on improving secretion drainage and preventing bacterial colonization. The influence of gravity on mucus flow and body positioning can facilitate the clearance of distal airways, with decreased colonization of the respiratory tract. A different approach proposes ETT modifications to limit the leakage of oropharyngeal secretions: subglottic secretion drainage and cuff innovations have been investigated to reduce VAP incidence.
Moreover, coated ETTs have been shown to prevent biofilm formation, although there is evidence that ETT-clearance devices (Mucus Shaver) are required to preserve the antimicrobial properties over time. Here, after reviewing the most noteworthy issues in VAP definition and pathophysiology, we present the more interesting proposals for VAP prevention.
Noise Estimation from a Single Image
In order to work well, many computer vision algorithms require that their parameters be adjusted according to the image noise level, making it an important quantity to estimate. We show how to estimate an upper bound on the noise level from a single image based on a piecewise smooth image prior model and measured CCD camera response functions. We also learn the space of noise level functions (how the noise level changes with respect to brightness) and use Bayesian MAP inference to infer the noise level function from a single image. We illustrate the utility of this noise estimation for two algorithms: edge detection and feature-preserving smoothing through bilateral filtering. For a variety of different noise levels, we obtain good results for both these algorithms with no user-specified inputs.
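For comparison, a much simpler single-image noise estimate is often computed from a high-pass residual under an additive-Gaussian assumption. The sketch below uses a Laplacian-style kernel and a robust median estimator; this is a generic baseline technique, not the paper's Bayesian MAP method.

```python
import numpy as np

def estimate_noise_sigma(img):
    """Robust noise estimate: convolve with a Laplacian-like kernel to cancel
    smooth image structure, then scale the median absolute deviation of the
    residual. Assumes additive Gaussian noise on a mostly smooth image."""
    k = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], dtype=float)
    h, w = img.shape
    resid = np.zeros((h - 2, w - 2))
    for i in range(3):              # 3x3 convolution via shifted slices
        for j in range(3):
            resid += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    # On flat/linear regions resid is pure noise with std sigma * sqrt(sum(k^2));
    # median(|X|) = 0.6745 * std for a Gaussian, giving a robust estimator.
    return np.median(np.abs(resid)) / (0.6745 * np.sqrt((k ** 2).sum()))

rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0, 1, 128), (128, 1))      # smooth ramp image
noisy = clean + rng.normal(scale=0.05, size=clean.shape)
sigma_hat = estimate_noise_sigma(noisy)
```

The kernel annihilates constant and linear structure, so on this ramp the residual is noise only; on textured images such estimators overestimate, which is one motivation for the prior-based approach in the paper.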
A Study of the Influence of Gaming Behavior on Academic Performance of IT College Students
Video-game playing is popular among college students. Cognitive and negative consequences have been studied frequently. However, little is known about the influence of gaming behavior on IT college students’ academic performance. An increasing number of college students take online courses, use social network websites for social interactions, and play video games online. To analyze the relationship between college students’ gaming behavior and their academic performance, a research model is proposed and a survey study is conducted. The study result of a multiple regression analysis shows that self-control capability, social interaction using face-to-face or phone communications, and playing video games using a personal computer make statistically significant contributions to the IT college students’ academic performance measured by GPA.
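The kind of multiple regression analysis described can be sketched with ordinary least squares on a design matrix of the predictors. The data, variable names, and coefficients below are invented for illustration; they are not the study's survey results.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
# Synthetic predictors: self-control, social interaction, weekly gaming hours
self_control = rng.normal(size=n)
social = rng.normal(size=n)
gaming_hours = rng.normal(size=n)
# Invented "true" model: self-control and social interaction raise GPA,
# gaming hours have no effect in this synthetic data
gpa = (3.0 + 0.30 * self_control + 0.15 * social
       + rng.normal(scale=0.1, size=n))

# Design matrix with an intercept column; OLS via least squares
X = np.column_stack([np.ones(n), self_control, social, gaming_hours])
beta, *_ = np.linalg.lstsq(X, gpa, rcond=None)
```

Each entry of `beta` is the estimated contribution of one predictor holding the others fixed, which is how the study reads off which behaviors make statistically significant contributions to GPA.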
Variability in social reasoning: the influence of attachment security on the attribution of goals
Over the last half decade there has been a growing move to apply the methods and theory of cognitive development to questions regarding infants' social understanding. Though this combination has afforded exciting opportunities to better understand our species' unique social cognitive abilities, the resulting findings do not always lead to the same conclusions. For example, a growing body of research has found support for both universal similarity and individual differences in infants' social reasoning about others' responses to incomplete goals. The present research examines this apparent contradiction by assessing the influence of attachment security on the ability of university undergraduates to represent instrumental needs versus social-emotional distress. When the two varieties of goals were clearly differentiated, we observed a universally similar pattern of results (Experiments 1A/B). However, when the goals were combined, and both instrumental need and social-emotional distress were presented together, individual differences emerged (Experiments 2 and 3). Taken together, these results demonstrate that by integrating the two perspectives of shared universals and individual differences, important points of contact can be revealed supporting a deeper, more nuanced understanding of the nature of human social reasoning.
Motor skill performance and physical activity in preschool children.
Children with better-developed motor skills may find it easier to be active and engage in more physical activity (PA) than those with less-developed motor skills. The purpose of this study was to examine the relationship between motor skill performance and PA in preschool children. Participants were 80 three- and 118 four-year-old children. The Children's Activity and Movement in Preschool Study (CHAMPS) Motor Skill Protocol was used to assess process characteristics of six locomotor and six object control skills; scores were categorized as locomotor, object control, and total. The actigraph accelerometer was used to measure PA; data were expressed as percent of time spent in sedentary, light, moderate-to-vigorous PA (MVPA), and vigorous PA (VPA). Children in the highest tertile for total score spent significantly more time in MVPA (13.4% vs. 12.8% vs. 11.4%) and VPA (5% vs. 4.6% vs. 3.8%) than children in middle and lowest tertiles. Children in the highest tertile of locomotor scores spent significantly less time in sedentary activity than children in other tertiles and significantly more time in MVPA (13.4% vs. 11.6%) and VPA (4.9% vs. 3.8%) than children in the lowest tertile. There were no differences among tertiles for object control scores. Children with poorer motor skill performance were less active than children with better-developed motor skills. This relationship between motor skill performance and PA could be important to the health of children, particularly in obesity prevention. Clinicians should work with parents to monitor motor skills and to encourage children to engage in activities that promote motor skill performance.
Efficient Learning of Timeseries Shapelets
In timeseries classification, shapelets are subsequences of timeseries with high discriminative power. Existing methods perform a combinatorial search for shapelet discovery. Even with speedup heuristics such as pruning, clustering, and dimensionality reduction, the search remains computationally expensive. In this paper, we take an entirely different approach and reformulate the shapelet discovery task as a numerical optimization problem. In particular, the shapelet positions are learned by combining the generalized eigenvector method and a fused lasso regularizer to encourage a sparse and blocky solution. Extensive experimental results show that the proposed method is orders of magnitude faster than the state-of-the-art shapelet-based methods, while achieving comparable or even better classification accuracy.
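The primitive underlying any shapelet method, whether the shapelet is found by combinatorial search or learned numerically, is the distance between a shapelet and a series: the minimum mean squared distance over all sliding-window alignments. A minimal sketch of that generic primitive (not the paper's eigenvector/fused-lasso formulation):

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum mean squared distance between the shapelet and every
    equal-length sliding window of the series."""
    L = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, L)
    return np.min(np.mean((windows - shapelet) ** 2, axis=1))

# A series containing the shapelet scores (near) zero; a flat series does not,
# which is what makes the distance a discriminative classification feature.
shapelet = np.sin(np.linspace(0, np.pi, 20))
series_pos = np.concatenate([np.zeros(30), shapelet, np.zeros(30)])
series_neg = np.zeros(80)
d_pos = shapelet_distance(series_pos, shapelet)
d_neg = shapelet_distance(series_neg, shapelet)
```

Because this distance is differentiable in the shapelet values (the min aside), it can sit inside a numerical optimizer, which is the reformulation the abstract describes.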
A Motivational Model of Rural Students' Intentions To Persist In, versus Drop Out Of, High School.
Using self-determination theory, the authors tested a motivational model to explain the conditions under which rural students formulate their intentions to persist in, versus drop out of, high school. The model argues that motivational variables underlie students’ intentions to drop out and that students’ motivation can be either supported in the classroom by autonomy-supportive teachers or frustrated by controlling teachers. LISREL analyses of questionnaire data from 483 rural high school students showed that the provision of autonomy support within classrooms predicted students’ self-determined motivation and perceived competence. These motivational resources, in turn, predicted students’ intentions to persist, versus drop out, and they did so even after controlling for the effect of achievement.
Efficiency of three forward-pruning techniques in shogi: Futility pruning, null-move pruning, and Late Move Reduction (LMR)
The efficiency of three forward-pruning techniques, i.e., futility pruning, null-move pruning, and LMR, is analyzed in shogi, a Japanese chess variant. It is shown that these techniques, combined with alpha-beta pruning, reduce the effective branching factor of shogi endgames to 2.8 without sacrificing much accuracy of the search results. Because the average raw branching factor in shogi is around 80, the pruning techniques reduce the search space more effectively than in chess. (c) 2011 International Federation for Information Processing. Published by Elsevier B.V. All rights reserved.
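The flavor of these techniques can be seen on a toy game tree: plain alpha-beta pruning is sound, while futility pruning additionally skips frontier moves whose score plus a margin cannot raise alpha, trading a little accuracy for a smaller tree. The tree, scores, and margin below are invented, and the sketch is not shogi-specific.

```python
def minimax(node):
    """Full negamax search of a toy tree: lists are internal nodes,
    integers are leaf scores from the side to move's point of view."""
    if isinstance(node, int):
        return node
    return max(-minimax(c) for c in node)

def alphabeta(node, alpha=-10**9, beta=10**9):
    """Negamax with alpha-beta pruning: sound, returns the minimax value."""
    if isinstance(node, int):
        return node
    best = -10**9
    for child in node:
        best = max(best, -alphabeta(child, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:          # beta cutoff: opponent avoids this line
            break
    return best

def alphabeta_futility(node, alpha=-10**9, beta=10**9, margin=2):
    """Alpha-beta plus futility pruning at the frontier: a leaf whose score
    plus a margin cannot raise alpha is skipped without being searched.
    Unsound in general; the margin here is invented for illustration."""
    if isinstance(node, int):
        return node
    best = -10**9
    for child in node:
        if isinstance(child, int) and -child + margin <= alpha:
            continue               # futile move: cannot improve on alpha
        best = max(best, -alphabeta_futility(child, -beta, -alpha, margin))
        alpha = max(alpha, best)
        if alpha >= beta:
            break
    return best

tree = [[3, -2, 5], [-1, 4], [0, 7, -6]]
```

On this tree all three searches agree; the point of the paper's measurements is how much such forward pruning shrinks the effective branching factor in real shogi positions while rarely changing the result.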
Clustering on the Unit Hypersphere using von Mises-Fisher Distributions
Several large-scale data mining applications, such as text categorization and gene expression analysis, involve high-dimensional data that is also inherently directional in nature. Often such data is L2 normalized so that it lies on the surface of a unit hypersphere. Popular models such as (mixtures of) multivariate Gaussians are inadequate for characterizing such data. This paper proposes a generative mixture-model approach to clustering directional data based on the von Mises-Fisher (vMF) distribution, which arises naturally for data distributed on the unit hypersphere. In particular, we derive and analyze two variants of the Expectation Maximization (EM) framework for estimating the mean and concentration parameters of this mixture. Numerical estimation of the concentration parameters is non-trivial in high dimensions since it involves functional inversion of ratios of Bessel functions. We also formulate two clustering algorithms corresponding to the variants of EM that we derive. Our approach provides a theoretical basis for the use of cosine similarity that has been widely employed by the information retrieval community, and obtains the spherical kmeans algorithm (kmeans with cosine similarity) as a special case of both variants. Empirical results on clustering of high-dimensional text and gene-expression data based on a mixture of vMF distributions show that the ability to estimate the concentration parameter for each vMF component, which is not present in existing approaches, yields superior results, especially for difficult clustering tasks in high-dimensional spaces.
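The Bessel-function inversion the abstract mentions is commonly sidestepped with a closed-form approximation of the concentration parameter from the mean resultant length. A sketch under that assumption, with `estimate_vmf` as a hypothetical helper name:

```python
import numpy as np

def estimate_vmf(X):
    """Mean direction and approximate concentration for unit-norm rows X.

    Uses the closed-form kappa approximation
        kappa ~= rbar * (d - rbar**2) / (1 - rbar**2),
    where rbar is the mean resultant length, avoiding functional
    inversion of ratios of Bessel functions.
    """
    n, d = X.shape
    s = X.sum(axis=0)
    rbar = np.linalg.norm(s) / n        # mean resultant length in [0, 1)
    mu = s / np.linalg.norm(s)          # mean direction (unit vector)
    kappa = rbar * (d - rbar**2) / (1 - rbar**2)
    return mu, kappa

# Two nearly aligned unit vectors on the circle: tight cluster,
# so the estimated concentration should be large.
X = np.array([[1.0, 0.0], [np.cos(0.1), np.sin(0.1)]])
mu, kappa = estimate_vmf(X)
```

Note the approximation degenerates as rbar approaches 1 (identical points), so real implementations clip or regularize it.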
Strength in Numbers: How does data-driven decision-making affect firm performance?
We examine whether performance is higher in firms that emphasize decision-making based on data and business analytics (which we term a data-driven decision-making approach, or DDD). Using detailed survey data on the business practices and information technology investments of 179 large publicly traded firms, we find that firms that adopt DDD have output and productivity that is 5-6% higher than would be expected given their other investments and information technology usage. Using instrumental variables methods, we find evidence that these effects do not appear to be due to reverse causality. Furthermore, the relationship between DDD and performance also appears in other performance measures, such as asset utilization, return on equity, and market value. Our results provide some of the first large-scale data on the direct connection between data-driven decision-making and firm performance.
LOSSCALC V2: DYNAMIC PREDICTION OF LGD
LossCalc™ version 2.0 is the Moody's KMV model to predict loss given default (LGD), or (1 − recovery rate). Lenders and investors use LGD to estimate future credit losses. LossCalc is a robust and validated model of LGD for loans, bonds, and preferred stocks for the US, Canada, the UK, Continental Europe, Asia, and Latin America. It projects LGD for defaults occurring immediately and for defaults that may occur in one year. LossCalc is a statistical model that incorporates information at different levels (collateral, instrument, firm, industry, country, and the macroeconomy) to predict LGD. It significantly improves on the use of historical recovery averages to predict LGD, helping institutions to better price and manage credit risk. LossCalc is built on a global dataset of 3,026 recovery observations for loans, bonds, and preferred stock from 1981-2004. This dataset includes over 1,424 defaults of both public and private firms, covering both rated and unrated instruments, in all industries. LossCalc will help institutions better manage their credit risk and can play a critical role in meeting the Basel II requirements on the advanced Internal Ratings-Based approach. This paper describes Moody's KMV LossCalc, its predictive factors, the modeling approach, and its out-of-time and out-of-sample model validation. Authors: Greg M. Gupton and Roger M. Stein.
Comparison of manual therapy and exercise therapy in osteoarthritis of the hip: a randomized clinical trial.
OBJECTIVE To determine the effectiveness of a manual therapy program compared with an exercise therapy program in patients with osteoarthritis (OA) of the hip. METHODS A single-blind, randomized clinical trial of 109 hip OA patients was carried out in the outpatient clinic for physical therapy of a large hospital. The manual therapy program focused on specific manipulations and mobilization of the hip joint. The exercise therapy program focused on active exercises to improve muscle function and joint motion. The treatment period was 5 weeks (9 sessions). The primary outcome was general perceived improvement after treatment. Secondary outcomes included pain, hip function, walking speed, range of motion, and quality of life. RESULTS Of 109 patients included in the study, 56 were allocated to manual therapy and 53 to exercise therapy. No major differences were found on baseline characteristics between groups. Success rates (primary outcome) after 5 weeks were 81% in the manual therapy group and 50% in the exercise group (odds ratio 1.92, 95% confidence interval 1.30, 2.60). Furthermore, patients in the manual therapy group had significantly better outcomes on pain, stiffness, hip function, and range of motion. Effects of manual therapy on the improvement of pain, hip function, and range of motion endured after 29 weeks. CONCLUSION The effect of the manual therapy program on hip function is superior to the exercise therapy program in patients with OA of the hip.
NORTH AMERICAN NORTHERN LANDS
Few people would doubt that northern North America needs more people. Nor is there much question that the region can support and will have a greater population in the near future. Further, the usual assumption is that the additional people should or would be permanent rather than temporary inhabitants. It is necessary therefore to consider the human geography, or the locational characteristics, of the present and future population distribution. This analysis reveals the significance of the relative locations of people to people and land to people. In this broad field, the following topics and problems have been selected to demonstrate the great range and promise of such research in northern North America.