Conformal Prediction Using Decision Trees
Conformal prediction is a relatively new framework in which predictive models output sets of predictions with a bound on the error rate, i.e., in a classification context, the probability of excluding the correct class label is lower than a predefined significance level. An investigation of the use of decision trees within the conformal prediction framework is presented, with the overall purpose of determining the effect of different algorithmic choices, including the split criterion, the pruning scheme and the method for calculating the probability estimates. Since the error rate is bounded by the framework, the most important property of conformal predictors is efficiency, which concerns minimizing the number of elements in the output prediction sets. Results from one of the largest empirical investigations to date within the conformal prediction framework are presented, showing that in order to optimize efficiency, the decision trees should be induced using no pruning and with smoothed probability estimates. The choice of split criterion used for the actual induction of the trees did not turn out to have any major impact on efficiency. Finally, the experimentation also showed that when using decision trees, standard inductive conformal prediction was as efficient as the recently suggested cross-conformal prediction method. This is an encouraging result, since cross-conformal prediction uses several decision trees, thus sacrificing the interpretability of a single decision tree.
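The inductive conformal prediction procedure the abstract evaluates can be illustrated with a minimal sketch. The `toy_proba` estimator, the nonconformity score (1 minus the estimated class probability, a common choice), and all parameter values below are assumptions for illustration, not the paper's setup:

```python
# Minimal inductive conformal prediction (ICP) sketch.
# Assumption: a generic predict_proba(x) -> dict of class probabilities
# stands in for a decision tree's smoothed leaf estimates.

def nonconformity(predict_proba, x, label):
    """Higher score = example looks stranger under this label."""
    return 1.0 - predict_proba(x).get(label, 0.0)

def icp_prediction_set(predict_proba, calibration, x_new, labels, significance):
    """Return the set of labels whose p-value exceeds the significance level.

    calibration: list of (x, true_label) pairs held out from training.
    """
    # Nonconformity scores of the calibration examples under their true labels.
    cal_scores = [nonconformity(predict_proba, x, y) for x, y in calibration]
    prediction_set = set()
    for label in labels:
        score = nonconformity(predict_proba, x_new, label)
        # p-value: fraction of calibration scores at least as large.
        p = (sum(1 for s in cal_scores if s >= score) + 1) / (len(cal_scores) + 1)
        if p > significance:
            prediction_set.add(label)
    return prediction_set

# Toy probability estimator standing in for a decision tree.
def toy_proba(x):
    return {"pos": 0.9, "neg": 0.1} if x > 0 else {"pos": 0.2, "neg": 0.8}

calib = [(1.0, "pos"), (2.0, "pos"), (-1.0, "neg"), (-2.0, "neg"), (0.5, "pos")]
print(icp_prediction_set(toy_proba, calib, 1.5, ["pos", "neg"], 0.2))
```

Because the p-value computation only ranks calibration scores, any underlying model yields valid coverage; the model's quality affects only the size (efficiency) of the prediction sets, which is why the abstract focuses on that property.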
Intraperitoneal catheter outcomes in a phase III trial of intravenous versus intraperitoneal chemotherapy in optimal stage III ovarian and primary peritoneal cancer: a Gynecologic Oncology Group Study.
OBJECTIVES To evaluate reasons for discontinuing intraperitoneal (IP) chemotherapy, and to compare characteristics of patients who did versus did not successfully complete six cycles of IP chemotherapy. METHODS In a phase III trial, women with optimal stage III ovarian or peritoneal carcinoma were randomly allocated to receive IP therapy (paclitaxel 135 mg/m² intravenously (IV) over 24 h, cisplatin 100 mg/m² IP day 2, paclitaxel 60 mg/m² IP day 8) every 21 days for six cycles. Patients unable to receive IP therapy were treated with the alternate (IV) regimen. Variables compared included surgical procedures prior to enrollment, timing of IP catheter insertion, and primary and contributing reasons for discontinuing IP therapy. RESULTS Among 205 eligible patients randomly allocated to the IP arm, 119 (58%) did not complete six cycles of IP therapy. Forty (34%) patients discontinued IP therapy primarily due to catheter complications and 34 (29%) discontinued for unrelated reasons. Hysterectomy, appendectomy, small bowel resection, and ileocecal resection were not associated with failure to complete six cycles. IP therapy was not initiated in 16% of patients who did versus 5% of those who did not have a left colon or rectosigmoid colon resection (P = 0.015). There was no association between timing of catheter insertion and failure to complete IP therapy. CONCLUSIONS In this multi-institutional setting, it was difficult to deliver six cycles of IP therapy without complications. There appears to be an association between rectosigmoid colon resection and the inability to initiate IP therapy. Catheter choice, timing of insertion, and how surgical treatment of ovarian cancer influences the successful completion of intraperitoneal chemotherapy require further study.
Computational approaches to motor control
This review will focus on four areas of motor control which have recently been enriched both by neural network and control system models: motor planning, motor prediction, state estimation and motor learning. We will review the computational foundations of each of these concepts and present specific models which have been tested by psychophysical experiments. We will cover the topics of optimal control for motor planning, forward models for motor prediction, observer models of state estimation and modular decomposition in motor learning. The aim of this review is to demonstrate how computational approaches, as well as proposing specific models, provide a theoretical framework to formalize the issues in motor control.
Semantic Frame-Based Document Representation for Comparable Corpora
Document representation is a fundamental problem for text mining. Much effort has been devoted to generating concise yet semantic representations, such as bag-of-words, phrase, sentence and topic-level descriptions. Nevertheless, most existing techniques encounter difficulties in handling a monolingual comparable corpus, which is a collection of monolingual documents conveying the same topic. In this paper, we propose the use of frames, high-level semantic units, and construct frame-based representations to semantically describe documents as bags of frames, using an information network approach. One major challenge in this representation is that semantically similar frames may take different forms. For example, "radiation leaked" in one news article can appear as "the level of radiation increased" in another article. To tackle this problem, a text-based information network is constructed among frames and words, and a link-based similarity measure called SynRank is proposed to calculate similarity between frames. As a result, different variations of semantically similar frames are merged into a single descriptive frame using clustering, and a document can then be represented as a bag of representative frames. It turns out that frame-based document representation not only is more interpretable, but also can effectively facilitate other text analysis tasks such as event tracking. We conduct both qualitative and quantitative experiments on three comparable news corpora to study the effectiveness of frame-based document representation and the similarity measure SynRank, respectively, and demonstrate the superior performance of frame-based document representation on different real-world applications.
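The abstract defines SynRank only at a high level. As a heavily simplified stand-in (not the SynRank algorithm itself), the sketch below scores two frames by the Jaccard overlap of the words they link to in a frame-word network; the example frames are invented:

```python
# Stand-in for link-based frame similarity: score two frames by the
# Jaccard overlap of the words they are linked to. SynRank is a more
# sophisticated link-based measure; this only illustrates the idea that
# frames sharing many linked words should be considered similar.

def frame_similarity(frame_words_a, frame_words_b):
    a, b = set(frame_words_a), set(frame_words_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Two surface variants of the same underlying event (invented example).
f1 = ["radiation", "leaked", "plant"]
f2 = ["radiation", "level", "increased", "plant"]
print(frame_similarity(f1, f2))
```

Frames whose similarity exceeds a threshold would then be clustered and replaced by a single representative frame, yielding the bag-of-frames representation.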
Towards Literate Artificial Intelligence
Speaker: Mrinmaya Sachan Thesis Committee: Eric P. Xing, Chair Jaime Carbonell Tom Mitchell Dan Roth (University of Pennsylvania) Thesis Proposal Jan. 22, 2018 1:00pm 8102 GHC Link to draft document: http://www.cs.cmu.edu/~mrinmays/thesis_proposal.pdf Standardized tests are often used to test students as they progress in the formal education system. These tests are widely available and measurable with clear evaluation procedures and metrics. Hence, they can serve as good tests for AI. We propose approaches for solving some of these tests. We broadly categorize these tests into two categories: open domain question answering tests such as reading comprehensions and elementary school science tests, and closed domain question answering tests such as intermediate or advanced math and science tests. We present an alignment-based approach with multi-task learning for the former. For closed domain tests, we propose a parsing-to-programs approach which can be seen as a natural language interface to expert systems. We also describe approaches for question generation based on instructional material in both open domain and closed domain settings. Finally, we show that we can improve both the question answering and question generation models by learning them jointly. This mechanism also allows us to leverage cheap unlabelled data for learning the two models. Our work can potentially be applied for social good in the education domain. We performed studies on human subjects, who found our approaches useful as assistive tools in education.
Energy Efficient Cooperative Computing in Mobile Wireless Sensor Networks
Advances in computing to support emerging sensor applications are becoming more important as the need grows to better utilize computation and communication resources and to make them energy efficient. As a result, it is predicted that intelligent devices and networks, including mobile wireless sensor networks (MWSN), will become the new interfaces to support future applications. In this paper, we propose a novel approach to minimize the energy consumption of processing an application in an MWSN while satisfying a certain completion time requirement. Specifically, by introducing the concept of cooperation, the logic and related computation tasks can be optimally partitioned, offloaded and executed with the help of peer sensor nodes; thus the proposed solution can be treated as a joint optimization of computing and networking resources. Moreover, for a network with multiple mobile wireless sensor nodes, we propose energy-efficient cooperation node selection strategies to offer a tradeoff between fairness and energy consumption. Our performance analysis is supplemented by simulation results that show the significant energy savings of the proposed solution.
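The joint partition-and-offload optimization described above can be sketched in miniature: each task either runs locally or is offloaded to a peer node, and we enumerate assignments to minimize total energy subject to a completion-time deadline. The cost model (additive energies and times) and all numbers are illustrative assumptions, not the paper's formulation:

```python
# Toy energy-minimizing task partition: brute-force over local/offload
# assignments, keeping the cheapest one that meets the deadline.
# Assumption: energies and times simply add up (no parallel execution).
from itertools import product

def best_partition(tasks, deadline):
    """tasks: list of dicts with local/offload energy and time costs.

    Returns (energy, assignment) for the cheapest feasible assignment,
    or None if no assignment meets the deadline.
    """
    best = None
    for assignment in product(("local", "offload"), repeat=len(tasks)):
        energy = sum(t[f"{where}_energy"] for t, where in zip(tasks, assignment))
        time = sum(t[f"{where}_time"] for t, where in zip(tasks, assignment))
        if time <= deadline and (best is None or energy < best[0]):
            best = (energy, assignment)
    return best

tasks = [
    {"local_energy": 5.0, "local_time": 1.0, "offload_energy": 1.5, "offload_time": 3.0},
    {"local_energy": 4.0, "local_time": 1.0, "offload_energy": 1.0, "offload_time": 3.0},
]
print(best_partition(tasks, deadline=5.0))
```

Offloading trades energy for time here, so tightening the deadline pushes the optimum back toward local execution; the paper's contribution is solving this kind of tradeoff jointly with node selection rather than by exhaustive search.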
Structure-Function Analysis of DipA, a Francisella tularensis Virulence Factor Required for Intracellular Replication
Francisella tularensis is a highly infectious bacterium whose virulence relies on its ability to rapidly reach the macrophage cytosol and extensively replicate in this compartment. We previously identified a novel Francisella virulence factor, DipA (FTT0369c), which is required for intramacrophage proliferation and survival, and virulence in mice. DipA is a 353 amino acid protein with a Sec-dependent signal peptide, four Sel1-like repeats (SLR), and a C-terminal coiled-coil (CC) domain. Here, we determined through biochemical and localization studies that DipA is a membrane-associated protein exposed on the surface of the prototypical F. tularensis subsp. tularensis strain SchuS4 during macrophage infection. Deletion and substitution mutagenesis showed that the CC domain, but not the SLR motifs, of DipA is required for surface exposure on SchuS4. Complementation of the dipA mutant with either DipA CC or SLR domain mutants did not restore intracellular growth of Francisella, indicating that proper localization and the SLR domains are required for DipA function. Co-immunoprecipitation studies revealed interactions with the Francisella outer membrane protein FopA, suggesting that DipA is part of a membrane-associated complex. Altogether, our findings indicate that DipA is positioned at the host-pathogen interface to influence the intracellular fate of this pathogen.
A Vector Space Approach for Aspect Based Sentiment Analysis
Vector representations for language have been shown to be useful in a number of Natural Language Processing (NLP) tasks. In this thesis, we aim to investigate the effectiveness of word vector representations for the research problem of Aspect-Based Sentiment Analysis (ABSA), which attempts to capture both semantic and sentiment information encoded in user-generated content such as product reviews. In particular, we target three ABSA sub-tasks: aspect term extraction, aspect category detection, and aspect sentiment prediction. We investigate the effectiveness of vector representations over different text data, and evaluate the quality of domain-dependent vectors. We utilize vector representations to compute various vector-based features and conduct extensive experiments to demonstrate their effectiveness. Using simple vector-based features, we achieve F1 scores of 79.9% for aspect term extraction, 86.7% for category detection, and 72.3% for aspect sentiment prediction. Co-Thesis Supervisor: James Glass (Senior Research Scientist); Co-Thesis Supervisor: Mitra Mohtarami (Postdoctoral Associate)
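One simple vector-based feature of the kind the thesis describes can be sketched as follows: represent a candidate term by the average of its word vectors and score it against an aspect-category centroid by cosine similarity. The tiny two-dimensional vectors below are invented for illustration, not trained embeddings:

```python
# Averaged word-vector feature for aspect category detection (sketch).
# Assumption: `emb` stands in for trained word embeddings; real systems
# would use high-dimensional vectors learned from review text.
import math

def avg_vector(words, embeddings):
    """Average the vectors of the in-vocabulary words."""
    dim = len(next(iter(embeddings.values())))
    vecs = [embeddings[w] for w in words if w in embeddings]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

emb = {"battery": [1.0, 0.1], "life": [0.9, 0.2], "screen": [0.1, 1.0]}
# Centroid of terms previously labeled with a hypothetical "power" aspect.
power_centroid = avg_vector(["battery", "life"], emb)
print(cosine(avg_vector(["battery"], emb), power_centroid))
```

A classifier would consume such similarity scores (one per candidate category) as features, alongside surface features of the term itself.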
The molecular cell death machinery in the simple cnidarian Hydra includes an expanded caspase family and pro- and anti-apoptotic Bcl-2 proteins
The fresh water polyp Hydra belongs to the phylum Cnidaria, which diverged from the metazoan lineage before the appearance of bilaterians. In order to understand the evolution of apoptosis in metazoans, we have begun to elucidate the molecular cell death machinery in this model organism. Based on ESTs and the whole Hydra genome assembly, we have identified 15 caspases. We show that one is activated during apoptosis, four have characteristics of initiator caspases with N-terminal DED, CARD or DD domain and two undergo autoprocessing in vitro. In addition, we describe seven Bcl-2-like and two Bak-like proteins. For most of the Bcl-2 family proteins, we have observed mitochondrial localization. When expressed in mammalian cells, HyBak-like 1 and 2 strongly induced apoptosis. Six of the Bcl-2 family members inhibited apoptosis induced by camptothecin in mammalian cells, with HyBcl-2-like 4 showing an especially strong protective effect. This protein also interacted with HyBak-like 1 in a yeast two-hybrid assay. Mutation of the conserved leucine in its BH3 domain abolished both the interaction with HyBak-like 1 and the anti-apoptotic effect. Moreover, we describe novel Hydra BH3-only proteins. One of these interacted with Bcl-2-like 4 and induced apoptosis in mammalian cells. Our data indicate that a complex network for cell death regulation arose at the earliest and simplest level of multicellular organization, where it exhibited a substantially higher level of complexity than in the protostome model organisms Caenorhabditis and Drosophila.
Privacy-Preserving User-Auditable Pseudonym Systems
Personal information is often gathered and processed in a decentralized fashion. Examples include health records and governmental databases. To protect the privacy of individuals, no unique user identifier should be used across the different databases. At the same time, the utility of the distributed information needs to be preserved, which requires that it nevertheless be possible to link different records if they relate to the same user. Recently, Camenisch and Lehmann (CCS 15) have proposed a pseudonym scheme that addresses this problem with domain-specific pseudonyms. Although unlinkable, these pseudonyms can be converted by a central authority (the converter). To protect the users' privacy, conversions are done blindly, without the converter learning the pseudonyms or the identity of the user. Unfortunately, their scheme sacrifices a crucial privacy feature: transparency. Users are no longer able to inquire with the converter and audit the flow of their personal data. Indeed, such auditability appears to be diametrically opposed to the goal of blind pseudonym conversion. In this paper we address these seemingly conflicting requirements and provide a system where user-centric audit logs are created by the oblivious converter while maintaining all privacy properties. We prove our protocol to be UC-secure and give an efficient instantiation using novel building blocks.
A double-blind, placebo-controlled study of sertraline with naltrexone for alcohol dependence.
INTRODUCTION Significant preclinical evidence exists for a synergistic interaction between the opioid and the serotonin systems in determining alcohol consumption. Naltrexone, an opiate receptor antagonist, is approved for the treatment of alcohol dependence. This double-blind placebo-controlled study examined whether the efficacy of naltrexone would be augmented by concurrent treatment with sertraline, a selective serotonin reuptake inhibitor (SSRI). METHODS One hundred and thirteen participants meeting DSM-IV alcohol dependence criteria, who were abstinent from alcohol between 5 and 30 days, were randomly assigned to receive one of two treatments at two sites. One group received naltrexone 12.5 mg once daily for 3 days, 25 mg once daily for 4 days, and 50 mg once daily for the next 11 weeks, together with placebo sertraline. The other group received naltrexone as outlined and simultaneously received sertraline 50 mg once daily for 2 weeks, followed by 100 mg once daily for 10 weeks. Both groups received group relapse prevention psychotherapy on a weekly basis. RESULTS Compliance and attendance rates were comparable and high. The groups did not differ on the two primary outcomes, time to first drink and time to relapse to heavy drinking, or on secondary treatment outcomes. With the exception of sexual side effects, which were more common in the combination group, most adverse events were similar for the two conditions. CONCLUSIONS At the doses tested, and in combination with specialized behavioral therapy, this study does not provide sufficient evidence for the combined use of sertraline and naltrexone over naltrexone alone.
On Multi-Layer Basis Pursuit, Efficient Algorithms and Convolutional Neural Networks
Parsimonious representations are ubiquitous in modeling and processing information. Motivated by the recent Multi-Layer Convolutional Sparse Coding (ML-CSC) model, we herein generalize the traditional Basis Pursuit problem to a multi-layer setting, introducing similar sparsity-enforcing penalties at different representation layers in a symbiotic relation between synthesis and analysis sparse priors. We explore different iterative methods to solve this new problem in practice, and we propose a new Multi-Layer Iterative Soft Thresholding Algorithm (ML-ISTA), as well as a fast version (ML-FISTA). We show that these nested first-order algorithms converge, in the sense that the function value at near-fixed points can get arbitrarily close to the solution of the original problem. We further show how these algorithms effectively implement particular recurrent convolutional neural networks (CNNs) that generalize feed-forward ones without introducing any parameters. We present and analyze different architectures resulting from unfolding the iterations of the proposed pursuit algorithms, including a new Learned ML-ISTA, providing a principled way to construct deep recurrent CNNs. Unlike other similar constructions, these architectures unfold a global pursuit holistically for the entire network. We demonstrate the emerging constructions in a supervised learning setting, consistently improving the performance of classical CNNs while keeping the number of parameters constant.
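The single-layer building block that ML-ISTA nests across layers is the classical ISTA iteration for min_x ½‖Ax − b‖² + λ‖x‖₁: a gradient step on the smooth term followed by soft thresholding. A plain-Python sketch (not the authors' implementation), with a trivially small identity dictionary chosen for illustration:

```python
# Classical (single-layer) ISTA sketch: gradient step on 0.5*||Ax - b||^2
# followed by the proximal step for lam*||x||_1 (soft thresholding).
# ML-ISTA applies this pattern across nested representation layers.

def soft_threshold(x, t):
    """Elementwise shrinkage: prox of t*||.||_1."""
    return [max(abs(v) - t, 0.0) * (1 if v >= 0 else -1) for v in x]

def ista(A, b, lam, step, iters):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Gradient of the smooth term: A^T (A x - b).
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        r = [Ax[i] - b[i] for i in range(m)]
        grad = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Gradient step, then the proximal (soft-thresholding) step.
        x = soft_threshold([x[j] - step * grad[j] for j in range(n)], step * lam)
    return x

A = [[1.0, 0.0], [0.0, 1.0]]  # identity dictionary for illustration
b = [1.0, 0.05]
x = ista(A, b, lam=0.1, step=1.0, iters=50)
print([round(v, 3) for v in x])
```

With an identity dictionary the solution is simply the soft-thresholded measurement, so the small coefficient (0.05 < λ) is driven exactly to zero while the large one is shrunk by λ; the paper's ML-ISTA replaces this single layer with a cascade of convolutional layers sharing this same update structure.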
Sentence Level Discourse Parsing using Syntactic and Lexical Information
We introduce two probabilistic models that can be used to identify elementary discourse units and build sentence-level discourse parse trees. The models use syntactic and lexical features. A discourse parsing algorithm that implements these models derives discourse parse trees with an error reduction of 18.8% over a state-of-the-art decision-based discourse parser. A set of empirical evaluations shows that our discourse parsing model is sophisticated enough to yield discourse trees at an accuracy level that matches near-human levels of performance.
Effect of anesthesia and cerebral blood flow on neuronal injury in a rat middle cerebral artery occlusion (MCAO) model
Middle cerebral artery occlusion (MCAO) models have become well established as the most suitable way to simulate stroke in experimental studies. The high variability in the size of the resulting infarct due to filament composition, rodent strain and vessel anatomy makes the setup of such models very complex. Besides controllable variables of homeostasis, the choice of anesthetic and the grade of ischemia and reperfusion play a major role in the extent of neurological injury. Transient MCAO was induced during either isoflurane or ketamine/xylazine (ket/xyl) anesthesia with simultaneous measurement of cerebral blood flow (CBF) in 60 male Wistar rats (380–420 g). Neurological injury was quantified after 24 h. Isoflurane compared with ket/xyl reduced mortality 24 h after MCAO (10 vs. 50%, p = 0.037) and predominantly led to striatal infarcts (78 vs. 18%, p = 0.009) without involvement of the neocortex and medial caudoputamen. Independent of anesthesia type, cortical infarcts could be predicted with a sensitivity of 67% and a specificity of 100% if CBF did not exceed 35% of the baseline value during ischemia. In all other cases, cortical infarcts developed if the reperfusion values remained below 50%. Hyperemia during reperfusion significantly increased infarct and edema volumes. The frequency of striatal infarcts after isoflurane anesthesia might be attributed to an improved CBF during ischemia (46 ± 15% vs. 35 ± 19%, p = 0.04). S-100β release, edema volume and upregulation of IL-6 and IL-1β expression were impeded by isoflurane. Thus, anesthetic management as well as the grade of ischemia and reperfusion after transient MCAO demonstrated important effects on neurological injury.
Arginine-rich cell-penetrating peptides.
Arginine-rich cell-penetrating peptides are short cationic peptides capable of traversing the plasma membranes of eukaryotic cells. While successful intracellular delivery of many biologically active macromolecules has been accomplished using these peptides, their mechanisms of cell entry are still under investigation. Recent dialogue has centered on a debate over the roles that direct translocation and endocytotic pathways play in internalization of cell-penetrating peptides. In this paper, we review the evidence for the broad range of proposed mechanisms, and show that each distinct process requires negative Gaussian membrane curvature as a necessary condition. Generation of negative Gaussian curvature by cell-penetrating peptides is directly related to their arginine content. We illustrate these concepts using HIV TAT as an example.
Software project initiation and planning - an empirical study
This paper describes a study of 14 software companies, on how they initiate and pre-plan software projects. The aim was to obtain an indication of the range of planning activities carried out. The study, using a convenience sample, was carried out using structured interviews, with questions about early software project planning activities. The study offers evidence that an iterative and incremental development process presents extra difficulties in the case of fixed-contract projects. We also found evidence that feasibility studies were common, but generally informal in nature. Documentation of the planning process, especially for project scoping, was variable. For iterative and incremental development projects, an upfront decision on software architecture was shown to be preferred over allowing the architecture to just 'emerge'. There is also evidence that risk management is recognised but often performed incompletely. Finally, appropriate future research arising from the study is described.
Nigrostriatal dopaminergic function in subjects with isolated action tremor.
BACKGROUND Isolated action tremor (IAT) is the hallmark clinical feature of essential tremor (ET), but it may also be a prominent feature of some individuals with Parkinson's disease (PD), suggesting a pathogenic relationship between these two disorders. OBJECTIVES We investigated the integrity of the striatal presynaptic dopaminergic system in subjects presenting IAT to improve diagnostic accuracy and to explore any putative relationships between ET and PD. METHODS The striatal dopaminergic system was examined by means of dopamine transporter imaging using [123I]-(fluoropropyl)-2β-carbomethoxy-3β-(4-iodophenyl)nortropane ([123I]-FP-CIT) single-photon emission tomography (DAT-SPECT) in a clinical series of individuals with IAT, excluding those with associated resting tremor and bradykinesia. RESULTS Among 167 incidental individuals with IAT eligible for DAT-SPECT (Male/Female = 58.9/41.1%; Age = 67.8 ± 14.3 years), reduced striatal uptake was observed in 114 out of 167 (68.3%), whereas normal striatal binding was observed in the remaining 53 subjects (31.7%). Onset of tremor after 50 years and asymmetrical distribution of tremor were predictive variables of nigrostriatal denervation, whereas gender, family history and the presence of intentional, cephalic or voice tremors were not associated with nigrostriatal denervation. CONCLUSIONS Our findings suggest that IAT is a frequent presenting symptom in a subset of individuals with PD, often misdiagnosed as ET, and that DAT-SPECT can help differentiate between these two disorders. Current diagnostic criteria for ET should be revised to include asymmetry and late-onset tremor as predictors of nigrostriatal denervation.
Genealogical analysis as a new approach for the investigation of drug intolerance heritability
Genealogical analysis has proven a useful method to understand the origins and frequencies of hereditary diseases in many populations. However, this type of analysis has not yet been used for the investigation of drug intolerance among patients suffering from inherited disorders. This study aims to do so, using data from familial hypercholesterolemia (FH) patients receiving high doses of statins. The objective is to measure and compare various genealogical parameters that could shed light on the origins and heritability of muscular intolerance to statins, using FH as a model. Analysis was performed on 224 genealogies from 112 FH subjects carrying either the low-density lipoprotein receptor (LDLR) prom_e1 deletion>15 kb (n=28) or c.259T>G (p.Trp87Gly) (n=84) mutations and 112 non-FH controls. Number of ancestors, geographical origins and genetic contribution of founders, and inbreeding and kinship coefficients were calculated using the S-Plus-based GENLIB software package. For both mutations, repeated occurrences of the same ancestors are more frequent among the carriers' genealogies than among the controls', but no difference was observed between tolerant and intolerant subjects. Founders who may have introduced both mutations in the population appear with approximately the same frequencies in all genealogies. Kinship coefficients are higher among carriers, with no difference according to statin tolerance. Inbreeding coefficients are slightly lower among >15-kb deletion carriers than among c.259T>G carriers, but the differences between tolerant and intolerant subjects are not significant. These findings suggest that although muscular intolerance to statins shows familial aggregation, it is not transmitted through the same Mendelian pattern as LDLR mutations.
A Smart Irrigation and Monitoring System
Internet of Things, commonly known as IoT, is a promising area of technology that is growing day by day. It is a concept whereby devices connect with each other or to living things. The Internet of Things has shown great benefits in today's life. Agriculture is one of the sectors that contributes significantly to the economy of Mauritius, and to obtain quality products, proper irrigation has to be performed. Hence proper water management is a must, because Mauritius is a tropical island that has gone through water crises over the past few years. With the concept of the Internet of Things and the power of the cloud, it is possible to use low-cost devices to monitor and be informed about the status of an agricultural area in real time. Thus, this paper provides the design and implementation of a Smart Irrigation and Monitoring System which makes use of Microsoft Azure machine learning to process data received from sensors in the farm and weather forecasting data to better inform farmers on the appropriate moment to start irrigation. The Smart Irrigation and Monitoring System is made up of sensors which collect data such as air humidity, air temperature, and most importantly soil moisture data. These data are used to monitor the air quality and water content of the soil. The raw data are transmitted to the
Enterprise Architecture as Information Technology Strategy
Many organizations adopt cyclical processes to articulate and engineer technological responses to their business needs. Their objective is to increase competitive advantage and add value to the organization's processes, services and deliverables, in line with the organization's vision and strategy. The major challenges in achieving these objectives include the rapid changes in the business and technology environments themselves, such as changes to business processes, organizational structure, architectural requirements, technology infrastructure and information needs. No activity or process is permanent in the organization. To achieve their objectives, some organizations have adopted an Enterprise Architecture (EA) approach, others an Information Technology (IT) strategy approach, and yet others have adopted both EA and IT strategy for the same primary objectives. The deployment of EA and IT strategy for the same aims and objectives raises the question of whether there is conflict in adopting both approaches. The paper and case study presented here, aimed at both academics and practitioners, examine how EA could be employed as an IT strategy to address both business and IT needs and challenges.
Group swimming and aquatic exercise programme for children with autism spectrum disorders: a pilot study.
OBJECTIVE To evaluate the effectiveness of a 14-week aquatic exercise programme for children with autism spectrum disorders (ASD). DESIGN Non-randomized control trial. METHODS Twelve children participated in this pilot study with seven participants in the aquatic exercise group and five in the control group. The programme was held twice per week for 40 minutes per session. Swimming skills, cardiorespiratory endurance, muscular endurance, mobility skills and participant and parent satisfaction were measured before and after the intervention. RESULTS No significant between-group changes were found. Within-group improvements for swimming skills were found for the intervention group. Programme attendance was high. Parents and children were very satisfied with the programme activities and instructors. CONCLUSIONS This pilot programme was feasible and showed potential for improving swimming ability in children with ASD. Exercise intensity was low for some participants, most likely contributing to a lack of significant findings on fitness outcomes.
Why people hate your app: making sense of user feedback in a mobile app store
User review is a crucial component of open mobile app markets such as the Google Play Store. How do we automatically summarize millions of user reviews and make sense of them? Unfortunately, beyond simple summaries such as histograms of user ratings, there are few analytic tools that can provide insights into user reviews. In this paper, we propose Wiscom, a system that can analyze tens of millions of user ratings and comments in mobile app markets at three different levels of detail. Our system is able to (a) discover inconsistencies in reviews; (b) identify reasons why users like or dislike a given app, and provide an interactive, zoomable view of how users' reviews evolve over time; and (c) provide valuable insights into the entire app market, identifying users' major concerns and preferences across different types of apps. Results using our techniques are reported on a 32GB dataset consisting of over 13 million user reviews of 171,493 Android apps in the Google Play Store. We discuss how the techniques presented herein can be deployed to help a mobile app market operator such as Google as well as individual app developers and end-users.
Using problem-based learning in a large classroom.
Although PBL (problem-based learning) has gained increasing acceptance as an alternative to teacher-centered methods in nursing education, there are challenges to implementing this method in conventional course-based curriculums due to the lack of additional faculty tutors to facilitate and monitor small group process. Little is known in nursing education regarding the effectiveness of teaching PBL in large group settings. Woods [1996. Problem-based Learning for Large Classes in Chemical Engineering. In: Wilkerson, L., Gijsaers, W. (Eds.), Bringing Problem-based Learning to Higher Education: Theory and Practice. Jossey-Bass, San Francisco, pp. 91-99] suggests that there are significant challenges related to student acceptance of the method, monitoring small group process and evaluating the quality of students' work. This paper provides a description of the process and outcome of using PBL in a second-year Baccalaureate nursing course using both classroom and on-line learning technology. Findings from a student survey are included to highlight the strengths and challenges of using PBL in a large group setting with one faculty tutor. Implications for using PBL in this format are provided.
Presentation Attack Detection for Face Recognition Using Light Field Camera
The vulnerability of face recognition systems is a growing concern that has drawn interest from both the academic and research communities. Despite the availability of a broad range of face presentation attack detection (PAD) (or countermeasure or anti-spoofing) schemes, no single PAD technique is superior, owing to the evolution of sophisticated presentation attacks (or spoof attacks). In this paper, we present a new perspective on face presentation attack detection by introducing the light field camera (LFC). Since an LFC records the direction of each incoming ray in addition to its intensity, it has the unique characteristic of rendering multiple depth (or focus) images in a single capture. Thus, we present a novel approach that explores the variation of focus between the multiple depth (or focus) images rendered by the LFC, which in turn can be used to reveal presentation attacks. To this end, we first collect a new face artefact database, captured using an LFC, that comprises 80 subjects. Face artefacts are generated by simulating two widely used attacks: photo print and electronic screen attacks. Extensive experiments carried out on the light field face artefact database reveal the outstanding performance of the proposed PAD scheme when benchmarked against various well-established state-of-the-art schemes.
Boosting Structured Prediction for Imitation Learning
The Maximum Margin Planning (MMP) (Ratliff et al., 2006) algorithm solves imitation learning problems by learning linear mappings from features to cost functions in a planning domain. The learned policy is the result of minimum-cost planning using these cost functions. These mappings are chosen so that example policies (or trajectories) given by a teacher appear to be lower cost (with a loss-scaled margin) than any other policy for a given planning domain. We provide a novel approach, MMPBOOST, based on the functional gradient descent view of boosting (Mason et al., 1999; Friedman, 1999a), that extends MMP by “boosting” in new features. This approach uses simple binary classification or regression to improve the performance of MMP imitation learning, and naturally extends to the class of structured maximum margin prediction problems (Taskar et al., 2005). Our technique is applied to navigation and planning problems for outdoor mobile robots and robotic legged locomotion.
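The functional-gradient view of boosting that MMPBOOST builds on can be illustrated on ordinary squared loss: each round fits a weak learner to the negative gradient of the loss, i.e. the current residuals, and adds it to the model. The sketch below uses a depth-1 regression stump as the weak learner and plain regression, not the structured MMP objective; all names are illustrative.

```python
import numpy as np

def fit_stump(x, residuals):
    """Find the threshold split on a 1-D feature minimizing squared error."""
    best = None
    for t in np.unique(x):
        left, right = residuals[x <= t], residuals[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda z: np.where(z <= t, lv, rv)

def boost(x, y, rounds=50, lr=0.1):
    """Functional gradient descent on squared loss: each round fits a weak
    learner to the negative gradient (the residuals) and adds it in."""
    pred = np.zeros_like(y, dtype=float)
    learners = []
    for _ in range(rounds):
        stump = fit_stump(x, y - pred)   # residuals = negative gradient of 1/2 (y - f)^2
        learners.append(stump)
        pred += lr * stump(x)
    return lambda z: lr * sum(h(z) for h in learners)
```

Each round shrinks the residual geometrically, so even this crude weak learner fits a step-shaped target closely after a few dozen rounds.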
Early Adolescents' Experiences of Mental Health: A Mixed-Methods Investigation
Table of contents: Acknowledgments; Table of Contents; List of Tables; List of Appendices. 1 Early Adolescents' Perspectives of Mental Health: A Mixed-Methods Investigation — 1.1 Introduction; 1.2 Early Adolescence; 1.3 Belongingness; 1.4 Early Adolescence, Mental Health, and Wellbeing; 1.5 Depression, Anxiety, and Gender Differences; 1.6 Coping; 1.7 Children's Perceptions of Mental Health; 1.8 Prevention and Early Intervention; 1.9 The Significance of Incorporating Youth Voices; 1.10 Aims of the Current Study; 1.11 Hypotheses and Research Questions. Method — 2 Mixed-Methods Research Design; 2.1 Study 1 (2.1.1 Participants; 2.1.2 Measure; 2.1.3 Procedure); 2.2 Study 2.
Fast Object Class Labelling via Speech
Object class labelling is the task of annotating images with labels on the presence or absence of objects from a given class vocabulary. Simply asking one yes-no question per class, however, has a cost that is linear in the vocabulary size and is thus inefficient for large vocabularies. Modern approaches rely on a hierarchical organization of the vocabulary to reduce annotation time, but remain expensive (several minutes per image for the 200 classes in ILSVRC). Instead, we propose a new interface where classes are annotated via speech. Speaking is fast and allows for direct access to the class name, without searching through a list or hierarchy. As additional advantages, annotators can simultaneously speak and scan the image for objects, the interface can be kept extremely simple, and using it requires less mouse movement. However, a key challenge is to train annotators to only say words from the given class vocabulary. We present a way to tackle this challenge and show that our method yields high-quality annotations at significant speed gains (2.3–14.9× faster than existing methods).
StructVAE: Tree-structured Latent Variable Models for Semi-supervised Semantic Parsing
Semantic parsing is the task of transducing natural language (NL) utterances into formal meaning representations (MRs), commonly represented as tree structures. Annotating NL utterances with their corresponding MRs is expensive and time-consuming, and thus the limited availability of labeled data often becomes the bottleneck of data-driven, supervised models. We introduce STRUCTVAE, a variational auto-encoding model for semi-supervised semantic parsing, which learns both from limited amounts of parallel data and from readily-available unlabeled NL utterances. STRUCTVAE models the MRs not observed in the unlabeled data as tree-structured latent variables. Experiments on semantic parsing in the ATIS domain and on Python code generation show that, with extra unlabeled data, STRUCTVAE outperforms strong supervised models.
Vehicle detection for traffic flow analysis
This paper looks at some of the algorithms that can be used for effective detection and tracking of vehicles, in particular for statistical analysis. The main tracking methods discussed and implemented are blob analysis, optical flow, and foreground detection. A further analysis tests two of the techniques on a number of video sequences that include different levels of difficulty.
Pollution Monitoring System using Wireless Sensor Network in Visakhapatnam
As technology advances, the degree of automation (minimizing manpower) in almost all sectors increases. Wireless Sensor Networks (WSN) are gaining ground in all sectors of life, from homes to factories and from traffic control to environmental monitoring. The air pollution monitoring system contains sensors to monitor the pollution parameters of interest in the environment. We simulated three air pollutant gases, carbon monoxide, carbon dioxide and sulphur dioxide, because these gases determine the degree of the pollution level. The approach can also be applied in other settings, such as detecting leaking cooking gas in homes or alerting workers to leakage in the oil and gas industry. This simulation creates awareness among people in cities. Keywords— Zigbee, Xbee, WSN, Pollution Node, ADC. I. LITERATURE SURVEY. Due to recent technological advances, the construction of small and low-cost sensors has become technically and economically feasible. Industrialization increases the degree of automation, but at the same time it increases air pollution by releasing unwanted gases into the environment, especially in industrial areas like Visakhapatnam. To implement the project, we selected four areas of Visakhapatnam in which to deploy the application. To detect the degree of pollution, we used an array of sensors to measure gas quantities in the physical environment surrounding each sensor and convert them into electrical signals for processing. Such a signal reveals some properties of the gas molecules of interest. A large number of these sensor nodes, networked for applications that require unattended operation, creates a wireless sensor network. Wireless sensors are devices that range in size from a piece of glitter to a deck of cards. Integrating these components creates the air pollution monitoring system.
The nodes are functionally composed of: a sensing unit, designed and programmed to sense gas pollutants in the air in four busy areas of Visakhapatnam (common examples of monitored parameters are light, temperature, humidity and pressure); a converter that transforms the sensed signal from analog to digital; a processing unit in the microcontroller, which processes the signals sensed by the sensor with the help of embedded memory, an operating system and associated circuitry; and a radio component that communicates with the sink node or Zigbee router, which collects the sensed pollution gas levels from the sensor nodes and forwards them to the pollution server on our campus. These components are typically powered by one or two small batteries, although some applications instead use a fixed, wired power source. In an external environment where the power source is batteries, wireless sensors are placed in the area of interest to be monitored, either randomly or in known positions. The sensors self-organize into a radio network using a routing algorithm, monitor the area to measure gas levels in the air, and transmit the data to a central node, sometimes called a pollution server or base station (interfaced with the coordinator) or sink node (interfaced with the router), which collects the data from all of the sensors. This node may be the same as the other detection nodes or, because of its increased requirements, a more sophisticated sensor node with increased power. The main advantage of wireless sensors is that they can be deployed in an environment over an extended period, continuously monitoring it without the need for human interaction or operation.
DP-ADMM: ADMM-based Distributed Learning with Differential Privacy
Privacy-preserving distributed machine learning has become more important than ever due to the high demand for large-scale data processing. This paper focuses on a class of machine learning problems that can be formulated as regularized empirical risk minimization, and develops a privacy-preserving learning approach to such problems. We use the Alternating Direction Method of Multipliers (ADMM) to decentralize the learning algorithm, and apply Gaussian mechanisms to provide a differential privacy guarantee. However, simply combining ADMM and local randomization mechanisms would result in a non-convergent algorithm with poor performance, even under moderate privacy guarantees. Moreover, this intuitive approach requires the strong assumption that the objective functions of the learning problems are differentiable and strongly convex. To address these concerns, we propose an improved ADMM-based differentially private distributed learning algorithm, DP-ADMM, in which an approximate augmented Lagrangian function and Gaussian mechanisms with time-varying variance are utilized. We also apply the moments accountant method to bound the total privacy loss. Our theoretical analysis shows that DP-ADMM can be applied to a general class of convex learning problems, provides a differential privacy guarantee, and achieves a convergence rate of O(1/√t), where t is the number of iterations. Our evaluations demonstrate that our approach achieves good convergence and accuracy with a moderate privacy guarantee.
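A minimal sketch of the classic Gaussian mechanism that this line of work builds on: noise calibrated to the L2 sensitivity of the released quantity gives an (ε, δ)-differential-privacy guarantee (for ε ≤ 1). The function and variable names are illustrative, and the fixed variance here is a simplification; DP-ADMM itself uses time-varying variance with a moments-accountant analysis.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, eps, delta, rng):
    """Release `value` with (eps, delta)-differential privacy by adding
    Gaussian noise calibrated to the query's L2 sensitivity."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# In an ADMM-style scheme, each worker would perturb its local
# primal update before sharing it with its neighbours:
rng = np.random.default_rng(0)
local_update = np.array([0.5, -1.2, 3.0])
private_update = gaussian_mechanism(local_update, sensitivity=1.0, eps=1.0, delta=1e-5, rng=rng)
```

The composition of many such noisy releases over iterations is what the moments accountant then bounds.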
A 10 GHz low-power multi-modulus frequency divider using Extended True Single-Phase Clock (E-TSPC) Logic
This paper presents a multi-modulus frequency divider (MMD) based on Extended True Single-Phase Clock (E-TSPC) logic. The MMD consists of four cascaded divide-by-2/3 E-TSPC cells. The basic functionality of the MMD and of the E-TSPC 2/3 divider are explained. The whole design was implemented in a 0.13 µm CMOS process from IBM. Simulation and measurement results of the MMD are shown. Measurement results indicate a maximum operating frequency of 10 GHz and a power consumption of 4 mW for each stage. These results are compared to other state-of-the-art dual-modulus E-TSPC dividers, showing the strong position of this design with respect to operating frequency and power consumption.
Assessing elementary students' science competency with text analytics
Real-time formative assessment of student learning has become the subject of increasing attention. Students' textual responses to short answer questions offer a rich source of data for formative assessment. However, automatically analyzing textual constructed responses poses significant computational challenges, and the difficulty of generating accurate assessments is exacerbated by the disfluencies that occur prominently in elementary students' writing. With robust text analytics, there is the potential to accurately analyze students' text responses and predict students' future success. In this paper, we present WriteEval, a hybrid text analytics method for analyzing student-composed text written in response to constructed response questions. Based on a model integrating a text similarity technique with a semantic analysis technique, WriteEval performs well on responses written by fourth graders in response to short-text science questions. Further, it was found that WriteEval's assessments correlate with summative analyses of student performance.
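The text-similarity half of such a hybrid method can be illustrated with the simplest possible baseline: a bag-of-words cosine similarity between a student response and a reference answer. This is a toy stand-in, not WriteEval's actual model, and the function name is illustrative.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two short answers:
    1.0 for identical word distributions, 0.0 for disjoint vocabularies."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0
```

A real system layers spelling normalization and semantic analysis on top of this, precisely because elementary students' disfluencies defeat raw word overlap.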
Learning to predict slip for ground robots
In this paper we predict the amount of slip an exploration rover would experience, using stereo imagery, by learning from previous examples of traversing similar terrain. To do that, information about the terrain appearance and geometry at some location is correlated with the slip measured by the rover while that location is being traversed. This relationship is learned from previous experience, so slip can be predicted later, at a distance, from visual information only. The advantages of the approach are: 1) learning from examples allows the system to adapt to unknown terrains rather than using fixed heuristics or predefined rules; 2) the feedback about the observed slip is received from the vehicle's own sensors, which can fully automate the process; 3) learning slip from previous experience can replace complex mechanical modeling of the vehicle or terrain, which is time consuming and not necessarily feasible. Predicting slip is motivated by the need to assess the risk of getting trapped before entering a particular terrain. For example, a planning algorithm can utilize slip information by taking into consideration that a slippery terrain is costly or hazardous to traverse. A generic nonlinear regression framework is proposed in which the terrain type is determined from appearance and then a nonlinear model of slip is learned for that particular terrain type. In this paper we focus only on the latter problem and provide slip learning and prediction results for terrain types such as soil, sand, gravel, and asphalt. The achieved slip prediction error is about 15%, which is comparable to the measurement error for slip itself.
On the asymptotic proportion of connected matroids
Very little is known about the asymptotic behavior of classes of matroids. We make a number of conjectures about such behaviors. For example, we conjecture that asymptotically almost every matroid: has a trivial automorphism group; is arbitrarily highly connected; and is not representable over any field. We prove one result: the proportion of labeled n-element matroids that are connected is asymptotically at least 1/2.
A Smart-Dumb/Dumb-Smart Algorithm for Efficient Split-Merge MCMC
Split-merge moves are a standard component of MCMC algorithms for tasks such as multitarget tracking and fitting mixture models with unknown numbers of components. Achieving rapid mixing for split-merge MCMC has been notoriously difficult, and state-of-the-art methods do not scale well. We explore the reasons for this and propose a new split-merge kernel consisting of two sub-kernels: one combines a “smart” split move that proposes plausible splits of heterogeneous clusters with a “dumb” merge move that proposes merging random pairs of clusters; the other combines a dumb split move with a smart merge move. We show that the resulting smart-dumb/dumb-smart (SDDS) algorithm outperforms previous methods. Experiments with entity-mention models and Dirichlet process mixture models demonstrate much faster convergence and better scaling to large data sets.
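The smart/dumb asymmetry can be sketched on one-dimensional data. The helper names and the variance-based split heuristic below are illustrative stand-ins, and these are proposal moves only; the Metropolis-Hastings acceptance step that makes them a valid MCMC kernel is omitted.

```python
import numpy as np

def dumb_merge(clusters, rng):
    """'Dumb' merge: propose merging a uniformly random pair of clusters."""
    i, j = rng.choice(len(clusters), size=2, replace=False)
    merged = np.concatenate([clusters[i], clusters[j]])
    rest = [c for k, c in enumerate(clusters) if k not in (i, j)]
    return rest + [merged]

def smart_split(clusters):
    """'Smart' split: pick the highest-variance cluster and split it at its
    mean, a cheap proxy for a plausible split of a heterogeneous cluster."""
    idx = int(np.argmax([c.var() for c in clusters]))
    c = clusters[idx]
    left, right = c[c <= c.mean()], c[c > c.mean()]
    rest = [x for k, x in enumerate(clusters) if k != idx]
    return rest + [left, right] if len(left) and len(right) else clusters
```

Pairing a smart move in one direction with a dumb move in the other keeps the reverse-proposal probabilities in the acceptance ratio tractable, which is the point of the SDDS construction.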
Genioglossus activity available via non-arousal mechanisms vs. that required for opening the airway in obstructive apnea patients.
It is generally believed that reflex recruitment of pharyngeal dilator muscles is insufficient to open the airway of obstructive apnea (OSA) patients once it is closed and, therefore, that arousal is required. Yet arousal promotes recurrence of obstruction. There is no information about how much dilator [genioglossus (GG)] activation is required to open the airway (GG Opening Threshold) or about the capacity of reflex mechanisms to increase dilator activity before/without arousal (Non-Arousal Activation). The relationship between these two variables is important for ventilatory stability. We measured both variables in 32 OSA patients (apnea-hypopnea index 74 ± 42 events/h). GG activity was monitored while patients were on optimal continuous positive airway pressure (CPAP). Zopiclone was administered to delay arousal. Maximum GG activity (GG(MAX)) and airway closing pressure (P(CRIT)) were measured. During stable sleep CPAP was decreased to 1 cmH(2)O to induce obstructive events and the dial-downs were maintained until the airway opened with or without arousal. GG activity at the instant of opening (GG Opening Threshold) was measured. GG Opening Threshold averaged only 10.4 ± 9.5% GG(Max) and did not correlate with P(CRIT) (r = 0.04). Twenty-six patients had >3 openings without arousal, indicating that Non-Arousal Activation can exceed GG Opening Threshold in the majority of patients. GG activity reached before arousal in Arousal-Associated Openings was only 5.4 ± 4.6% GG(MAX) below GG Opening Threshold. We conclude that in most patients GG activity required to open the airway is modest and can be reached by non-arousal mechanisms. Arousals occur in most cases just before non-arousal mechanisms manage to increase activity above GG Opening Threshold. Measures to reduce GG Opening Threshold even slightly may help stabilize breathing in many patients.
Improved single image dehazing using dark channel prior
Atmospheric conditions induced by suspended particles, such as fog and haze, severely degrade image quality. Haze removal from a single image of a weather-degraded scene remains a challenging task, because the haze depends on unknown depth information. In this paper, we introduce an improved single-image dehazing algorithm based on physics-based atmospheric scattering models. We apply the local dark channel prior on a selected region to estimate the atmospheric light, obtaining a more accurate result. Experiments on real images validate our approach.
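The dark channel prior itself is well defined: in haze-free outdoor images, the minimum over color channels within a local patch is close to zero, and the standard transmission estimate follows as t = 1 − ω · dark_channel(I/A), where A is the atmospheric light and ω keeps a little haze for realism. A minimal NumPy sketch (the paper's improved region selection for estimating A is not reproduced; names are illustrative):

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel: per-pixel minimum over RGB, then a minimum over a
    local patch, via a padded sliding window (edge-padded at borders)."""
    m = image.min(axis=2)
    r = patch // 2
    padded = np.pad(m, r, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (patch, patch))
    return windows.min(axis=(2, 3))

def transmission(image, atmos, omega=0.95, patch=15):
    """Transmission estimate t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(image / atmos, patch)
```

The final dehazed image is then recovered from the scattering model J = (I − A) / max(t, t0) + A, with a floor t0 to avoid amplifying noise in dense haze.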
An automotive on-board 3.3 kW battery charger for PHEV application
An on-board charger is responsible for charging the battery pack in a plug-in hybrid electric vehicle (PHEV). In this paper, a 3.3kW two stage battery charger design is presented for a PHEV application. The objective of the design is to achieve high efficiency, which is critical to minimize the charger size, charging time and the amount and cost of electricity drawn from the utility. The operation of the charger power converter configuration is provided in addition to a detailed design procedure. The mechanical packaging design and key experimental results are provided to verify the suitability of the proposed charger power architecture.
A Survey on Application Layer Protocols for the Internet of Things
It has been more than fifteen years since the term Internet of Things (IoT) was introduced to the public. However, despite the efforts of research groups and innovative corporations, it is still not possible to say that the IoT is upon us. This is mainly due to the fact that a unified IoT architecture has not yet been clearly defined, and there is no common agreement on protocols and standards for all IoT parts. The framework that current IoT platforms use consists mostly of technologies that partially fulfill some of the IoT requirements. While developers employ existing technologies to build the IoT, research groups are working on adapting protocols to the IoT in order to optimize communications. In this paper, we present and compare existing IoT application layer protocols, as well as protocols that are utilized to connect the things, but also end-user applications, to the Internet. We highlight the IETF's CoAP, IBM's MQTT, and HTML5's WebSocket, among others, and we argue their suitability for the IoT by considering reliability, security, and energy consumption aspects. Finally, we provide our conclusions for IoT application layer communications based on the study that we have conducted.
Self-Paced Learning with Adaptive Deep Visual Embeddings
Selecting the most appropriate data examples to present to a deep neural network (DNN) at different stages of training is an unsolved challenge. Though practitioners typically ignore this problem, a non-trivial data scheduling method may result in a significant improvement in both convergence and generalization performance. In this paper, we introduce Self-Paced Learning with Adaptive Deep Visual Embeddings (SPL-ADVisE), a novel end-to-end training protocol that unites self-paced learning (SPL) and deep metric learning (DML). We leverage the Magnet Loss to train an embedding convolutional neural network (CNN) to learn a salient representation space. The student CNN classifier dynamically selects similar instance-level training examples to form a mini-batch, where the easiness from the cross-entropy loss and the true diverseness of examples from the learned metric space serve as sample importance priors. To demonstrate the effectiveness of SPL-ADVisE, we use deep CNN architectures for the task of supervised image classification on several coarse- and fine-grained visual recognition datasets. Results show that, across all datasets, the proposed method converges faster and reaches a higher final accuracy than other SPL variants, particularly on fine-grained classes.
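The self-paced selection principle at the core of SPL can be shown on a toy robust-mean problem: train only on samples whose loss under the current model falls below a threshold λ, then grow λ to admit harder examples. This illustrates classic SPL only; SPL-ADVisE additionally draws its diversity signal from a learned metric space. All names are illustrative.

```python
import numpy as np

def spl_mean(y, lam0=1.0, growth=1.5, epochs=10):
    """Estimate a robust mean with self-paced learning: start from the
    easiest samples (smallest squared loss under the current estimate)
    and anneal the admission threshold from easy to hard."""
    mu, lam = np.median(y), lam0
    for _ in range(epochs):
        losses = (y - mu) ** 2
        chosen = losses < lam        # "easy" examples under the current model
        if chosen.any():
            mu = y[chosen].mean()    # refit on the selected curriculum
        lam *= growth                # admit harder examples next epoch
    return mu
```

A gross outlier keeps a large loss under every intermediate model, so it stays excluded while λ remains moderate, which is exactly the curriculum effect SPL relies on.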
MultiVis: Content-Based Social Network Exploration through Multi-way Visual Analysis
With the explosion of social media, scalability becomes a key challenge. There are two main aspects of the problems that arise: 1) data volume: how to manage and analyze huge datasets to efficiently extract patterns, 2) data understanding: how to facilitate understanding of the patterns by users? To address both aspects of the scalability challenge, we present a hybrid approach that leverages two complementary disciplines, data mining and information visualization. In particular, we propose 1) an analytic data model for content-based networks using tensors; 2) an efficient high-order clustering framework for analyzing the data; 3) a scalable context-sensitive graph visualization to present the clusters. We evaluate the proposed methods using both synthetic and real datasets. In terms of computational efficiency, the proposed methods are an order of magnitude faster compared to the baseline. In terms of effectiveness, we present several case studies of real corporate social networks.
Acute effect of nebulized budesonide in asthmatic children.
The acute anti-inflammatory effects of inhaled steroids at high doses, and their use at home and as emergency treatment of acute asthma attacks in children, have been evaluated in many clinical studies. However, very little is known about their additional bronchodilator response when added to systemic steroids plus nebulized salbutamol in early management in children. Asthmatic patients aged between 5-15 years were investigated in a double-blind, placebo-controlled fashion. Both the study group (Group I) and the control group (Group II) received three consecutive doses of nebulized salbutamol (0.15 mg/kg/dose) and one dose of parenteral methylprednisolone (1 mg/kg/dose, intramuscularly). After this treatment, nebulized budesonide (1 mg/dose) was administered to patients in the study group and placebo (nebulized saline) was administered to patients in the control group. Pulmonary index scoring and peak flow measurements were performed in both groups before and after the treatment. There were twelve patients in Group I (mean age: 7.90 +/- 2.34 years) and fourteen patients in Group II (mean age: 9.36 +/- 2.55 years). There was no difference between the two groups with respect to age (p = 0.1421), gender (p = 1.000) and inhaled steroid prophylaxis rate (p = 0.2177). No statistically significant difference was detected between the two groups with respect to the pulmonary index score (p = 0.3528). Yet, there was a statistically significant difference between the two groups with respect to the increase in PEFR (p = 0.0155). The positive acute effect of nebulized budesonide, in addition to systemic steroids and nebulized salbutamol, in improving the spirometric indices in asthmatic children is an encouraging finding for further investigation of its routine use in the pediatric emergency department.
Cachexia: a new definition.
On December 13th and 14th a group of scientists and clinicians met in Washington, DC, for the cachexia consensus conference. At the present time, there is no widely agreed upon operational definition of cachexia. The lack of a definition accepted by clinicians and researchers has limited the identification and treatment of cachectic patients as well as the development and approval of potential therapeutic agents. The definition that emerged is: "cachexia is a complex metabolic syndrome associated with underlying illness and characterized by loss of muscle with or without loss of fat mass. The prominent clinical feature of cachexia is weight loss in adults (corrected for fluid retention) or growth failure in children (excluding endocrine disorders). Anorexia, inflammation, insulin resistance and increased muscle protein breakdown are frequently associated with cachexia. Cachexia is distinct from starvation, age-related loss of muscle mass, primary depression, malabsorption and hyperthyroidism and is associated with increased morbidity." While this definition has not been tested in epidemiological or intervention studies, a consensus operational definition provides an opportunity for increased research.
Effect of urocortin 1 infusion in humans with stable congestive cardiac failure.
In sheep with HF (heart failure), Ucn 1 (urocortin 1) decreases total peripheral resistance and left atrial pressure, and increases cardiac output in association with attenuation of vasopressor hormone systems and enhancement of renal function. In a previous study, we demonstrated in the first human studies that infusion of Ucn 1 elevates corticotropin ('ACTH'), cortisol and ANP (atrial natriuretic peptide), and suppresses the hunger-inducing hormone ghrelin in normal subjects. In the present study, we examined the effects of Ucn 1 on pituitary, adrenal and cardiovascular systems in the first Ucn 1 infusion study in human HF. In human HF, it is proposed that Ucn 1 would augment corticotropin and cortisol release, suppress ghrelin and reproduce the cardiorenal effects seen in animals with HF. On day 3 of a controlled metabolic diet, we studied eight male volunteers with stable HF (ejection fraction <40%; New York Heart Association Class II-III) on two occasions, 2 weeks apart, receiving 50 microg of Ucn 1 or placebo intravenously over 1 h in a randomized time-matched cross-over design. Neurohormones, haemodynamics and urine indices were recorded. Ucn 1 infusion increased plasma Ucn 1, corticotropin (baseline, 5.9+/-0.9 pmol/l; and peak, 7.2+/-1.0 pmol/l) and cortisol (baseline, 285+/-42 pmol/l; and peak, 310+/-41 pmol/l) compared with controls (P<0.001, 0.008 and 0.047 respectively). The plasma Ucn 1 half-life was 54+/-3 min. ANP and ghrelin were unchanged, and no haemodynamic or renal effects were seen. In conclusion, a brief intravenous infusion of 50 microg of Ucn 1 stimulates corticotropin and cortisol in male volunteers with stable HF.
Structure Discovery in Nonparametric Regression through Compositional Kernel Search
Despite its importance, choosing the structural form of the kernel in nonparametric regression remains a black art. We define a space of kernel structures which are built compositionally by adding and multiplying a small number of base kernels. We present a method for searching over this space of structures which mirrors the scientific discovery process. The learned structures can often decompose functions into interpretable components and enable long-range extrapolation on time-series datasets. Our structure search method outperforms many widely used kernels and kernel combination methods on a variety of prediction tasks.
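A compositional kernel space of the kind searched here can be represented directly as expression trees over base kernels combined with + and ×. The sketch below (illustrative names, scalar inputs) only evaluates such expressions; the actual search additionally scores each candidate structure, e.g. by Gaussian process marginal likelihood, and expands the best one greedily.

```python
import numpy as np

def rbf(x, y, ell=1.0):
    """Squared-exponential base kernel."""
    return np.exp(-0.5 * (x - y) ** 2 / ell ** 2)

def linear(x, y):
    """Linear base kernel."""
    return x * y

def compose(expr, x, y):
    """Evaluate a composite kernel expression tree such as
    ('+', rbf, ('*', linear, rbf)): leaves are base kernels,
    internal nodes combine sub-kernels by addition or multiplication."""
    if callable(expr):
        return expr(x, y)
    op, left, right = expr
    a, b = compose(left, x, y), compose(right, x, y)
    return a + b if op == "+" else a * b
```

Sums and products of valid kernels are themselves valid kernels, which is what makes this closed, searchable space well defined.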
Innovative Pedagogical Approaches to a Capstone Laboratory Course in Cyber Operations
This paper provides pedagogical lessons drawn from a capstone hands-on laboratory course in cyber operations. It is taught as a flipped class, where the center piece is a collection of exercises that require teams of students to set up, defend, and attack complex networks. Project designs are presented, including balancing offense and defense to improve course learning outcomes. Lessons on the recruiting and managing of an external "red team" are provided. Grading issues are addressed, as are techniques to manage students of different skills and motivations.
A current-starved inverter-based differential amplifier design for ultra-low power applications
As silicon feature sizes decrease, more complex circuitry arrays can now be contrived on a single die. This increase in the number of on-chip devices per unit area results in increased power dissipation per unit area. In order to meet certain power and operating temperature specifications, circuit design necessitates a focus on power efficiency, which is especially important in systems employing hundreds or thousands of instances of the same device. In large arrays, a slight increase in the power efficiency of a single component is heightened by the number of instances of the device in the system. This paper proposes a fully differential, low-power current-starving inverter-based amplifier topology designed in a commercial 0.18μm process. This design achieves 46dB DC gain and a 464 kHz unity-gain frequency with a power consumption of only 145.32nW at a 700mV power supply voltage for ultra-low power, low bandwidth applications. Higher bandwidth designs are also proposed, including a 48dB DC gain, 2.4 MHz unity-gain frequency amplifier operating at 900mV with only 3.74μW power consumption.
HappyDB: A Corpus of 100,000 Crowdsourced Happy Moments
The science of happiness is an area of positive psychology concerned with understanding what behaviors make people happy in a sustainable fashion. Recently, there has been interest in developing technologies that help incorporate the findings of the science of happiness into users’ daily lives by steering them towards behaviors that increase happiness. With the goal of building technology that can understand how people express their happy moments in text, we crowd-sourced HappyDB, a corpus of 100,000 happy moments that we make publicly available. This paper describes HappyDB and its properties, and outlines several important NLP problems that can be studied with the help of the corpus. We also apply several state-of-the-art analysis techniques to analyze HappyDB. Our results demonstrate the need for deeper NLP techniques to be developed, making HappyDB an exciting resource for follow-on research.
A One-Layer Recurrent Neural Network for Constrained Nonsmooth Optimization
This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the proposed neural network is the same as the number of decision variables of optimization problems. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network.
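A deliberately simple sketch (not the paper's network) conveys the underlying idea: one state variable per decision variable, driven by an Euler-discretized subgradient flow of a nonsmooth objective plus an exact penalty for the constraint; the penalty weight plays the role of the single design parameter that must exceed a lower bound. The example problem, step size, and penalty weight are illustrative assumptions.

```python
import numpy as np

def subgrad_flow(x0, sigma=2.0, h=1e-3, steps=20000):
    """Euler discretization of dx/dt in -(∂f(x) + sigma*∂p(x)) for the toy
    problem: minimize f(x) = |x1 - 1| + |x2 + 1| subject to x1 + x2 = 0,
    with exact penalty p(x) = |x1 + x2| (sigma chosen large enough)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        df = np.array([np.sign(x[0] - 1.0), np.sign(x[1] + 1.0)])  # subgradient of f
        dp = np.sign(x[0] + x[1]) * np.ones(2)                     # subgradient of |x1 + x2|
        x -= h * (df + sigma * dp)
    return x

x = subgrad_flow([3.0, 2.0])
print(np.round(x, 2))   # state converges toward the optimum (1, -1)
```

The trajectory first descends toward the constraint surface, then slides along it to the minimizer, chattering within a band proportional to the step size, which is the discrete analogue of the differential-inclusion dynamics.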
A controlled trial of implementing a complex mental health intervention for carers of vulnerable young people living in out-of-home care: the ripple project
BACKGROUND Out-of-home care (OoHC) refers to young people removed from their families by the state because of abuse, neglect or other adversities. Many of the young people experience poor mental health and social function before, during and after leaving care. Rigorously evaluated interventions are urgently required. This publication describes the protocol for the Ripple project and notes early findings from a controlled trial demonstrating the feasibility of the work. The Ripple project is implementing and evaluating a complex mental health intervention that aims to strengthen the therapeutic capacities of carers and case managers of young people (12-17 years) in OoHC. METHODS The study is conducted in partnership with mental health, substance abuse and social services in Melbourne, with young people as participants. It has three parts: 1. Needs assessment and implementation of a complex mental health intervention; 2. A 3-year controlled trial of the mental health, social and economic outcomes; and 3. Nested process evaluation of the intervention. RESULTS Early findings characterising the young people, their carers and case managers and implementing the intervention are available. The trial Wave 1 includes interviews with 176 young people, 52% of those eligible in the study population, 104 carers and 79 case managers. CONCLUSIONS Implementing and researching an affordable service system intervention appears feasible and likely to be applicable in other places and countries. Success of the intervention will potentially contribute to reducing mental ill-health among these young people, including suicide attempts, self-harm and substance abuse, as well as reducing homelessness, social isolation and contact with the criminal justice system. TRIAL REGISTRATION Australian New Zealand Clinical Trials Registry ACTRN12615000501549 . Retrospectively registered 19 May 2015.
A Century of Science: Globalization of Scientific Collaborations, Citations, and Innovations
Progress in science has advanced the development of human society across history, with dramatic revolutions shaped by information theory, genetic cloning, and artificial intelligence, among the many scientific achievements produced in the 20th century. However, the way that science advances itself is much less well-understood. In this work, we study the evolution of scientific development over the past century by presenting an anatomy of 89 million digitalized papers published between 1900 and 2015. We find that science has benefited from the shift from individual work to collaborative effort, with over 90% of the world-leading innovations generated by collaborations in this century, nearly four times higher than they were in the 1900s. We discover that rather than the frequent myopic- and self-referencing that was common in the early 20th century, modern scientists instead tend to look for literature further back and farther around. Finally, we also observe the globalization of scientific development from 1900 to 2015, including 25-fold and 7-fold increases in international collaborations and citations, respectively, as well as a dramatic decline in the dominant accumulation of citations by the US, the UK, and Germany, from ~95% to ~50% over the same period. Our discoveries are meant to serve as a starter for exploring the visionary ways in which science has developed throughout the past century, generating insight into and an impact upon the current scientific innovations and funding policies.
A Review of Internet Pornography Use Research: Methodology and Content from the Past 10 Years
Internet pornography (IP) use has increased over the past 10 years. The effects of IP use are widespread and are both negative (e.g., relationship and interpersonal distress) and positive (e.g., increases in sexual knowledge and attitudes toward sex). Given the possible negative effects of IP use, understanding the definition of IP, the types of IP used, and reasons for IP use is important. The present study reviews the methodology and content of available literature regarding IP use in nondeviant adult populations. The study seeks to determine how the studies defined IP, utilized validated measures of pornography use, examined variables related to IP, and addressed form and function of IP use. Overall, studies were inconsistent in their definitions of IP, measurement, and their assessment of the form and function of IP use. Discussion regarding how methodological differences between studies may impact the results and the ability to generalize findings is provided, and suggestions for future studies are offered.
Adaptive Distance Estimation and Localization in WSN using RSSI Measures
Localization is one of the most challenging and important issues in wireless sensor networks (WSNs), especially if cost-effective approaches are demanded. In this paper, we intensively discuss and analyze approaches relying on the received signal strength indicator (RSSI). The advantage of employing RSSI values is that no extra hardware (e.g. ultrasonic or infra-red) is needed for network-centric localization. We study different factors that affect the measured RSSI values. Finally, we evaluate two methods for estimating the distance: the first is based on statistical methods, while the second uses an artificial neural network.
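The statistical route is commonly built on the log-distance path-loss model, which can be inverted to map an RSSI reading to a distance estimate. A minimal sketch follows; the reference RSSI at 1 m and the path-loss exponent are environment-dependent calibration constants, and the values used here are illustrative assumptions.

```python
def rssi_to_distance(rssi_dbm, rssi_at_d0=-40.0, d0=1.0, n=2.7):
    """Log-distance path-loss model: RSSI(d) = RSSI(d0) - 10*n*log10(d/d0).
    Solving for d gives the distance estimate. rssi_at_d0 (dBm at the
    reference distance d0, in meters) and the path-loss exponent n must
    be calibrated per environment; these defaults are illustrative."""
    return d0 * 10 ** ((rssi_at_d0 - rssi_dbm) / (10.0 * n))

print(round(rssi_to_distance(-40.0), 2))   # → 1.0  (at the reference RSSI, d = d0)
print(round(rssi_to_distance(-67.0), 2))   # → 10.0 (with these illustrative parameters)
```

The neural-network alternative replaces this closed-form inverse with a regressor trained on (RSSI, distance) pairs, which can absorb multipath and hardware effects the analytic model ignores.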
The CMU METAL Farsi NLP Approach
While many high-quality tools are available for analyzing major languages such as English, equivalent freely-available tools for important but lower-resourced languages such as Farsi are more difficult to acquire and integrate into a useful NLP front end. We report here on an accurate and efficient Farsi analysis front end that we have assembled, which may be useful to others who wish to work with written Farsi. The pre-existing components and resources that we incorporated include the Carnegie Mellon TurboParser and TurboTagger (Martins et al., 2010) trained on the Dadegan Treebank (Rasooli et al., 2013), the Uppsala Farsi text normalizer PrePer (Seraji, 2013), the Uppsala Farsi tokenizer (Seraji et al., 2012a), and Jon Dehdari’s PerStem (Jadidinejad et al., 2010). This set of tools (combined with additional normalization and tokenization modules that we have developed and made available) achieves a dependency parsing labeled attachment score of 89.49%, unlabeled attachment score of 92.19%, and label accuracy score of 91.38% on a held-out parsing test data set. All of the components and resources used are freely available. In addition to describing the components and resources, we also explain the rationale for our choices.
Effect of Patient Education on Reducing Medication in Spinal Cord Injury Patients With Neuropathic Pain
OBJECTIVE To determine whether providing education about the disease pathophysiology and drug mechanisms and side effects would be effective for reducing the use of pain medication while appropriately managing neuropathic pain in spinal cord injury (SCI) patients. METHODS In this prospective study, 109 patients with an SCI and neuropathic pain participated in an educational pain management program. This comprehensive program was specifically created for patients with an SCI and neuropathic pain. It consisted of 6 sessions, including educational training, over a 6-week period. RESULTS Of the 109 patients, 79 (72.5%) initially took more than two types of pain medication, and this decreased to 36 (33.0%) after the educational pain management program was completed. The mean pain scale score and the number of pain medications decreased compared to the baseline values. Compared to the non-response group, the response group had a shorter duration of pain onset (p=0.004) and a higher initial number of different medications (p<0.001) and certain types of medications. CONCLUSION These results imply that an educational pain management program can be a valuable complement to the treatment of spinal cord injured patients with neuropathic pain. Early intervention is important to prevent patients from developing chronic SCI-related pain.
Lifetime improvement of NAND flash-based storage systems using dynamic program and erase scaling
The cost-per-bit of NAND flash memory has been continuously improved by semiconductor process scaling and multi-leveling technologies (e.g., a 10 nm-node TLC device). However, the decreasing lifetime of NAND flash memory as a side effect of recent advanced technologies is regarded as a main barrier to wide adoption of NAND flash-based storage systems. In this paper, we propose a new system-level approach, called dynamic program and erase scaling (DPES), for improving the lifetime (particularly, endurance) of NAND flash memory. The DPES approach is based on our key observation that changing the erase voltage as well as the erase time significantly affects NAND endurance. By slowly erasing a NAND block with a lower erase voltage, we can improve NAND endurance very effectively. By modifying NAND chips to support multiple write and erase modes with different operation voltages and times, DPES enables flash software to exploit the new tradeoff relationships between NAND endurance and erase voltage/speed under dynamic program and erase scaling. We have implemented the first DPES-aware FTL, called autoFTL, which improves NAND endurance with a negligible degradation in the overall write throughput. Our experimental results using various I/O traces show that autoFTL can improve the maximum number of P/E cycles by 61.2% over an existing DPES-unaware FTL with less than a 2.2% decrease in the overall write throughput.
Stability analysis of cascaded converters for bidirectional power flow applications
This paper establishes the criteria to ensure stable operation of two-stage, bidirectional, isolated AC-DC converters. The bidirectional converter is analyzed in the context of a building block module (BBM) that enables a fully modular architecture for universal power flow conversion applications (AC-DC, DC-AC and DC-DC). The BBM consists of independently controlled AC-DC and isolated DC-DC converters that are cascaded for bidirectional power flow applications. The cascaded converters have different control objectives in different directions of power flow. This paper discusses methods to obtain the appropriate input and output impedances that determine stability in the context of bidirectional AC-DC power conversion. Design procedures to ensure stable operation with minimal interaction between the cascaded stages are presented. The analysis and design methods are validated through extensive simulation and hardware results.
Diagnosis, management and treatment of orbital and periorbital cellulitis in children.
Children with red swollen eyes frequently present to emergency departments. Some patients will have orbital cellulitis, a condition that requires immediate diagnosis and treatment. Orbital cellulitis can be confused with the less severe, but more frequently encountered, periorbital cellulitis, which requires less aggressive management. Delayed recognition of the signs and symptoms of orbital cellulitis can lead to serious complications such as blindness, meningitis and cerebral abscess. This article describes the clinical features, epidemiology and outcomes of the condition, and discusses management and treatment. It also includes a case study.
Use of Colon Cancer Testing in Rural Colorado Primary Care Practices
BACKGROUND People living in rural areas may be less likely to be up to date (UTD) with screening guidelines for colorectal cancer (CRC). OBJECTIVE To determine (1) rates of being UTD with screening or ever having had a test for CRC and (2) correlates of testing among patients living in a rural area who visit a provider. DESIGN Cross-sectional survey. PARTICIPANTS Five hundred seventy patients aged 50 years and older who visited their health-care provider in High Plains Research Network (HPRN) practices. MAIN MEASURES (1) Ever having had a CRC screening test, (2) being UTD with CRC screening, and (3) intention to get tested. RESULTS The survey completion rate was 65%; 71% of patients had ever had any CRC screening test, while 52% of patients were UTD. Correlates of intending to get tested included having a family history of CRC, having a doctor recommend a test, knowing somebody who got tested, and believing that testing for CRC gives one a feeling of being in control of their health. Of those who had never had a CRC screening test, 12% planned on getting tested in the future, while 55% of those who were already up to date intended to be tested again (p < 0.001). CONCLUSIONS Prevalence of being UTD with CRC testing in the HPRN was on par with statewide CRC testing rates, but over three quarters of patients who had not yet been screened had no intention of getting tested for CRC, despite having a medical home.
Joint tracking and segmentation of multiple targets
Tracking-by-detection has proven to be the most successful strategy to address the task of tracking multiple targets in unconstrained scenarios [e.g. 40, 53, 55]. Traditionally, a set of sparse detections, generated in a preprocessing step, serves as input to a high-level tracker whose goal is to correctly associate these “dots” over time. An obvious shortcoming of this approach is that most information available in image sequences is simply ignored by thresholding weak detection responses and applying non-maximum suppression. We propose a multi-target tracker that exploits low-level image information and associates every (super)pixel to a specific target or classifies it as background. As a result, we obtain a video segmentation in addition to the classical bounding-box representation in unconstrained, real-world videos. Our method shows encouraging results on many standard benchmark sequences and significantly outperforms state-of-the-art tracking-by-detection approaches in crowded scenes with long-term partial occlusions.
Using behavior models for anomaly detection in hybrid systems
The importance of safety and reliability in today's real-world complex hybrid systems, such as process plants, has led to the development of various anomaly detection and diagnosis techniques. Model-based approaches have established themselves among the most successful ones in the field. However, they depend on a model of the system, which usually needs to be derived manually. Manual modeling requires a lot of effort and resources. This paper gives a procedure for anomaly detection in hybrid systems that uses automatically generated behavior models. The model is learned from the system's logged measurements in a hybrid automaton framework. The presented anomaly detection algorithm uses the model to predict the system behavior and to compare it with the observed behavior in an online manner. Alarms are raised whenever a discrepancy is found between the two. The effectiveness of this approach is demonstrated by detecting several types of anomalies in a real-world running production system.
Effect of B-vitamin therapy on progression of diabetic nephropathy: a randomized controlled trial.
CONTEXT Hyperhomocysteinemia is frequently observed in patients with diabetic nephropathy. B-vitamin therapy (folic acid, vitamin B(6), and vitamin B(12)) has been shown to lower the plasma concentration of homocysteine. OBJECTIVE To determine whether B-vitamin therapy can slow progression of diabetic nephropathy and prevent vascular complications. DESIGN, SETTING, AND PARTICIPANTS A multicenter, randomized, double-blind, placebo-controlled trial (Diabetic Intervention with Vitamins to Improve Nephropathy [DIVINe]) at 5 university medical centers in Canada conducted between May 2001 and July 2007 of 238 participants who had type 1 or 2 diabetes and a clinical diagnosis of diabetic nephropathy. INTERVENTION Single tablet of B vitamins containing folic acid (2.5 mg/d), vitamin B(6) (25 mg/d), and vitamin B(12) (1 mg/d), or matching placebo. MAIN OUTCOME MEASURES Change in radionuclide glomerular filtration rate (GFR) between baseline and 36 months. Secondary outcomes were dialysis and a composite of myocardial infarction, stroke, revascularization, and all-cause mortality. Plasma total homocysteine was also measured. RESULTS The mean (SD) follow-up during the trial was 31.9 (14.4) months. At 36 months, radionuclide GFR decreased by a mean (SE) of 16.5 (1.7) mL/min/1.73 m(2) in the B-vitamin group compared with 10.7 (1.7) mL/min/1.73 m(2) in the placebo group (mean difference, -5.8; 95% confidence interval [CI], -10.6 to -1.1; P = .02). There was no difference in requirement of dialysis (hazard ratio [HR], 1.1; 95% CI, 0.4-2.6; P = .88). The composite outcome occurred more often in the B-vitamin group (HR, 2.0; 95% CI, 1.0-4.0; P = .04). Plasma total homocysteine decreased by a mean (SE) of 2.2 (0.4) micromol/L at 36 months in the B-vitamin group compared with a mean (SE) increase of 2.6 (0.4) micromol/L in the placebo group (mean difference, -4.8; 95% CI, -6.1 to -3.7; P < .001, in favor of B vitamins). 
CONCLUSION Among patients with diabetic nephropathy, high doses of B vitamins compared with placebo resulted in a greater decrease in GFR and an increase in vascular events. TRIAL REGISTRATION isrctn.org Identifier: ISRCTN41332305.
Isolated bidirectional DC/AC and AC/DC three-phase power conversion using series resonant converter modules and a three-phase unfolder
This paper proposes a modular converter system to achieve high power density, high system bandwidth and scalability for isolated bidirectional DC/AC and AC/DC three-phase power conversion. The approach is based on high frequency isolated series resonant converter (SRC) modules with series output connection and a low frequency three-phase unfolder. The performance objectives are realized through elimination of traditional low frequency passive filters used in PWM inverters and instead require high control bandwidth in the SRC modules to achieve high quality AC waveforms. The system operation and performance are verified with simulation and experimental results for a 1 kW prototype.
Performance Analysis of the Alpha 21364-Based HP GS1280 Multiprocessor
This paper evaluates performance characteristics of the HP GS1280 shared memory multiprocessor system. The GS1280 system contains up to 64 Alpha 21364 CPUs connected together via a torus-based interconnect. We describe architectural features of the GS1280 system. We compare and contrast the GS1280 to the previous-generation Alpha systems: AlphaServer GS320 and ES45/SC45. We further quantitatively show the performance effects of these features using application results and profiling data based on the built-in performance counters. We find that the HP GS1280 often provides 2 to 3 times the performance of the AlphaServer GS320 at similar clock frequencies. We find the key reasons for such performance gains are advances in memory, inter-processor, and I/O subsystem designs.
Lane Detection With Moving Vehicles in the Traffic Scenes
A lane-detection method aimed at handling moving vehicles in the traffic scenes is proposed in this brief. First, lane marks are extracted based on color information. The extraction of lane-mark colors is designed in a way that is not affected by illumination changes and the proportion of space that vehicles on the road occupy. Next, for vehicles that have the same colors as the lane marks, we utilize size, shape, and motion information to distinguish them from the real lane marks. The mechanism effectively eliminates the influence of passing vehicles when performing lane detection. Finally, pixels in the extracted lane-mark mask are accumulated to find the boundary lines of the lane. The proposed algorithm is able to robustly find the left and right boundary lines of the lane and is not affected by the passing traffic. Experimental results show that the proposed method works well on marked roads in various lighting conditions.
The Reward Circuit: Linking Primate Anatomy and Human Imaging
Although cells in many brain regions respond to reward, the cortical-basal ganglia circuit is at the heart of the reward system. The key structures in this network are the anterior cingulate cortex, the orbital prefrontal cortex, the ventral striatum, the ventral pallidum, and the midbrain dopamine neurons. In addition, other structures, including the dorsal prefrontal cortex, amygdala, hippocampus, thalamus, and lateral habenular nucleus, and specific brainstem structures such as the pedunculopontine nucleus and the raphe nucleus, are key components in regulating the reward circuit. Connectivity between these areas forms a complex neural network that mediates different aspects of reward processing. Advances in neuroimaging techniques allow better spatial and temporal resolution. These studies now demonstrate that human functional and structural imaging results map increasingly close to primate anatomy.
The neurobehavioral pharmacology of ketamine: implications for drug abuse, addiction, and psychiatric disorders.
Ketamine was developed in the early 1960s as an anesthetic and has been used for medical and veterinary procedures since then. Its unique profile of effects has led to its use at subanesthetic doses for a variety of other purposes: it is an effective analgesic and can prevent certain types of pathological pain; it produces schizophrenia-like effects and so is used in both clinical studies and preclinical animal models to better understand this disorder; it has rapid-acting and long-lasting antidepressant effects; and it is popular as a drug of abuse both among young people at dance parties and raves and among spiritual seekers. In this article we summarize recent research that provides insight into the myriad uses of ketamine. Clinical research is discussed, but the focus is on preclinical animal research, including recent findings from our own laboratory. Of particular note, although ketamine is normally considered a locomotor stimulant at subanesthetic doses, we have found locomotor depressant effects at very low subanesthetic doses. Thus, rather than a monotonic dose-dependent increase in activity, ketamine produces a more complex dose response. Additional work explores the mechanism of action of ketamine, ketamine-induced neuroadaptations, and ketamine reward. The findings described will inform future research on ketamine and lead to a better understanding of both its clinical uses and its abuse.
Decision Jungles: Compact and Rich Models for Classification
Randomized decision trees and forests have a rich history in machine learning and have seen considerable success in application, perhaps particularly so for computer vision. However, they face a fundamental limitation: given enough data, the number of nodes in decision trees will grow exponentially with depth. For certain applications, for example on mobile or embedded processors, memory is a limited resource, and so the exponential growth of trees limits their depth, and thus their potential accuracy. This paper proposes decision jungles, revisiting the idea of ensembles of rooted decision directed acyclic graphs (DAGs), and shows these to be compact and powerful discriminative models for classification. Unlike conventional decision trees that only allow one path to every node, a DAG in a decision jungle allows multiple paths from the root to each leaf. We present and compare two new node merging algorithms that jointly optimize both the features and the structure of the DAGs efficiently. During training, node splitting and node merging are driven by the minimization of exactly the same objective function, here the weighted sum of entropies at the leaves. Results on varied datasets show that, compared to decision forests and several other baselines, decision jungles require dramatically less memory while considerably improving generalization.
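The shared objective can be made concrete with a toy sketch (not the paper's joint feature/structure optimization): leaves are sets of training labels, the objective is the weighted sum of leaf entropies, and a greedy merge step fuses the pair of leaves whose fusion increases that objective the least. The example data and the pairwise greedy search are illustrative assumptions.

```python
import math
from collections import Counter

def weighted_entropy(leaves):
    """The objective driving both splitting and merging: the weighted
    sum of class-label entropies over all leaves."""
    total = sum(len(labels) for labels in leaves)
    obj = 0.0
    for labels in leaves:
        counts = Counter(labels)
        h = -sum((c / len(labels)) * math.log2(c / len(labels))
                 for c in counts.values())
        obj += (len(labels) / total) * h
    return obj

def best_merge(leaves):
    """Greedy merge step: return (cost, i, j) for the pair of leaves
    whose fusion yields the lowest weighted-entropy objective."""
    best = None
    for i in range(len(leaves)):
        for j in range(i + 1, len(leaves)):
            merged = leaves[:i] + leaves[i + 1:j] + leaves[j + 1:] + [leaves[i] + leaves[j]]
            cost = weighted_entropy(merged)
            if best is None or cost < best[0]:
                best = (cost, i, j)
    return best

leaves = [["a", "a"], ["a", "a", "b"], ["b", "b"], ["b"]]
cost, i, j = best_merge(leaves)
print(i, j)   # → 2 3: fusing the two pure-"b" leaves keeps the objective lowest
```

Merging compatible leaves is what turns a tree into a DAG: the fused node is reachable by multiple root-to-leaf paths, capping width (and hence memory) at each depth level.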
Toward sociable robots
This paper explores the topic of social robots—the class of robots that people anthropomorphize in order to interact with them. From the diverse and growing number of applications for such robots, a few distinct modes of interaction are beginning to emerge. We distinguish four such classes: socially evocative, social interface, socially receptive, and sociable. For the remainder of the paper, we explore a few key features of sociable robots that distinguish them from the others. We use the vocal turn-taking behavior of our robot, Kismet, as a case study to highlight these points. © 2003 Published by Elsevier Science B.V.
Time Provisioning Evaluation of KVM, Docker and Unikernels in a Cloud Platform
Unikernels are a promising alternative for application deployment in cloud platforms. They have a very small footprint, providing better deployment agility and portability among virtualization platforms. Similar to Linux containers, they are a lightweight alternative for deploying distributed applications based on microservices. However, the comparison of unikernels with other virtualization options regarding the concurrent provisioning of instances, as in the case of microservices-based applications, is still lacking. This paper provides an evaluation of KVM (Virtual Machines), Docker (Containers), and OSv (Unikernel), when provisioning multiple instances concurrently in an OpenStack cloud platform. We confirmed that OSv outperforms the other options and also identified opportunities for optimization.
Protecting Location Privacy in Spatial Crowdsourcing using Encrypted Data
In spatial crowdsourcing, spatial tasks are outsourced to a set of workers in proximity of the task locations for efficient assignment. It usually requires workers to disclose their locations, which inevitably raises security concerns about the privacy of the workers’ locations. In this paper, we propose a secure SC framework based on encryption, which ensures that workers’ location information is never released to any party, yet the system can still assign tasks to workers situated in proximity of each task’s location. We solve the challenge of assigning tasks based on encrypted data using homomorphic encryption. Moreover, to overcome the efficiency issue, we propose a novel secure indexing technique with a newly devised SKD-tree to index encrypted worker locations. Experiments on real-world data evaluate various aspects of the performance of the proposed SC platform.
Emotion processing and the amygdala: from a 'low road' to 'many roads' of evaluating biological significance
A subcortical pathway through the superior colliculus and pulvinar to the amygdala is commonly assumed to mediate the non-conscious processing of affective visual stimuli. We review anatomical and physiological data that argue against the notion that such a pathway plays a prominent part in processing affective visual stimuli in humans. Instead, we propose that the primary role of the amygdala in visual processing, like that of the pulvinar, is to coordinate the function of cortical networks during evaluation of the biological significance of affective visual stimuli. Under this revised framework, the cortex has a more important role in emotion processing than is traditionally assumed.
Correction: A Rapid Lateral Flow Immunoassay for the Detection of Tyrosine Phosphatase-Like Protein IA-2 Autoantibodies in Human Serum
Type 1 diabetes (T1D) results from the destruction of pancreatic insulin-producing beta cells and is strongly associated with the presence of islet autoantibodies. Autoantibodies to tyrosine phosphatase-like protein IA-2 (IA-2As) are considered to be highly predictive markers of T1D. We developed a novel lateral flow immunoassay (LFIA) based on a bridging format for the rapid detection of IA-2As in human serum samples. In this assay, one site of the IA-2As is bound to HA-tagged-IA-2, which is subsequently captured on the anti-HA-Tag antibody-coated test line on the strip. The other site of the IA-2As is bound to biotinylated IA-2, allowing the complex to be visualized using colloidal gold nanoparticle-conjugated streptavidin. For this study, 35 serum samples from T1D patients and 44 control sera from non-diabetic individuals were analyzed with our novel assay and the results were correlated with two IA-2A ELISAs. Among the 35 serum samples from T1D patients, the IA-2A LFIA, the in-house IA-2A ELISA and the commercial IA-2A ELISA identified as positive 21, 29 and 30 IA-2A-positive sera, respectively. The major advantages of the IA-2A LFIA are its rapidity and simplicity.
Zero Knowledge Proofs: A Survey
A zero knowledge interactive proof system allows one person to convince another person of some fact without revealing the information about the proof. In particular, it does not enable the verifier to later convince anyone else that the prover has a proof of the theorem or even merely that the theorem is true (much less that he himself has a proof). This paper reviews the field of zero knowledge proof systems giving a brief overview of zero knowledge proof systems and the state of current research in this field.
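The flavor of such protocols can be shown with one round of Schnorr identification, a classic zero-knowledge proof of knowledge of a discrete logarithm: the verifier becomes convinced the prover knows x with y = g^x (mod p), yet the transcript (t, c, s) reveals nothing usable about x. The group parameters below are deliberately tiny and illustrative; real deployments use large standardized groups.

```python
import random

# Toy Schnorr identification over a safe-prime group: p = 2q + 1 with
# p, q prime, and g generating the order-q subgroup of quadratic residues.
p = 2579          # prime modulus (p = 2q + 1)
q = 1289          # prime order of the subgroup
g = 4             # generator of the order-q subgroup (4 = 2^2 is a QR)

x = random.randrange(1, q)     # prover's secret
y = pow(g, x, p)               # public key: proving knowledge of x = log_g(y)

def prove_and_verify():
    r = random.randrange(1, q)         # prover: random nonce
    t = pow(g, r, p)                   # prover: commitment
    c = random.randrange(1, q)         # verifier: random challenge
    s = (r + c * x) % q                # prover: response (r masks c*x)
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier: g^s == t * y^c

print(all(prove_and_verify() for _ in range(10)))   # → True
```

The check holds because g^s = g^(r + cx) = g^r * (g^x)^c = t * y^c (mod p); a simulator can produce identically distributed transcripts without x, which is exactly the zero-knowledge property the survey formalizes.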
Building a Monolingual Parallel Corpus for Text Simplification Using Sentence Similarity Based on Alignment between Word Embeddings
Methods for text simplification using the framework of statistical machine translation have been extensively studied in recent years. However, building the monolingual parallel corpus necessary for training the model requires costly human annotation. Monolingual parallel corpora for text simplification have therefore been built only for a limited number of languages, such as English and Portuguese. To obviate the need for human annotation, we propose an unsupervised method that automatically builds the monolingual parallel corpus for text simplification using sentence similarity based on word embeddings. For any sentence pair comprising a complex sentence and its simple counterpart, we employ a many-to-one method of aligning each word in the complex sentence with the most similar word in the simple sentence and compute sentence similarity by averaging these word similarities. The experimental results demonstrate the excellent performance of the proposed method in a monolingual parallel corpus construction task for English text simplification. The results also demonstrate that statistical machine translation systems for text simplification trained on the corpus built by the proposed method achieve higher accuracy than those trained on existing corpora.
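The many-to-one alignment similarity described above can be sketched in a few lines: each word vector in the complex sentence is matched to its most similar word vector in the simple sentence, and the cosine similarities are averaged. The toy two-dimensional embeddings below are illustrative assumptions; in practice pretrained word vectors would be used.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def alignment_similarity(complex_vecs, simple_vecs):
    """Many-to-one alignment: align each word of the complex sentence
    with its most similar word in the simple sentence, then average
    those word-level similarities into a sentence similarity."""
    return sum(max(cosine(c, s) for s in simple_vecs)
               for c in complex_vecs) / len(complex_vecs)

# Toy embeddings (hypothetical vectors chosen so that synonyms are close).
emb = {
    "purchase": np.array([1.0, 0.1]), "buy": np.array([0.9, 0.2]),
    "vehicle":  np.array([0.1, 1.0]), "car": np.array([0.2, 0.9]),
}
complex_sent = [emb["purchase"], emb["vehicle"]]
simple_sent  = [emb["buy"], emb["car"]]
print(round(alignment_similarity(complex_sent, simple_sent), 3))   # → 0.993
```

Note the measure is asymmetric by design: every complex-side word must find a simple-side partner, but simple-side words may be reused, matching the many-to-one alignment in the abstract.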
A Unified Framework for Contactless Hand Verification
Two-dimensional (2-D) hand-geometry features carry limited discriminatory information and therefore yield moderate performance when utilized for personal identification. This paper investigates a new approach to achieve performance improvement by simultaneously acquiring and combining three-dimensional (3-D) and 2-D features from the human hand. The proposed approach utilizes a 3-D digitizer to simultaneously acquire intensity and range images of the presented hands of the users in a completely contact-free manner. Two new representations that effectively characterize the local finger surface features are extracted from the acquired range images and are matched using the proposed matching metrics. In addition, the characterization of 3-D palm surface using SurfaceCode is proposed for matching a pair of 3-D palms. The proposed approach is evaluated on a database of 177 users acquired in two sessions. The experimental results suggest that the proposed 3-D hand-geometry features have significant discriminatory information to reliably authenticate individuals. Our experimental results demonstrate that consolidating 3-D and 2-D hand-geometry features results in significantly improved performance that cannot be achieved with the traditional 2-D hand-geometry features alone. Furthermore, this paper also investigates the performance improvement that can be achieved by integrating five biometric features, i.e., 2-D palmprint, 3-D palmprint, finger texture, along with 3-D and 2-D hand-geometry features, that are simultaneously extracted from the user's hand presented for authentication.
Apixaban for extended treatment of venous thromboembolism.
BACKGROUND Apixaban, an oral factor Xa inhibitor that can be administered in a simple, fixed-dose regimen, may be an option for the extended treatment of venous thromboembolism. METHODS In this randomized, double-blind study, we compared two doses of apixaban (2.5 mg and 5 mg, twice daily) with placebo in patients with venous thromboembolism who had completed 6 to 12 months of anticoagulation therapy and for whom there was clinical equipoise regarding the continuation or cessation of anticoagulation therapy. The study drugs were administered for 12 months. RESULTS A total of 2486 patients underwent randomization, of whom 2482 were included in the intention-to-treat analyses. Symptomatic recurrent venous thromboembolism or death from venous thromboembolism occurred in 73 of the 829 patients (8.8%) who were receiving placebo, as compared with 14 of the 840 patients (1.7%) who were receiving 2.5 mg of apixaban (a difference of 7.2 percentage points; 95% confidence interval [CI], 5.0 to 9.3) and 14 of the 813 patients (1.7%) who were receiving 5 mg of apixaban (a difference of 7.0 percentage points; 95% CI, 4.9 to 9.1) (P<0.001 for both comparisons). The rates of major bleeding were 0.5% in the placebo group, 0.2% in the 2.5-mg apixaban group, and 0.1% in the 5-mg apixaban group. The rates of clinically relevant nonmajor bleeding were 2.3% in the placebo group, 3.0% in the 2.5-mg apixaban group, and 4.2% in the 5-mg apixaban group. The rate of death from any cause was 1.7% in the placebo group, as compared with 0.8% in the 2.5-mg apixaban group and 0.5% in the 5-mg apixaban group. CONCLUSIONS Extended anticoagulation with apixaban at either a treatment dose (5 mg) or a thromboprophylactic dose (2.5 mg) reduced the risk of recurrent venous thromboembolism without increasing the rate of major bleeding. (Funded by Bristol-Myers Squibb and Pfizer; AMPLIFY-EXT ClinicalTrials.gov number, NCT00633893.).
Polyglucose-protein complexes.
Abstract The specific solubility parameters of glycogen-gelatin complexes are completely destroyed after brief exposure to proteolytic enzymes. The minimal polysaccharide-gelatin binding ratios for the complete binding of dextran, glycogen, dextrin, and soluble starch (erythrodextrin free) are 4, 1, 1, and 0.5, respectively, on a dry weight basis. The stoichiometric binding ratios of polyglucose-gelatin mixtures were independent of polyglucosan molecular weight classes. Sequential degradation of gelatin by autoclaving resulted in more rapid destruction of polysaccharide binding sites than the sites responsible for gelation. The 1 → 6 polyglucose binding sites were more labile than the 1 → 4 polyglucose binding sites on the gelatin molecule. Autoclaving of soluble starch, glycogen, or dextran had no effect on their ability to bind with gelatin. Competitive-inhibition studies suggest that the gelatin molecule possesses specific binding sites for the various polyglucosans. Reversible dissociation of polyglucose-gelatin complexes can be accomplished by 44% phenol/water extraction of polyglucose-gelatin complexes. The presence of polyglucose in alkaline Cu 2+ -gelatin solutions does not alter the ability of bound Cu 2+ to inhibit gelation of gelatin. The presence of bound forms of Cu 2+ in gelatin, polyglucosans and gelatin-polyglucose mixtures can be detected by analysis of ESR spectra and is dependent upon the pH of the respective solutions. The ESR spectra of Cu 2+ -gelatin complexes are different in acid and alkaline environments. At pH 4.5 the hyperfine structures of Cu 2+ -polyglucosans are destroyed by polyglucosan-gelatin interactions. At pH 10.5, Cu 2+ -doped gelatin and gelatin-glycogen mixtures exhibit similar ESR parameters whereas glycogen did not bind Cu 2+ . Possible relationships between Cu 2+ binding sites of these polymers and the specific sites responsible for both biopolymeric complex formation and the gelation phenomenon are discussed.
Streaming Hierarchical Video Segmentation
The use of video segmentation as an early processing step in video analysis lags behind the use of image segmentation for image analysis, despite many available video segmentation methods. A major reason for this lag is simply that videos are an order of magnitude bigger than images; yet most methods require all voxels in the video to be loaded into memory, which is clearly prohibitive for even medium length videos. We address this limitation by proposing an approximation framework for streaming hierarchical video segmentation motivated by data stream algorithms: each video frame is processed only once and does not change the segmentation of previous frames. We implement the graph-based hierarchical segmentation method within our streaming framework; our method is the first streaming hierarchical video segmentation method proposed. We perform thorough experimental analysis on a benchmark video data set and longer videos. Our results indicate the graph-based streaming hierarchical method outperforms other streaming video segmentation methods and performs nearly as well as the full-video hierarchical graph-based method.
An Empirical Study on Using the National Vulnerability Database to Predict Software Vulnerabilities
Software vulnerabilities represent a major cause of cybersecurity problems. The National Vulnerability Database (NVD) is a public data source that maintains standardized information about reported software vulnerabilities. Since its inception in 1997, NVD has published information about more than 43,000 software vulnerabilities affecting more than 17,000 software applications. This information is potentially valuable in understanding trends and patterns in software vulnerabilities, so that one can better manage the security of computer systems that are pestered by the ubiquitous software security flaws. In particular, one would like to be able to predict the likelihood that a piece of software contains a yet-to-be-discovered vulnerability, which must be taken into account in security management due to the increasing trend in zero-day attacks. We conducted an empirical study on applying data-mining techniques on NVD data with the objective of predicting the time to next vulnerability for a given software application. We experimented with various features constructed using the information available in NVD, and applied various machine learning algorithms to examine the predictive power of the data. Our results show that the data in NVD generally have poor prediction capability, with the exception of a few vendors and software applications. By doing a large number of experiments and observing the data, we suggest several reasons for why the NVD data have not produced a reasonable prediction model for time to next vulnerability with our current approach.
Frictional adhesion: A new angle on gecko attachment.
Directional arrays of branched microscopic setae constitute a dry adhesive on the toes of pad-bearing geckos, nature's supreme climbers. Geckos are easily and rapidly able to detach their toes as they climb. There are two known mechanisms of detachment: (1) on the microscale, the seta detaches when the shaft reaches a critical angle with the substrate, and (2) on the macroscale, geckos hyperextend their toes, apparently peeling like tape. This raises the question of how geckos prevent detachment while inverted on the ceiling, where body weight should cause toes to peel and setal angles to increase. Geckos use opposing feet and toes while inverted, possibly to maintain shear forces that prevent detachment of setae or peeling of toes. If detachment occurs by macroscale peeling of toes, the peel angle should monotonically decrease with applied force. In contrast, if adhesive force is limited by microscale detachment of setae at a critical angle, the toe detachment angle should be independent of applied force. We tested the hypothesis that adhesion is increased by shear force in isolated setal arrays and live gecko toes. We also tested the corollary hypotheses that (1) adhesion in toes and arrays is limited as on the microscale by a critical angle, or (2) on the macroscale by adhesive strength as predicted for adhesive tapes. We found that adhesion depended directly on shear force, and was independent of detachment angle. Therefore we reject the hypothesis that gecko toes peel like tape. The linear relation between adhesion and shear force is consistent with a critical angle of release in live gecko toes and isolated setal arrays, and also with our prior observations of single setae. We introduced a new model, frictional adhesion, for gecko pad attachment and compared it to existing models of adhesive contacts. In an analysis of clinging stability of a gecko on an inclined plane each adhesive model predicted a different force control strategy. 
The frictional adhesion model provides an explanation for the very low detachment forces observed in climbing geckos that does not depend on toe peeling.
An image conveys a message: A brief survey on image description generation
Image caption generation is the problem of generating a descriptive sentence for an image. Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. This paper presents a brief survey of technical aspects and methods for description generation for images, an area that has attracted great interest in the research community as a route to retrieving images based on content. A number of techniques have been used to solve this problem, and the purpose of this paper is to give an overview of many of these approaches and of the databases used for description generation. Finally, we discuss open challenges and future directions for upcoming researchers.
Limiting, leaving, and (re)lapsing: an exploration of facebook non-use practices and experiences
Despite the abundance of research on social networking sites, relatively little research has studied those who choose not to use such sites. This paper presents results from a questionnaire of over 400 Internet users, focusing specifically on Facebook and those users who have left the service. Results show the lack of a clear, binary distinction between use and non-use, that various practices enable diverse ways and degrees of engagement with and disengagement from Facebook. Furthermore, qualitative analysis reveals numerous complex and interrelated motivations and justifications, both for leaving and for maintaining some type of connection. These motivations include: privacy, data misuse, productivity, banality, addiction, and external pressures. These results not only contribute to our understanding of online sociality by examining this under-explored area, but they also build on previous work to help advance how we conceptually account for the sociological processes of non-use.
Towards Dynamic Demand Response On Efficient Consumer Grouping Algorithmics
The widespread monitoring of electricity consumption due to increasingly pervasive deployment of networked sensors in urban environments has resulted in an unprecedentedly large volume of data being collected. Particularly, with the emerging Smart Grid technologies becoming more ubiquitous, real-time and online analytics for discovering the underlying structure of increasing-dimensional (w.r.t. time) consumer time series data are crucial to convert the massive amount of fine-grained energy information gathered from residential smart meters into appropriate demand response (DR) insights. In this paper we propose READER and OPTIC, real-time and online algorithmic pre-processing frameworks, respectively, for effective DR in the Smart Grid. READER (OPTIC) helps discover underlying structure in increasing-dimensional consumer consumption time series data in a provably optimal real-time (online) fashion. READER (OPTIC) catalyzes the efficacy of DR programs by systematically and efficiently managing the energy consumption data deluge, at the same time capturing in real time (online) specific behavior, i.e., households or time instants with similar consumption patterns. The primary feature of READER (OPTIC) is a real-time (online) randomized approximation algorithm for grouping consumers based on their electricity consumption time series data, which provides two crucial benefits: (i) it time-efficiently tackles high-volume, increasing-dimensional time series data and (ii) it provides provable worst-case grouping performance guarantees. We validate the grouping and DR efficacy of READER and OPTIC via extensive experiments conducted on both a USC microgrid dataset and a synthetically generated dataset.
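As a rough illustration of grouping consumers by consumption pattern (a plain k-means over synthetic daily load profiles, not the paper's randomized READER/OPTIC algorithms, which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily load profiles (24 hourly readings) for two behavior types:
# morning-peak and evening-peak households, with small noise.
hours = np.arange(24)
morning = np.exp(-((hours - 8) ** 2) / 8.0)
evening = np.exp(-((hours - 19) ** 2) / 8.0)
profiles = np.vstack(
    [morning + 0.05 * rng.standard_normal(24) for _ in range(10)]
    + [evening + 0.05 * rng.standard_normal(24) for _ in range(10)])

def kmeans(X, k, iters=50):
    # Deterministic spread-out initialization: every (n // k)-th profile.
    centers = X[:: max(1, len(X) // k)][:k].copy()
    for _ in range(iters):
        # Assign each profile to its nearest center (squared Euclidean).
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute centers; keep the old center if a cluster empties out.
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(profiles, 2)
```

With two clearly separated consumption behaviors, the morning-peak and evening-peak households land in two distinct groups, the kind of structure a DR program would target differently.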
Effect of Selfish Node Behavior on Efficient Topology Design
The problem of topology control is to assign per-node transmission power such that the resulting topology is energy efficient and satisfies certain global properties such as connectivity. The conventional approach to achieve these objectives is based on the fundamental assumption that nodes are socially responsible. We examine the following question: if nodes behave in a selfish manner, how does it impact the overall connectivity and energy consumption in the resulting topologies? We pose the above problem as a noncooperative game and use game-theoretic analysis to address it. We study Nash equilibrium properties of the topology control game and evaluate the efficiency of the induced topology when nodes employ a greedy best response algorithm. We show that even when the nodes have complete information about the network, the steady-state topologies are suboptimal. We propose a modified algorithm based on a better response dynamic and show that this algorithm is guaranteed to converge to energy-efficient and connected topologies. Moreover, the node transmit power levels are more evenly distributed, and the network performance is comparable to that obtained from centralized algorithms.
Natural Negotiation for Believable Agents
Believable agents will often need to engage in social behaviors with other agents and with a user. Believable social behaviors need to meet a number of requirements: they must be robust, they must reflect and affect the emotional state of the agent, they must take into account the interpersonal relationships with the other behavior participants, and, most importantly, they must show off the artistically defined personality of the agent. We will show how to create a negotiation behavior that supports believability for specific characters and address some methodological questions about how to build believable social behaviors in general.
Handwritten Character Recognition Using Histograms of Oriented Gradient Features in Deep Learning of Artificial Neural Network
Feature extraction plays an essential role in handwritten character recognition because of its effect on the capability of classifiers. This paper presents a framework for investigating and comparing the recognition ability of two classifiers: the Deep-Learning Feedforward-Backpropagation Neural Network (DFBNN) and the Extreme Learning Machine (ELM). Three data sets were studied: Thai handwritten characters, Bangla handwritten numerals, and Devanagari handwritten numerals. Each data set was divided into two categories: raw features, and features extracted by Histograms of Oriented Gradients (HOG). The experimental results showed that using HOG to extract features can improve the recognition rates of both DFBNN and ELM. Furthermore, DFBNN provides slightly higher recognition rates than those of ELM.
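The core of HOG, binning gradient orientations weighted by gradient magnitude, can be sketched for a single cell as follows; real HOG additionally tiles the image into cells and normalizes over overlapping blocks:

```python
import numpy as np

def hog_cell(img, bins=9):
    """Orientation histogram of one cell: gradient angles binned over
    [0, 180) degrees, weighted by gradient magnitude, L2-normalized."""
    gy, gx = np.gradient(img.astype(float))           # row- and column-wise gradients
    mag = np.hypot(gx, gy)                            # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0      # unsigned orientation
    idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.zeros(bins)
    for b in range(bins):
        hist[b] = mag[idx == b].sum()
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A vertical step edge: gradients are purely horizontal, so all the
# histogram energy should land in the bin containing 0 degrees.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
feat = hog_cell(img)
```

Descriptors like this, concatenated over a grid of cells, are what get fed to the DFBNN or ELM classifier in place of raw pixels.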
Incidence of maternal and paternal depression in primary care: a cohort study using a primary care database.
OBJECTIVE To examine incidence, trends, and correlates of parental depression in primary care from 0 to 12 years of child age. DESIGN Prospective cohort study. SETTING Primary care records from more than 350 general practices in The Health Improvement Network database from 1993 to 2007. PARTICIPANTS A total of 86 957 mother, father, and child triads identified in The Health Improvement Network database by linking mothers and babies and then identifying an adult household man. Depressed parents were identified using Read code entries for depression and antidepressant prescriptions. MAIN EXPOSURES Child age, parental age at the birth, and area deprivation quintile. MAIN OUTCOME MEASURES Incidence rates for maternal and paternal episodes of depression. RESULTS Overall incidences of depression from the birth of the child up to age 12 years were 7.53 per 100 person-years for mothers and 2.69 per 100 person-years for fathers. Depression was highest in the first year post partum (13.93 and 3.56 per 100 person-years among mothers and fathers, respectively). By 12 years of child age, 39% of mothers and 21% of fathers had experienced an episode of depression. A history of depression, lower parental age at the birth of the child, and higher social deprivation were associated with a higher incidence of parental depression. CONCLUSIONS Parents are at highest risk for depression in the first year after the birth of their child. Parents with a history of depression, younger parents, and those from deprived areas are particularly vulnerable to depression. There is a need for appropriate recognition and management of parental depression in primary care.
A pattern of tectonic zones in the western part of the East European Platform
Abstract The present north-eastern and south-western limits of the East European Platform are formed by long and deep faulted troughs with some elements of strike-slip movements. Between these westerly converging, craton margins, four other subparallel tectonic zones are proposed, with a mutual spacing of a few hundred kilometres. The western part of the platform is thus divided into five major blocks, which are further transected by some series of faults with mainly north-westerly and north-easterly, northerly and easterly trends. Geological and geophysical records indicate the pervasive character of these major tectonic zones, in which intermittent magmatic activity and mineralization of heavy metals is found. The lineaments are considered to have persisted from Archaean time and been re-activated during successively younger orogenies in the platform. This pattern of tectonic zones is assumed to represent a structural anisotropy, imprinted in the crust and the upper mantle at a relatively early period...
Interactive ramp merging planning in autonomous driving: Multi-merging leading PGM (MML-PGM)
Cooperative driving behavior is essential for driving in traffic, especially for ramp merging, lane changing or navigating intersections. Autonomous vehicles should also manage these situations by behaving cooperatively and naturally. The challenge of cooperative driving is estimating other vehicles' intentions. In this paper, we present a novel method to estimate other human-driven vehicles' intentions with the aim of achieving a natural and amenable cooperative driving behavior, without using wireless communication. The new approach allows the autonomous vehicle to cooperate with multiple observable merging vehicles on the ramp with a leading vehicle ahead of the autonomous vehicle in the same lane. To avoid calculating trajectories, simplify computation, and take advantage of mature Level-3 components, the new method reacts to merging cars by determining a following target for an off-the-shelf distance keeping module (ACC) which governs speed control of the autonomous vehicle. We train and evaluate the proposed model using real traffic data. Results show that the new approach has a lower collision rate than previous methods and generates more human driver-like behaviors in terms of trajectory similarity and time-to-collision to leading vehicles.
Dynamic smile visualization and quantification: part 1. Evolution of the concept and dynamic records for smile capture.
The "art of the smile" lies in the clinician's ability to recognize the positive elements of beauty in each patient and to create a strategy to enhance the attributes that fall outside the parameters of the prevailing esthetic concept. New technologies have enhanced our ability to see our patients more dynamically and facilitated the quantification and communication of newer concepts of function and appearance. In a 2-part article, we present a comprehensive methodology for recording, assessing, and planning treatment of the smile in 4 dimensions. In part 1, we discuss the evolution of smile analysis and review the dynamic records needed. In part 2, we will review smile analysis and treatment strategies and present a brief case report.
Comparison of loading rate-dependent injury modes in a murine model of post-traumatic osteoarthritis.
Post-traumatic osteoarthritis (PTOA) is a common long-term consequence of joint injuries such as anterior cruciate ligament (ACL) rupture. In this study we used a tibial compression overload mouse model to compare knee injury induced at low speed (1 mm/s), which creates an avulsion fracture, to injury induced at high speed (500 mm/s), which induces midsubstance tear of the ACL. Mice were sacrificed at 0 days, 10 days, 12 weeks, or 16 weeks post-injury, and joints were analyzed with micro-computed tomography, whole joint histology, and biomechanical laxity testing. Knee injury with both injury modes caused considerable trabecular bone loss by 10 days post-injury, with the Low Speed Injury group (avulsion) exhibiting a greater amount of bone loss than the High Speed Injury group (midsubstance tear). Immediately after injury, both injury modes resulted in greater than twofold increases in total AP joint laxity relative to control knees. By 12 and 16 weeks post-injury, total AP laxity was restored to uninjured control values, possibly due to knee stabilization via osteophyte formation. This model presents an opportunity to explore fundamental questions regarding the role of bone turnover in PTOA, and the findings of this study support a biomechanical mechanism of osteophyte formation following injury.
A Review of Experiences with Reliable Multicast
By understanding how real users have employed reliable multicast in real distributed systems, we can develop insight concerning the degree to which this technology has matched expectations. This paper reviews a number of applications with that goal in mind. Our findings point to tradeoffs between the form of reliability used by a system and its scalability and performance. We also find that to reach a broad user community (and a commercially interesting market) the technology must be better integrated with component and object-oriented systems architectures. Looking closely at these architectures, however, we identify some assumptions about failure handling which make reliable multicast difficult to exploit. Indeed, the major failures of reliable multicast are associated with attempts to position it within object-oriented systems in ways that focus on transparent recovery from server failures. The broader opportunity appears to involve relatively visible embeddings of these tools into object-oriented architectures, enabling knowledgeable users to make tradeoffs. Fault-tolerance through transparent server replication may be better viewed as an unachievable holy grail.
Acne scarring treatment using skin needling.
BACKGROUND Acne is a common condition seen in up to 80% of people between 11 and 30 years of age and in up to 5% of older adults. In some patients, it can result in permanent scars that are surprisingly difficult to treat. A relatively new treatment, termed skin needling (needle dermabrasion), seems to be appropriate for the treatment of rolling scars in acne. AIM To confirm the usefulness of skin needling in acne scarring treatment. METHODS The present study was conducted from September 2007 to March 2008 at the Department of Systemic Pathology, University of Naples Federico II and the UOC Dermatology Unit, University of Rome La Sapienza. In total, 32 patients (20 female, 12 male patients; age range 17-45) with acne rolling scars were enrolled. Each patient was treated with a specific tool in two sessions. Using digital cameras, photos of all patients were taken to evaluate scar depth and, in five patients, silicone rubber was used to make a microrelief impression of the scars. The photographic data were analysed by using the sign test statistic (alpha < 0.05) and the data from the cutaneous casts were analysed by fast Fourier transformation (FFT). RESULTS Analysis of the patient photographs, supported by the sign test and of the degree of irregularity of the surface microrelief, supported by FFT, showed that, after only two sessions, the severity grade of rolling scars in all patients was greatly reduced and there was an overall aesthetic improvement. No patient showed any visible signs of the procedure or hyperpigmentation. CONCLUSION The present study confirms that skin needling has an immediate effect in improving acne rolling scars and has advantages over other procedures.
Multipurpose silicon photonics signal processor core
Integrated photonics changes the scaling laws of information and communication systems offering architectural choices that combine photonics with electronics to optimize performance, power, footprint, and cost. Application-specific photonic integrated circuits, where particular circuits/chips are designed to optimally perform particular functionalities, require a considerable number of design and fabrication iterations leading to long development times. A different approach inspired by electronic Field Programmable Gate Arrays is the programmable photonic processor, where a common hardware implemented by a two-dimensional photonic waveguide mesh realizes different functionalities through programming. Here, we report the demonstration of such reconfigurable waveguide mesh in silicon. We demonstrate over 20 different functionalities with a simple seven hexagonal cell structure, which can be applied to different fields including communications, chemical and biomedical sensing, signal processing, multiprocessor networks, and quantum information systems. Our work is an important step toward this paradigm.
Reconstructing the feedback polynomial of a linear scrambler with the method of hypothesis testing
Blind identification of the feedback polynomial of a linear scrambler is an important component of wireless communication systems that perform automatic standard recognition and parameter estimation. This research comprises three parts. In the first part, a promotion model is derived based on Cluzeau's model; comparative analysis shows its advantage in computational complexity. In the second part, an improved method for estimating the normalised standard deviation is proposed, which plays an important role in computation time and precision. In the final part, simulation results demonstrate the better practical applicability of the proposed algorithm compared with Cluzeau's algorithm and XiaoBei-Liu's algorithm.
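For the noise-free special case, the classical Berlekamp-Massey algorithm recovers the shortest feedback polynomial from an observed bit stream; hypothesis-testing methods such as Cluzeau's extend reconstruction to noisy settings, which this minimal sketch does not attempt:

```python
def berlekamp_massey(s):
    """Shortest LFSR over GF(2) generating the bit sequence s.
    Returns the connection polynomial coefficients [1, c1, ..., cL]
    of C(x) = 1 + c1*x + ... + cL*x^L, and the LFSR length L."""
    n = len(s)
    C, B = [1] + [0] * n, [1] + [0] * n   # current / previous connection polynomials
    L, m = 0, 1                           # LFSR length, steps since last length change
    for i in range(n):
        # Discrepancy between s[i] and the current LFSR's prediction.
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:
            T = C[:]
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            L, B, m = i + 1 - L, T, 1
        else:
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            m += 1
    return C[:L + 1], L

# Bits produced by the LFSR with feedback polynomial 1 + x + x^3,
# i.e. the recurrence s[i] = s[i-1] XOR s[i-3].
s = [1, 0, 0]
for i in range(3, 16):
    s.append(s[i - 1] ^ s[i - 3])
poly, L = berlekamp_massey(s)
```

Given 2L noise-free bits, the algorithm returns exactly the generating polynomial, here [1, 1, 0, 1] for 1 + x + x^3.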
The Incremental Effects of Manual Therapy or Booster Sessions in Addition to Exercise Therapy for Knee Osteoarthritis: A Randomized Clinical Trial.
STUDY DESIGN A factorial randomized controlled trial. OBJECTIVES To investigate the addition of manual therapy to exercise therapy for the reduction of pain and increase of physical function in people with knee osteoarthritis (OA), and whether "booster sessions" compared to consecutive sessions may improve outcomes. BACKGROUND The benefits of providing manual therapy in addition to exercise therapy, or of distributing treatment sessions over time using periodic booster sessions, in people with knee OA are not well established. METHODS All participants had knee OA and were provided 12 sessions of multimodal exercise therapy supervised by a physical therapist. Participants were randomly allocated to 1 of 4 groups: exercise therapy in consecutive sessions, exercise therapy distributed over a year using booster sessions, exercise therapy plus manual therapy without booster sessions, and exercise therapy plus manual therapy with booster sessions. The primary outcome measure was the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC score; 0-240 scale) at 1-year follow-up. Secondary outcome measures were the numeric pain-rating scale and physical performance tests. RESULTS Of 75 participants recruited, 66 (88%) were retained at 1-year follow-up. Factorial analysis of covariance of the main effects showed significant benefit from booster sessions (P = .009) and manual therapy (P = .023) over exercise therapy alone. Group analysis showed that exercise therapy with booster sessions (WOMAC score, -46.0 points; 95% confidence interval [CI]: -80.0, -12.0) and exercise therapy plus manual therapy (WOMAC score, -37.5 points; 95% CI: -69.7, -5.5) had superior effects compared with exercise therapy alone. The combined strategy of exercise therapy plus manual therapy with booster sessions was not superior to exercise therapy alone. 
CONCLUSION Distributing 12 sessions of exercise therapy over a year in the form of booster sessions was more effective than providing 12 consecutive exercise therapy sessions. Providing manual therapy in addition to exercise therapy improved treatment effectiveness compared to providing 12 consecutive exercise therapy sessions alone. Trial registered with the Australian New Zealand Clinical Trials Registry (ACTRN12612000460808).
Extending the TAM for a World-Wide-Web context
Ease of use and usefulness are believed to be fundamental in determining the acceptance and use of various corporate ITs. These beliefs, however, may not explain the user's behavior toward newly emerging ITs, such as the World-Wide-Web (WWW). In this study, we introduce playfulness as a new factor that reflects the user's intrinsic belief in WWW acceptance. Using it as an intrinsic motivation factor, we extend and empirically validate the Technology Acceptance Model (TAM) for the WWW context.
Hermes: a distributed event-based middleware architecture
In this paper, we argue that there is a need for an event-based middleware to build large-scale distributed systems. Existing publish/subscribe systems still have limitations compared to invocation-based middlewares. We introduce Hermes, a novel event-based distributed middleware architecture that follows a type- and attribute-based publish/subscribe model. It centres around the notion of an event type and supports features commonly known from object-oriented languages like type hierarchies and super-type subscriptions. A scalable routing algorithm using an overlay routing network is presented that avoids global broadcasts by creating rendezvous nodes. Fault-tolerance mechanisms that can cope with different kinds of failures in the middleware are integrated with the routing algorithm resulting in a scalable and robust system.
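The type-hierarchy and supertype-subscription idea can be illustrated with a minimal single-broker sketch (routing over an overlay network, rendezvous nodes, and fault tolerance, which are central to Hermes itself, are out of scope here):

```python
# Minimal type-based publish/subscribe with supertype subscriptions, in the
# spirit of Hermes's event-type hierarchies. Event types are plain Python
# classes, so the subtype relation is ordinary inheritance.
class Broker:
    def __init__(self):
        self.subs = {}                       # event type -> list of callbacks

    def subscribe(self, event_type, callback):
        self.subs.setdefault(event_type, []).append(callback)

    def publish(self, event):
        # Deliver to subscribers of the event's type and of all its supertypes.
        for etype, callbacks in self.subs.items():
            if isinstance(event, etype):
                for cb in callbacks:
                    cb(event)

class StockEvent:                   # supertype in the event-type hierarchy
    def __init__(self, symbol):
        self.symbol = symbol

class TradeEvent(StockEvent):       # subtype: a specific kind of StockEvent
    pass

broker = Broker()
received = []
broker.subscribe(StockEvent, lambda e: received.append(("any", e.symbol)))
broker.subscribe(TradeEvent, lambda e: received.append(("trade", e.symbol)))
broker.publish(TradeEvent("ACME"))  # matches both subscriptions
broker.publish(StockEvent("INIT"))  # matches only the supertype subscription
```

A subscription to a supertype transparently receives events of every subtype, which is the "super-type subscription" feature the abstract describes.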