Adding numbers to text classification
Many real-world problems involve a combination of both text- and numerical-valued features. For example, in email classification, it is possible to use instance representations that consider not only the text of each message, but also numerical-valued features such as the length of the message or the time of day at which it was sent. Text-classification methods have thus far not easily incorporated numerical features. In earlier work we described an approach for converting numerical features into bags of tokens so that text-classification methods can be applied to numerical classification problems, and showed that the resulting learning methods are competitive with traditional numerical classification methods. In this paper we use this conversion to learn on problems that involve a combination of text and numbers, and show that the resulting methods outperform competing approaches. Further, we show that selecting the best classification method using text-only features and then adding numerical features to the problem (as might happen if numerical features are only later added to a pre-existing text-classification problem) gives performance that rivals the more time-consuming approach of evaluating all classification methods on the full set of both text and numerical features.
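The abstract does not spell out the exact conversion scheme, so the following is only a minimal sketch, assuming simple quantile binning: each numerical value is mapped to a pseudo-token (the `numeric_to_tokens` helper and the `msg_len` feature name are illustrative) that can be appended to the message text and handled by any bag-of-words classifier.

```python
# Minimal sketch (not the paper's exact scheme): convert a numerical feature
# into pseudo-tokens by quantile binning, so it can be appended to the text
# and handled by any bag-of-words classifier.
import numpy as np

def numeric_to_tokens(values, n_bins=8, feature_name="msg_len"):
    """Map each numeric value to a token such as 'msg_len_bin3'."""
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(values, edges)
    return [f"{feature_name}_bin{b}" for b in bins]

texts = ["free offer click now", "meeting moved to 3pm", "win a prize today"]
lengths = [21, 20, 18]
augmented = [t + " " + tok for t, tok in zip(texts, numeric_to_tokens(lengths))]
print(augmented)  # each message now carries one numeric pseudo-token
```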
Effect of altitude on urinary leukotriene (LT) E4 excretion and airway responsiveness to histamine in children with atopic asthma.
Asthmatic subjects who are resident at altitude may experience a deterioration in lung function following a stay at sea level. To determine whether measurement of urinary leukotriene E4 (LTE4) reflects changes in asthma severity and airway responsiveness, 14 allergic asthmatic subjects resident at altitude (1560 m, Davos, Switzerland) were studied. Subjects were randomly divided into two groups. Measurements of baseline forced expiratory volume in one second (FEV1), the concentration of histamine producing a 20% decrease in FEV1, (PC20 FEV1), serum total immunoglobulin E (IgE), eosinophil count, and urinary LTE4 concentration were determined prior to and following a 2 week stay in The Netherlands (sea level) in eight subjects (4 males and 4 females, aged 14 +/- 0.5 yrs) (mean +/- SEM) and over a similar time period in six subjects (4 males and 2 females, aged 15 +/- 0.3 yrs) resident in Davos, Switzerland. There was no significant difference in total IgE and eosinophil count, and no significant correlation between urinary LTE4 and PC20FEV1 histamine, FEV1, total IgE, and eosinophil count. In subjects returning to Davos from The Netherlands there was a significant increase in urinary LTE4 from a baseline value of 16.9 pg.mg-1 creatinine (GM, range 0.3-101.7 pg.mg-1 creatinine) to 52.3 pg.mg-1 creatinine (GM, range 8.8-301.6 pg.mg-1 creatinine), a significant decrease in PC20FEV1 from 1.7 mg.ml-1 (GM, range 0.3-16.4 mg.ml-1) to 0.9 mg.ml-1 (GM, range 0.1-->32 mg.ml-1), and a significant fall in FEV1 from 3.0 +/- 0.3 to 2.8 +/- 0.3 l (mean +/- SEM).(ABSTRACT TRUNCATED AT 250 WORDS)
Design of air quality meter and pollution detector
This work describes the design and implementation of an Air Quality Meter and Air Pollution Detector. The technology adopted is a practical implementation of the Internet of Things concept. This project explores the possibilities of using this technology in a rapidly changing world where environmental health is becoming a serious threat. The system is implemented using Bluetooth communication and an Arduino microcontroller board, together with temperature and humidity sensors and several gas sensors to monitor changes in air quality.
SPICE Model of Memristor with Nonlinear Dopant Drift
A mathematical model of the memristor prototype, manufactured in 2008 in Hewlett-Packard Labs, is described in the paper. It is shown that the hitherto published approaches to modeling the boundary conditions need not conform to the requirements for the behavior of a practical circuit element. The described SPICE model of the memristor is thus constructed as an open model, enabling additional modifications of the nonlinear boundary conditions. Its functionality is illustrated through computer simulations.
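For reference, the commonly cited HP memristor equations with a nonlinear (Joglekar-type) dopant-drift window are sketched below; the paper's SPICE model treats such boundary conditions as an open, user-modifiable part of the model, so the exact window used there may differ.

```latex
% Commonly used HP memristor equations with a Joglekar-type window f(x);
% the SPICE model described above keeps the boundary behaviour modifiable.
\begin{align}
v(t) &= \bigl[R_{\mathrm{ON}}\,x(t) + R_{\mathrm{OFF}}\,(1 - x(t))\bigr]\, i(t),\\
\frac{dx}{dt} &= \frac{\mu_v R_{\mathrm{ON}}}{D^2}\, i(t)\, f(x), \qquad x = \frac{w}{D} \in [0,1],\\
f(x) &= 1 - (2x - 1)^{2p} \quad \text{(nonlinear dopant-drift window, integer } p\text{)}.
\end{align}
```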
Liberal perspectives for South Asia
"Liberal Perspectives for South Asia" discusses the essentials of the liberal philosophy, while also indicating how appropriate it is in the South Asian context. In the past, the subcontinent was renowned for the skill with which it took up the dominant ideologies of the west and articulated them for the Asian context. In the post-colonial period, the only dominant ideology that was sidetracked by all political parties was liberalism, the ideology that promoted freedom of the individual. The idea of a book about the need for liberalism in the subcontinent was the brainchild of Chanaka Amaratunga, who set up the first avowedly Liberal Party in Sri Lanka. Many political parties have implemented liberal policies on an ad hoc basis and without a proper framework to guide them. Not all parties would accept all aspects of a liberal programme, however, in a context in which many parties are seeking an ideology that accords both with the present times and trends, and also with some of the goals they accepted in the past. It is hoped that this volume will provide food for thought and ideas for adoption and incorporation within the party programme. Ranging from erudite expositions of classic liberal thinkers to lively discussions of liberal economic principles put into practice by imaginative entrepreneurs, this volume is essential reading for a region making a swift transition into the contemporary, globalized world.
Network-based statistic: Identifying differences in brain networks
Large-scale functional or structural brain connectivity can be modeled as a network, or graph. This paper presents a statistical approach to identify connections in such a graph that may be associated with a diagnostic status in case-control studies, changing psychological contexts in task-based studies, or correlations with various cognitive and behavioral measures. The new approach, called the network-based statistic (NBS), is a method to control the family-wise error rate (in the weak sense) when mass-univariate testing is performed at every connection comprising the graph. To potentially offer a substantial gain in power, the NBS exploits the extent to which the connections comprising the contrast or effect of interest are interconnected. The NBS is based on the principles underpinning traditional cluster-based thresholding of statistical parametric maps. The purpose of this paper is to: (i) introduce the NBS for the first time; (ii) evaluate its power with the use of receiver operating characteristic (ROC) curves; and, (iii) demonstrate its utility with application to a real case-control study involving a group of people with schizophrenia for which resting-state functional MRI data were acquired. The NBS identified an expansive dysconnected subnetwork in the group with schizophrenia, primarily comprising fronto-temporal and occipito-temporal dysconnections, whereas a mass-univariate analysis controlled using the false discovery rate failed to identify a subnetwork.
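A minimal sketch of the permutation-testing idea behind the NBS is given below, assuming edge-wise statistics are already computed and using networkx for component extraction (the function names and the use of networkx are illustrative, not from the paper).

```python
# Minimal sketch of the NBS idea: threshold edge-wise statistics, measure the
# size (number of edges) of connected suprathreshold components, and compare
# against a permutation null distribution of the maximal component size.
import networkx as nx

def max_component_size(stat_matrix, threshold):
    """Largest suprathreshold component, measured in edges."""
    n = stat_matrix.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if stat_matrix[i, j] > threshold:
                g.add_edge(i, j)
    return max((g.subgraph(c).number_of_edges() for c in nx.connected_components(g)),
               default=0)

def nbs_pvalue(observed_stats, permuted_stats_list, threshold):
    """P-value for the observed largest component under the permutation null."""
    observed = max_component_size(observed_stats, threshold)
    null = [max_component_size(s, threshold) for s in permuted_stats_list]
    return (1 + sum(m >= observed for m in null)) / (1 + len(null))
```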
Image and Pathological Changes after Radiofrequency Ablation of Invasive Breast Cancer: A Pilot Study of Nonsurgical Therapy of Early Breast Cancer
The surgical treatment of early breast cancer has proceeded to less invasive approaches with better cosmetic results. The current study was undertaken to evaluate the clinical and pathological findings after radiofrequency ablation (RFA) without resection over a longer period of time. A total of 14 patients with breast cancer were enrolled. All patients were diagnosed with invasive ductal carcinoma, and the median breast tumor size was 12 mm (range, 6–20 mm). Six patients received RFA followed by immediate resection, and eight received RFA without resection. The patients without resection were evaluated by ultrasound, MRI, and the pathological findings of a core needle biopsy after RFA. The removed specimens were examined by hematoxylin-eosin (HE) staining and nicotinamide adenine dinucleotide (NADH) diaphorase staining. The median follow-up of the patients was 39.9 months. NADH staining was necessary to diagnose complete tumor cell death in the tissue for 3 months after RFA. However, HE staining alone could confirm the effect without NADH staining more than 6 months after RFA. Post-RFA, MRI scans clearly demonstrated the area as a completely ablated lesion in all patients without resection. The ablated area detected by MRI or ultrasound became gradually smaller. All patients who underwent RFA with no resection were alive without relapse. RFA therefore could be an effective alternative to partial mastectomy for early breast cancer. Further research will be necessary to establish the standardization of the indications, as well as the optimal techniques and post-treatment evaluation modalities.
Dendritic Cell-Based Xenoantigen Vaccination for Prostate Cancer Immunotherapy
Many tumor-associated Ags represent tissue differentiation Ags that are poorly immunogenic. Their weak immunogenicity may be due to immune tolerance to self-Ags. Prostatic acid phosphatase (PAP) is just such an Ag that is expressed by both normal and malignant prostate tissue. We have previously demonstrated that PAP can be immunogenic in a rodent model. However, generation of prostate-specific autoimmunity was seen only when a xenogeneic homolog of PAP was used as the immunogen. To explore the potential role of xenoantigen immunization in cancer patients, we performed a phase I clinical trial using dendritic cells pulsed with recombinant mouse PAP as a tumor vaccine. Twenty-one patients with metastatic prostate cancer received two monthly vaccinations of xenoantigen-loaded dendritic cells with minimal treatment-associated side effects. All patients developed T cell immunity to mouse PAP following immunization. Eleven of the 21 patients also developed T cell proliferative responses to the homologous self-Ag. These responses were associated with Ag-specific IFN-γ and/or TNF-α secretion, but not IL-4, consistent with induction of Th1 immunity. Finally, 6 of 21 patients had clinical stabilization of their previously progressing prostate cancer. All six of these patients developed T cell immunity to human PAP following vaccination. These results demonstrate that xenoantigen immunization can break tolerance to a self-Ag in humans, resulting in a clinically significant antitumor effect.
Developmental Consequences of Fetal Exposure to Drugs: What We Know and What We Still Must Learn
Most drugs of abuse easily cross the placenta and can affect fetal brain development. In utero exposures to drugs thus can have long-lasting implications for brain structure and function. These effects on the developing nervous system, before homeostatic regulatory mechanisms are properly calibrated, often differ from their effects on mature systems. In this review, we describe current knowledge on how alcohol, nicotine, cocaine, amphetamine, Ecstasy, and opiates (among other drugs) produce alterations in neurodevelopmental trajectory. We focus both on animal models and on available clinical and imaging data from cross-sectional and longitudinal human studies. Early studies of fetal exposures focused on classic teratological methods that are insufficient for revealing more subtle effects that are nevertheless very behaviorally relevant. Modern mechanistic approaches have informed us greatly as to how we might ameliorate the induced deficits in brain formation and function. We conclude that better delineation of sensitive periods and dose–response relationships, together with long-term longitudinal studies assessing offspring's future risk of learning disabilities, mental health disorders, and limited neural adaptations, is crucial to limiting the societal impact of these exposures.
DETERMINING TRENDS IN GLOBAL CRIME AND JUSTICE : AN OVERVIEW OF RESULTS FROM THE UNITED NATIONS SURVEYS OF CRIME TRENDS AND OPERATIONS OF CRIMINAL JUSTICE SYSTEMS
Effectively measuring comparative developments in crime and justice trends from a global perspective remains a key challenge for international policy makers. The ability to compare crime levels across countries enables policy makers to determine where interventions should occur and improves understanding of the key causes of crime in different societies across the globe. Nevertheless, there are significant challenges to comparative work in the field of criminal justice, not least of which is the ability to accurately quantify levels of crime across countries. Taking into account the methodological weaknesses of using cross-country data sources, the present article provides conclusions obtained by analysing the large amount of data available from the various United Nations surveys of crime trends and operations of criminal justice systems. “Not everything that can be counted, counts. And not everything that counts can be counted.” Albert Einstein
RelNN: A Deep Neural Model for Relational Learning
Statistical relational AI (StarAI) aims at reasoning and learning in noisy domains described in terms of objects and relationships by combining probability with first-order logic. With huge advances in deep learning in recent years, combining deep networks with first-order logic has been the focus of several recent studies. Many of the existing attempts, however, only focus on relations and ignore object properties. The attempts that do consider object properties are limited in terms of modelling power or scalability. In this paper, we develop relational neural networks (RelNNs) by adding hidden layers to relational logistic regression (the relational counterpart of logistic regression). We learn latent properties for objects both directly and through general rules. Back-propagation is used for training these models. A modular, layer-wise architecture facilitates applying techniques developed within the deep learning community to our architecture. Initial experiments on eight tasks over three real-world datasets show that RelNNs are promising models for relational learning.
Examining micro-payments for participatory sensing data collections
The rapid adoption of mobile devices that are able to capture and transmit a wide variety of sensing modalities (media and location) has enabled a new data collection paradigm - participatory sensing. Participatory sensing initiatives organize individuals to gather sensed information using mobile devices through cooperative data collection. A major factor in the success of these data collection projects is sustained, high quality participation. However, since data capture requires a time and energy commitment from individuals, incentives are often introduced to motivate participants. In this work, we investigate the use of micro-payments as an incentive model. We define a set of metrics that can be used to evaluate the effectiveness of incentives and report on findings from a pilot study using various micro-payment schemes in a university campus sustainability initiative.
Individualised and complex experiences of integrative cancer support care: combining qualitative and quantitative data
The widespread use of complementary therapies alongside biomedical treatment by people with cancer is not supported by evidence from clinical trials. We aimed to use combined qualitative and quantitative data to describe and measure individualised experiences and outcomes. In three integrative cancer support centres (two breast cancer only) in the UK, consecutive patients completed the individualised outcome questionnaire Measure Yourself Concerns and Wellbeing (MYCaW) before and after treatment. MYCaW collects quantitative data (seven-point scales) and written qualitative data, and the qualitative data were analysed using published categories. Seven hundred and eighty-two participants, 92% female, mean age 51 years, nominated a wide range of concerns. Psychological and emotional concerns predominated. At follow-up, the mean changes (improvements) in scores (n = 588) were: concern 1, 2.06 (95% CI 1.92–2.20); concern 2, 1.74 (95% CI 1.60–1.90); and well-being, 0.64 (95% CI 0.52–0.75). The most common responses to ‘what has been the most important aspect for you?’ were ‘receiving complementary therapies on an individual or group basis’ (26.2%); ‘support and understanding received from therapists’ (17.1%) and ‘time spent with other patients at the centres’ (16.1%). Positive (61.5%) and negative (38.5%) descriptions of ‘other things affecting your health’ correlated with larger and smaller improvements in concerns and well-being, respectively. In a multicentre evaluation, the MYCaW questionnaire provides rich data about patient experience, changes over time and perceptions of what was important to each individual with cancer within that experience. It is unlikely that meaningful evaluations of this complex intervention could be carried out by quantitative methods alone.
An anatomy of a YouTube meme
Launched in 2005 as a video-sharing website, YouTube has become an emblem of participatory culture. A central feature of this website is the dazzling number of derivative videos, uploaded daily by many thousands. Using the ‘meme’ concept as an analytic tool, this article aims at uncovering the attributes common to ‘memetic videos’ – popular clips that generate extensive user engagement by way of creative derivatives. Drawing on YouTube popularity-measurements and on user-generated playlists, a corpus of 30 prominent memetic videos was assembled. A combined qualitative and quantitative analysis of these videos yielded six common features: focus on ordinary people, flawed masculinity, humor, simplicity, repetitiveness and whimsical content. Each of these attributes marks the video as incomplete or flawed, thereby invoking further creative dialogue. In its concluding section, the article addresses the skyrocketing popularity of mimicking in contemporary digital culture, linking it to economic, social and cultural logics of participation.
An IoT Data Communication Framework for Authenticity and Integrity
The Internet of Things has been widely applied in everyday life, ranging from transportation and healthcare to smart homes. As most IoT devices have constrained resources and limited storage capacity, sensing data need to be transmitted to and stored at resource-rich platforms, such as a cloud. IoT applications retrieve sensing data from the cloud for analysis and decision-making purposes. Ensuring the authenticity and integrity of the sensing data is essential for the correctness and safety of IoT applications. We summarize the new challenges of an IoT data communication framework with authenticity and integrity and argue that existing solutions cannot be easily adopted. We present two solutions, called Dynamic Tree Chaining (DTC) and Geometric Star Chaining (GSC), that provide authenticity, integrity, sampling uniformity, system efficiency, and application flexibility to IoT data communication. Extensive simulations and prototype emulation experiments driven by real IoT data show that the proposed system is more efficient than alternative solutions in terms of time and space.
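As a rough illustration of the chaining idea (not the paper's exact DTC or GSC constructions), the sketch below folds per-reading digests into a single hash so that one signed value covers a whole batch and any tampering or reordering is detectable.

```python
# Minimal sketch (not the paper's exact DTC/GSC): chain per-reading digests so
# that a single signed digest authenticates a whole batch of sensing data and
# any modification or reordering breaks verification.
import hashlib

def chain_digest(readings):
    """Fold each reading into a running SHA-256 digest."""
    digest = b"\x00" * 32
    for r in readings:
        digest = hashlib.sha256(digest + r.encode("utf-8")).digest()
    return digest

batch = ["t=0,temp=21.5", "t=1,temp=21.7", "t=2,temp=21.6"]
signed_digest = chain_digest(batch)          # the device signs this value once
assert chain_digest(batch) == signed_digest  # the verifier recomputes the chain
```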
Towards Autonomous Excavation of Fragmented Rock: Experiments, Modelling, Identification and Control
Increased competition and recent globalization in the minerals industry have resulted in further demands for advanced mining equipment technology. The autonomous excavation problem for fragmented rock entails the design of a control system capable of regulating the complicated bucket-rock interactions that occur during excavation operations. A notable amount of previous work has been performed to address the excavation automation problem, although this work has mainly focused on the excavation of granular material such as soil rather than on rock. Moreover, none has resulted in widespread industry adoption of automated rock loading technologies. This thesis revisits the autonomous excavation problem for fragmented rock from a fresh perspective, and in particular, focuses on the problem of autonomous excavation using load-haul-dump (LHD) underground mining machines. First, an extensive review of the state-of-the-art in autonomous excavation is provided. Then, results of pioneering full-scale experimental studies are provided. These studies were carried out with the intent of identifying the evolution of machine parameters during free-space motions of the employed LHD mechanism, and during a selection of excavation trials conducted by skilled operators in fragmented rock typical of an underground hard-rock mining scenario. It was discovered that information contained within actuating cylinder pressure signals might offer a means for identification of the bucket-rock interaction status. Having reviewed the conventional techniques for development of robot dynamical equations of motion, the results are presented of modelling the LHD loader mechanism
Prevalence of tinea pedis, tinea unguium of toenails and tinea capitis in school children from Barcelona.
OBJECTIVE To evaluate the prevalence of tinea capitis, tinea pedis, and tinea unguium in children from several schools of Barcelona city. METHODS During the period 2003-2004, a prospective cross-sectional study was carried out in 1,305 children (9% immigrant population) between the ages of 3 and 15 in 17 schools in Barcelona. A systematic examination of the feet (including nails) and the scalp was performed to identify lesions compatible with tinea. Cultures of scalp and feet samples were done and analysis of environmental samples was performed for dermatophyte isolation. RESULTS Dermatophytes were isolated in 2.9% of the samples, with a prevalence of 2.5% in feet, 0.23% in scalp, and 0.15% in toenails. The predominant etiologic agents in feet were Trichophyton mentagrophytes in 45.7% of the cases and Trichophyton rubrum in 31.4%. In the nails, T. rubrum and Trichophyton tonsurans were isolated, while T. mentagrophytes (2 cases) and Trichophyton violaceum (1 case) were identified in scalp samples. Forty-five per cent of dermatophytes were isolated from healthy feet, the majority of cases in children 13-15 years old (p < 0.05). Microsporum gypseum was the only agent identified in the environmental samples, and was also found in one of the cases of tinea pedis. CONCLUSION The results of this study demonstrate a low prevalence of tinea capitis and tinea unguium in school children of Barcelona. In contrast, a high prevalence of dermatophytes in feet was found, highlighting the high prevalence of healthy carriers of dermatophytes on the feet.
Machine Learning Tips and Tricks for Power Line Communications
A great deal of attention has recently been given to Machine Learning (ML) techniques in many different application fields. This paper provides a vision of what ML can do in Power Line Communications (PLC). We first briefly describe classical formulations of ML, and distinguish deterministic from statistical learning models with relevance to communications. We then discuss ML applications in PLC for each layer, namely, for characterization and modeling, for the development of physical layer algorithms, and for media access control and networking. Finally, other applications of PLC that can benefit from the use of ML, such as grid diagnostics, are analyzed. Illustrative numerical examples are reported to serve the purpose of validating the ideas and motivate future research endeavors in this stimulating signal/data processing field.
SAFER: System-level Architecture for Failure Evasion in Real-time Applications
Recent trends towards increasing complexity in distributed embedded real-time systems pose challenges in designing and implementing a reliable system such as a self-driving car. The conventional way of improving reliability is to use redundant hardware to replicate the whole (sub)system. Although hardware replication has been widely deployed in hard real-time systems such as avionics, space shuttles and nuclear power plants, it is significantly less attractive to many applications because the amount of necessary hardware multiplies as the size of the system increases. The growing need for flexible system design is also not consistent with hardware replication techniques. To address the need for dependability through redundancy operating in real time, we propose a layer called SAFER (System-level Architecture for Failure Evasion in Real-time applications) to incorporate configurable task-level fault-tolerance features to tolerate fail-stop processor and task failures in distributed embedded real-time systems. To detect such failures, SAFER monitors the health status and state information of each task and broadcasts the information. When a failure is detected using either time-based failure detection or event-based failure detection, SAFER reconfigures the system to retain the functionality of the whole system. We provide a formal analysis of the worst-case timing behaviors of SAFER features. We also describe the modeling of a system equipped with SAFER to analyze timing characteristics through a model-based design tool called SysWeaver. SAFER has been implemented on Ubuntu 10.04 LTS and deployed on Boss, an award-winning autonomous vehicle developed at Carnegie Mellon University. We show various measurements using simulation scenarios used during the 2007 DARPA Urban Challenge. Finally, we present a case study of failure recovery by SAFER when node failures are injected.
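A minimal sketch of the time-based failure detection described above is given below: tasks broadcast heartbeats and a monitor flags any task whose heartbeat is older than a timeout. The class and names are illustrative, not SAFER's actual API.

```python
# Minimal sketch of time-based failure detection: each task periodically
# broadcasts a heartbeat; a monitor declares a task failed if no heartbeat
# arrives within its configured timeout, which would trigger reconfiguration.
import time

class HeartbeatMonitor:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_seen = {}

    def heartbeat(self, task_id):
        self.last_seen[task_id] = time.monotonic()

    def failed_tasks(self):
        now = time.monotonic()
        return [t for t, ts in self.last_seen.items() if now - ts > self.timeout_s]

monitor = HeartbeatMonitor(timeout_s=0.05)
monitor.heartbeat("planner")
time.sleep(0.1)
print(monitor.failed_tasks())  # ['planner'] -> switch to a hot standby
```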
Constructing Probability Boxes and Dempster-Shafer Structures
This report summarizes a variety of the most useful and commonly applied methods for obtaining Dempster-Shafer structures, and their mathematical kin, probability boxes, from empirical information or theoretical knowledge. The report includes a review of the aggregation methods for handling agreement and conflict when multiple such objects are obtained from different sources.
Heterogeneous integration of microscale compound semiconductor devices by micro-transfer-printing
Integrating microscale electronic devices onto non-native substrates enables new kinds of products with desirable functionalities and cost structures that are inaccessible by conventional means. Micro assembly technologies are practical ways to make such microscale heterogeneous device combinations possible. Elastomer-stamp micro-transfer-printing technology (μTP) is a widely demonstrated form of micro assembly, with demonstrated applicability in optical communications, magnetic storage, concentrator photovoltaics and display technologies. Here we describe new experiments designed to assess the useful lifetime of the viscoelastic elastomer transfer stamp, and also describe the methodology and results for heterogeneous integration of microscale compound semiconductor devices onto non-native substrates using μTP.
Antipsychotic-induced type 2 diabetes: Evidence from a large health plan database
Case evidence suggests that some of the atypical antipsychotics may induce type 2 diabetes. The objective of this study was to evaluate the association of antipsychotic treatment with type 2 diabetes in a large health plan database. Claims data for patients with psychosis within a health plan of nearly 2 million members were analyzed using logistic regression. Frequencies of newly treated type 2 diabetes in patients untreated with antipsychotics and among patients treated with quetiapine, risperidone, olanzapine, and conventional antipsychotics were compared. Based on exposure measured in months of antipsychotic treatment, quetiapine and risperidone patients had estimated odds of receiving treatment for type 2 diabetes that were lower than those of patients untreated with antipsychotics (not statistically significant); patients treated with conventional antipsychotics had estimated odds that were virtually equivalent to those of patients untreated with antipsychotics; olanzapine alone had odds that were significantly greater than those of patients untreated with antipsychotics (P = 0.0247). Odds ratios based on 8 months of screening for pre-existing type 2 diabetes and assuming 12 months of antipsychotic treatment were: risperidone = 0.660 (95% CI 0.311-1.408); olanzapine = 1.426 (95% CI 1.046-1.955); quetiapine = 0.976 (95% CI 0.422-2.271); and conventional antipsychotics = 1.049 (95% CI 0.688-1.613). Case reports, prospective trials, and other retrospective studies have increasingly implicated olanzapine and clozapine as causing or exacerbating type 2 diabetes. Few have implicated risperidone while evidence on quetiapine has been limited. This study supports earlier findings on risperidone versus olanzapine and builds evidence on quetiapine. Additional studies are needed to evaluate the association of antipsychotic treatment with type 2 diabetes.
No-truncation approach to cosmic microwave background anisotropies
We offer a method of calculating the source term in the line-of-sight integral for cosmic microwave background anisotropies without using a truncated partial-wave expansion in the Boltzmann hierarchy.
BOLD Features to Detect Texture-less Objects
Object detection in images withstanding significant clutter and occlusion is still a challenging task whenever the object surface is characterized by poor informative content. We propose to tackle this problem with a compact and distinctive representation of groups of neighboring line segments aggregated over limited spatial supports and invariant to rotation, translation and scale changes. Notably, our proposal leverages the inherent strengths of descriptor-based approaches, i.e. robustness to occlusion and clutter and scalability with respect to the size of the model library, even when dealing with scarcely textured objects.
On the inevitability of death.
It was one of those weeks when it was impossible to deny the inevitability of death. Like many of us who work in palliative care, I find it necessary to protect myself from time to time by denying death, in subtle ways, so as not to be overwhelmed by death and what many of us refer to as death anxiety or death terror. But this past week was just too overwhelming. The constancy and rapidity of a series of confrontations with death shared with patients combined with a series of deaths of close friends and family forced me to look away. I could not “stare at the sun,” as Yalom (2008) wrote about the impossibility of confronting and contemplating death for too prolonged and sustained a period of time. It had been brewing for some time, but last week was the tipping point. My ability to grapple with my own death anxiety, while working day in and day out with patients who were in despair about their mortality and the “nearness” and “reality” of their deaths from cancer, was being challenged on a regular basis. Age milestones, family milestones, and work milestones all conspired to penetrate any defenses I might have employed to deal with aging and the much-too-rapid passage of time. Even the deaths of celebrities who made up the cultural context of my life shook me: Robin Williams, Garry Shandling, and Mary Tyler Moore. How could they have died? I remember turning, one day recently, to my son and half-jokingly saying out loud, “If Don Rickles [a famous American comedian] can die, that means ANYBODY can die!” That revelation included an evaluation of the reality of my own fate and the stark inescapable reality of my own mortality. I also can’t underestimate the effect of the very recent death of one of my heroes and a pioneer in death-and-dying studies. Avery Weisman, a leading Harvard psychiatrist and director of the Omega Project with Bill Worden in the 1960s, died at the age of 103. Avery taught us about “middle knowledge,” a concept related to coping with death anxiety (Weisman, 1972). The concept of middle knowledge explicates the fact that denial of death is complex and often involves two simultaneously and opposite views of the inevitability of death. We can deny death and minimize the bleakness of one’s prognosis with a terminal illness while simultaneously making plans for that death by completing a will, arranging burial plots, and planning a funeral service. This concept of middle knowledge was important for several reasons: it suggested the benefits of denial in preventing us from being overwhelmed by death terror, thus allowing us to assimilate and even accommodate the reality of our deaths at a manageable pace, and it suggested that human nature and biology utilize denial in complex ways that are not uniformly detrimental to the process of dying.
Learning filter banks within a deep neural network framework
Mel-filter banks are commonly used in speech recognition, as they are motivated from theory related to speech production and perception. While features derived from mel-filter banks are quite popular, we argue that this filter bank is not really an appropriate choice as it is not learned for the objective at hand, i.e. speech recognition. In this paper, we explore replacing the filter bank with a filter bank layer that is learned jointly with the rest of a deep neural network. Thus, the filter bank is learned to minimize cross-entropy, which is more closely tied to the speech recognition objective. On a 50-hour English Broadcast News task, we show that we can achieve a 5% relative improvement in word error rate (WER) using the filter bank learning approach, compared to having a fixed set of filters.
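A minimal sketch of the idea, assuming power-spectral input and illustrative layer sizes (not the paper's exact architecture): the filter bank is just a trainable linear layer whose weights receive gradients from the cross-entropy loss.

```python
# Minimal sketch: a filter-bank layer applied to power-spectral input and
# trained jointly with the classifier via cross-entropy (shapes are
# illustrative, not the exact architecture from the paper).
import torch
import torch.nn as nn

class LearnedFilterbankNet(nn.Module):
    def __init__(self, n_fft_bins=257, n_filters=40, n_classes=42):
        super().__init__()
        self.filterbank = nn.Linear(n_fft_bins, n_filters, bias=False)  # learned filters
        self.classifier = nn.Sequential(
            nn.Linear(n_filters, 512), nn.ReLU(), nn.Linear(512, n_classes))

    def forward(self, power_spectrum):             # (batch, n_fft_bins)
        energies = torch.relu(self.filterbank(power_spectrum))
        log_energies = torch.log(energies + 1e-6)  # log compression, as with mel features
        return self.classifier(log_energies)

model = LearnedFilterbankNet()
logits = model(torch.rand(8, 257))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 42, (8,)))
loss.backward()  # gradients flow into the filter-bank weights as well
```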
Introducing the Arabic WordNet project
Arabic is the official language of hundreds of millions of people in twenty Middle East and northern African countries, and is the religious language of all Muslims of various ethnicities around the world. Surprisingly little has been done in the field of computerised language and lexical resources. It is therefore motivating to develop an Arabic (WordNet) lexical resource that discovers the richness of Arabic as described in Elkateb (2005). This paper describes our approach towards building a lexical resource in Standard Arabic. Arabic WordNet (AWN) will be based on the design and contents of the universally accepted Princeton WordNet (PWN) and will be mappable straightforwardly onto PWN 2.0 and EuroWordNet (EWN), enabling translation on the lexical level to English and dozens of other languages. Several tools specific to this task will be developed. AWN will be a linguistic resource with a deep formal semantic foundation. Besides the standard wordnet representation of senses, word meanings are defined with a machine understandable semantics in first order logic. The basis for this semantics is the Suggested Upper Merged Ontology (SUMO) and its associated domain ontologies. We will greatly extend the ontology and its set of mappings to provide formal terms and definitions equivalent to each synset.
3D printed tactile picture books for children with visual impairments: a design probe
Young children with visual impairments greatly benefit from tactile graphics (illustrations, images, puzzles, objects) during their learning processes. In this paper we present insight about using a 3D printed tactile picture book as a design probe. This has allowed us to identify and engage stakeholders in our research on improving the technical and human processes required for creating 3D printed tactile pictures, and cultivate a community of practice around these processes. We also contribute insight about how our in-person and digital methods of interacting with teachers, parents, and other professionals dedicated to supporting children with visual impairments contribute to research practices.
The Role of Review Arousal in Online Reviews: Insights from EEG Data
This paper examines the effects of review arousal on the perceived helpfulness of online reviews, and on consumers’ emotional responses elicited by the reviews. Drawing on emotion theories in psychology and neuroscience, we focus on four emotions – anger, anxiety, excitement, and enjoyment – that are common in the context of online reviews. The effects of the four emotions embedded in online reviews were examined using a controlled experiment. Our preliminary results show that reviews embedded with the four emotions (arousing reviews) are perceived to be more helpful than reviews without the emotions embedded (non-arousing reviews). However, reviews embedded with anxiety and enjoyment (low-arousal reviews) are perceived to be more helpful than reviews embedded with anger and excitement (high-arousal reviews). Furthermore, compared to reviews embedded with anger, reviews embedded with anxiety are associated with higher EEG activity that is generally linked to negative emotions. The results suggest a non-linear relationship between review arousal and perceived helpfulness, which can be explained by the consumers’ emotional responses elicited by the reviews.
Unsupervised Interpretable Pattern Discovery in Time Series Using Autoencoders
We study the use of feed-forward convolutional neural networks for the unsupervised problem of mining recurrent temporal patterns mixed in multivariate time series. Traditional convolutional autoencoders lack interpretability for two main reasons: the number of patterns corresponds to the manually fixed number of convolution filters, and the patterns are often redundant and correlated. To recover clean patterns, we introduce different elements in the architecture, including an adaptive rectified linear unit function that improves pattern interpretability, and a group-lasso regularizer that helps automatically find the relevant number of patterns. We illustrate the necessity of these elements on synthetic data and real data in the context of activity mining in videos.
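The sketch below shows one way a group-lasso penalty over convolution filters can be added to a convolutional autoencoder's reconstruction loss so that unneeded filters shrink toward zero; the paper's adaptive rectified linear unit is not reproduced, and all sizes are illustrative.

```python
# Minimal sketch: group-lasso penalty over convolutional filters, added to the
# autoencoder reconstruction loss so that irrelevant filters are driven to zero
# (the paper's adaptive rectified linear unit is not reproduced here).
import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=3, out_channels=16, kernel_size=11, padding=5)
decoder = nn.ConvTranspose1d(16, 3, kernel_size=11, padding=5)

def group_lasso(weight):
    """Sum of L2 norms, one group per output filter."""
    return weight.flatten(start_dim=1).norm(dim=1).sum()

x = torch.rand(4, 3, 200)                       # a batch of multivariate time series
recon = decoder(torch.relu(conv(x)))
loss = nn.MSELoss()(recon, x) + 1e-3 * group_lasso(conv.weight)
loss.backward()
```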
Synthesis of Forgiving Data Extractors
We address the problem of synthesizing a robust data extractor from a family of websites that contain the same kind of information. This problem is common when trying to aggregate information from many web sites, for example, when extracting information for a price-comparison site. Given a set of example annotated web pages from multiple sites in a family, our goal is to synthesize a robust data extractor that performs well on all sites in the family (not only on the provided example pages). The main challenge is the need to trade off precision for generality and robustness. Our key contribution is the introduction of forgiving extractors that dynamically adjust their precision to handle structural changes, without sacrificing precision on the training set. Our approach uses decision tree learning to create a generalized extractor and converts it into a forgiving extractor, in the form of an XPath query. The forgiving extractor captures a series of pruned decision trees with monotonically decreasing precision and monotonically increasing recall, and dynamically adjusts precision to guarantee sufficient recall. We have implemented our approach in a tool called TREEX and applied it to synthesize extractors for real-world large-scale web sites. We evaluate the robustness and generality of the forgiving extractors by evaluating their precision and recall on: (i) different pages from sites in the training set; (ii) pages from different versions of sites in the training set; and (iii) pages from different (unseen) sites. We compare the results of our synthesized extractor to those of classifier-based extractors and pattern-based extractors, and show that TREEX significantly improves extraction accuracy.
Predict Stock Price with Financial News Based on Recurrent Convolutional Neural Networks
People have long been interested in making profits from financial stock market prediction. However, stock market forecasting has always been a challenging problem because of its uncertainty and volatility. We take a different approach using a model called recurrent convolutional neural networks (RCN) that combines the advantages of convolutions, sequence modeling, and word embedding for stock price analysis and information extraction from financial news. We then combine RCN with technical analysis indicators to predict stock price. The results show that the technical analysis model combined with RCN performs better than technical analysis alone. Besides, the prediction error of RCN is lower than that of long short-term memory networks.
A task-level adaptive MapReduce framework for real-time streaming data in healthcare applications
Healthcare scientific applications, such as body area networks, require deploying hundreds of interconnected sensors to monitor the health status of a host. One of the biggest challenges is the streaming data collected by all those sensors, which needs to be processed in real time. Follow-up data analysis would normally involve moving the collected big data to a cloud data center for status reporting and record tracking purposes. Therefore, an efficient cloud platform with very elastic scaling capacity is needed to support such real-time streaming data applications. Current cloud platforms either lack a module to process streaming data, or scale only at the granularity of coarse-grained compute nodes. In this paper, we propose a task-level adaptive MapReduce framework. This framework extends the generic MapReduce architecture by designing each Map and Reduce task as a consistently running loop daemon. The beauty of this new framework is that the scaling capability is designed at the Map and Reduce task level, rather than at the compute-node level. This strategy is capable of not only scaling up and down in real time, but also leading to effective use of compute resources in the cloud data center. As a first step towards implementing this framework in a real cloud, we developed a simulator that captures workload strength, and provisions just the needed number of Map and Reduce tasks in real time. To further enhance the framework, we applied two streaming data workload prediction methods, smoothing and Kalman filtering, to estimate the unknown workload characteristics. We see a 63.1% performance improvement by using the Kalman filter method to predict the workload. We also use real streaming data workload traces to test the framework. Experimental results show that this framework schedules the Map and Reduce tasks very efficiently as the streaming data changes its arrival rate.
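A minimal sketch of Kalman-filter-based workload prediction, assuming a simple one-dimensional random-walk state model and illustrative noise parameters (the paper's exact model is not given in the abstract); the predicted rate then drives how many task daemons to provision.

```python
# Minimal sketch of 1-D Kalman filtering of the streaming arrival rate, used to
# decide how many Map/Reduce task daemons to provision for the next interval
# (the exact state model and noise parameters here are illustrative).
def kalman_predict_rate(observations, process_var=1.0, measurement_var=4.0):
    estimate, error = observations[0], 1.0
    for z in observations[1:]:
        error += process_var                      # predict step (random-walk model)
        gain = error / (error + measurement_var)  # update step
        estimate += gain * (z - estimate)
        error *= (1 - gain)
    return estimate                               # predicted rate for the next interval

rates = [120, 135, 150, 170, 160, 180]            # records/second per interval
predicted = kalman_predict_rate(rates)
tasks_needed = max(1, round(predicted / 50))      # assume one task handles ~50 rec/s
print(predicted, tasks_needed)
```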
Segmentation of Image Data from Complex Organotypic 3D Models of Cancer Tissues with Markov Random Fields
Organotypic, three dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy.
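For orientation, a standard pairwise MRF (Potts-type) labelling energy of the kind typically minimised in MRF-based segmentation is sketched below; the exact data and smoothness potentials used in the paper may differ.

```latex
% Standard pairwise MRF (Potts) labelling energy typically minimised in
% MRF-based segmentation; the potentials used in the paper may differ.
E(\mathbf{x}) \;=\; \sum_{i \in \mathcal{V}} \underbrace{-\log p(y_i \mid x_i)}_{\text{data term}}
\;+\; \lambda \sum_{(i,j) \in \mathcal{N}} \underbrace{[\,x_i \neq x_j\,]}_{\text{smoothness term}} .
```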
Employees' Intended Information Security Behaviour in Real Estate Organisations: a Protection Motivation Perspective
Due to the amount of identifiable customer personal, financial and other information stored by real estate organisations in their information systems, the threats are real. Challenges to secure the organisational (and customer) data are compounded by the nature of the industry (e.g. the core business and employees’ qualifications are non-security-related). To investigate the factors that influence real estate employees’ intended information security behaviour, we propose a research model based on Protection Motivation Theory (PMT) where we also include previous incidents as constituting threat appraisal components. Our findings from a survey of 105 real estate business employees in Australia reveal that perceived vulnerability, perceived severity, previous incidents, and response efficacy have a positive impact on real estate employees’ information security behavioural intention whereas self-efficacy does not. Our study also determines that response cost has a negative significant effect on intended information security behaviour.
Basic principles of ROC analysis.
The limitations of diagnostic "accuracy" as a measure of decision performance require introduction of the concepts of the "sensitivity" and "specificity" of a diagnostic test. These measures and the related indices, "true positive fraction" and "false positive fraction," are more meaningful than "accuracy," yet do not provide a unique description of diagnostic performance because they depend on the arbitrary selection of a decision threshold. The receiver operating characteristic (ROC) curve is shown to be a simple yet complete empirical description of this decision threshold effect, indicating all possible combinations of the relative frequencies of the various kinds of correct and incorrect decisions. Practical experimental techniques for measuring ROC curves are described, and the issues of case selection and curve-fitting are discussed briefly. Possible generalizations of conventional ROC analysis to account for decision performance in complex diagnostic tasks are indicated. ROC analysis is shown to be related in a direct and natural way to cost/benefit analysis of diagnostic decision making. The concepts of "average diagnostic cost" and "average net benefit" are developed and used to identify the optimal compromise among various kinds of diagnostic error. Finally, the way in which ROC analysis can be employed to optimize diagnostic strategies is suggested.
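A minimal sketch of how an empirical ROC curve arises from sweeping the decision threshold is given below (illustrative scores and labels, not from the paper): each threshold yields one (false positive fraction, true positive fraction) operating point.

```python
# Minimal sketch: sweep the decision threshold over continuous test scores and
# record (false positive fraction, true positive fraction) pairs -- the points
# that trace out the empirical ROC curve.
import numpy as np

def roc_points(scores, labels):
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    points = []
    for t in np.unique(scores):
        predicted_pos = scores >= t
        tpf = (predicted_pos & labels).sum() / labels.sum()
        fpf = (predicted_pos & ~labels).sum() / (~labels).sum()
        points.append((fpf, tpf))
    return sorted(points)

scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2]   # e.g. diagnostic test outputs
labels = [0, 0, 1, 1, 1, 0]                 # 1 = disease present
print(roc_points(scores, labels))
```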
Turbo denoising for mobile photographic applications
We propose a new denoising algorithm for camera pipelines and other photographic applications. We aim for a scheme that is (1) fast enough to be practical even for mobile devices, and (2) handles the realistic content-dependent noise in real camera captures. Our scheme consists of simple two-stage non-linear processing. We introduce a new form of boosting/blending which proves to be very effective in restoring the details lost in the first denoising stage. We also employ IIR filtering to significantly reduce the computation time. Further, we incorporate a novel noise model to address the content-dependent noise. For realistic camera noise, our results are competitive with BM3D, but with nearly 400 times speedup.
DASC: Robust Dense Descriptor for Multi-Modal and Multi-Spectral Correspondence Estimation
Establishing dense correspondences between multiple images is a fundamental task in many applications. However, finding a reliable correspondence between multi-modal or multi-spectral images still remains unsolved due to their challenging photometric and geometric variations. In this paper, we propose a novel dense descriptor, called dense adaptive self-correlation (DASC), to estimate dense multi-modal and multi-spectral correspondences. Based on an observation that the self-similarity existing within images is robust to imaging modality variations, we define the descriptor as a series of adaptive self-correlation similarity measures between patches sampled by a randomized receptive field pooling, in which the sampling pattern is obtained using discriminative learning. The computational redundancy of dense descriptors is dramatically reduced by applying fast edge-aware filtering. Furthermore, in order to address geometric variations including scale and rotation, we propose a geometry-invariant DASC (GI-DASC) descriptor that effectively leverages the DASC through a superpixel-based representation. For a quantitative evaluation of the GI-DASC, we build a novel multi-modal benchmark with varying photometric and geometric conditions. Experimental results demonstrate the outstanding performance of the DASC and GI-DASC in many cases of dense multi-modal and multi-spectral correspondences.
VT: An Expert Elevator Designer That Uses Knowledge-Based Backtracking
VT is an expert system for handling the design of elevator systems that is currently in use at Westinghouse Elevator Company. Although VT tries to postpone each decision in creating a design until all information that constrains the decision is known, for many decisions this postponement is not possible. In these cases, VT uses the strategy of constructing a plausible approximation and successively refining it. VT uses domain-specific knowledge to guide its backtracking search for successful refinements. The VT architecture provides the basis for a knowledge representation that is used by SALT, an automated knowledge-acquisition tool. SALT was used to build VT and provides an analysis of VT's knowledge base to assess its potential for convergence on a solution.
Nonnegative Matrix Factorization With Regularizations
Matrix factorization techniques have been frequently applied in many fields. Among them, nonnegative matrix factorization (NMF) has received considerable attention because it aims to find parts-based, linear representations of nonnegative data. Recently, many researchers have proposed various manifold learning algorithms to enhance learning performance by considering the local manifold smoothness assumption. However, NMF does not consider the geometrical structure of data, and local manifold smoothness does not directly ensure that the representations of data points with different labels are dissimilar. In order to find a better representation of data, we propose a novel matrix decomposition method, called nonnegative matrix factorization with regularizations (RNMF), which incorporates three appropriate regularizations: nonnegative matrix factorization, local manifold smoothness, and a rank constraint. The representations of data learned by RNMF tend to be discriminative and sparse. By learning a Mahalanobis distance space based on labeled data, RNMF can also be extended to a semi-supervised algorithm (semi-RNMF), which yields a marked improvement in clustering performance. Our empirical study shows encouraging results for the proposed algorithm in comparison to state-of-the-art algorithms on real-world problems.
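For context, the sketch below implements plain NMF with the standard Lee-Seung multiplicative updates; RNMF adds manifold-smoothness and rank regularizers on top of this basic objective, and those regularized updates are not reproduced here.

```python
# Minimal sketch of plain NMF via Lee-Seung multiplicative updates; RNMF adds
# manifold-smoothness and rank regularisers on top of this basic objective,
# which are not reproduced here.
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    n, m = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((n, rank)), rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis (the "parts")
    return W, H

V = np.abs(np.random.rand(20, 30))
W, H = nmf(V, rank=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative reconstruction error
```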
Some justifications for the learning by disturbing strategy
Intelligent Tutoring Systems (ITS) are evolving towards a more cooperative relationship between the system and the student. More and more, learning is considered as a constructive process rather than a simple transfer of knowledge. This trend has brought to light new cooperative tutoring strategies. One of these tutoring strategies, the learning companion, designed to overcome some of the limitations of the classical tutoring model, involves a student and two simulated participants: a tutor and another student. More recently a new strategy, learning by disturbing, has been proposed. In this strategy, the simulated student is a troublemaker whose role is to deliberately disturb the human student. This article describes the learning by disturbing strategy by contrasting it with the learning companion strategy. In addition, some links are drawn between this new strategy and the psychology of learning, in particular the cognitive dissonance theory.
The Reformation in historical thought
The scope of this book includes the whole field of historical writing on the Reformation, from Luther to the present.
Assessment of atrial electromechanical coupling characteristics in patients with ankylosing spondylitis.
OBJECTIVE The aim of this study was to evaluate atrial conduction abnormalities obtained by Doppler tissue imaging (DTI) and electrocardiogram analysis in ankylosing spondylitis (AS) patients. METHODS A total of 40 patients with AS (22 males /18 females, 37.82 +/- 10.22 years), and 42 controls (22 males/20 females, 35.74 +/- 9.98 years) were included. Systolic and diastolic left ventricular (LV) functions were measured by using conventional echocardiography and DTI. Interatrial and intraatrial electromechanical coupling (PA) intervals were measured with DTI. P-wave dispersion (PD) was calculated from the 12-lead electrocardiogram. RESULTS Atrial electromechanical coupling at the left lateral mitral annulus (PA lateral) was significantly delayed in AS patients (61.65 +/- 7.81 vs 53.69 +/- 6.75 ms, P < 0.0001). Interatrial (PA lateral - PA tricuspid), intraatrial electromechanical coupling intervals (PA septum - PA tricuspid), maximum P-wave (Pmax) duration, and PD were significantly longer in AS patients (23.50 +/- 7.08 vs 14.76 +/- 5.69 ms, P < 0.0001; 5.08 +/- 5.24 vs 2.12 +/- 2.09 ms, P = 0.001; 103.85 +/- 6.10 vs 97.52 +/- 6.79 ms, P < 0.0001; and 48.65 +/- 6.17 vs 40.98 +/- 5.37 ms, P < 0.0001, respectively). Reflecting LV diastolic function mitral A-wave and E/A, mitral E-wave deceleration time (DT), Am and Em/Am were significantly different between the groups (P < 0.05). We found a significant correlation between interatrial electromechanical coupling interval with PD (r = 0.536, P < 0.01). Interatrial electromechanical coupling interval was positively correlated with DT (r = 0.422, P < 0.01) and inversely correlated with E/A (r =-0.263, P < 0.05) and Em/Am (r =-0.263, P < 0.05). CONCLUSION This study shows that atrial electromechanical coupling intervals and PD are delayed, and LV diastolic functions are impaired in AS patients.
Decomposition and oligomerization of 2,3-naphthyridine under high-pressure and high-temperature conditions
The chemical reaction of 2,3-naphthyridine, a nitrogen-containing aromatic compound, was investigated at pressures ranging from 0.5 to 1.5 GPa and temperatures from 473 to 573 K. A distinct decrease in the amount of residual 2,3-naphthyridine was observed in the samples recovered after reaction at >523 K at 0.5 and 1.0 GPa, and >548 K at 1.5 GPa. The formation of o-xylene and o-tolunitrile accompanied a decreasing N/C ratio of the reaction products, indicating decomposition of the aromatic ring and release of nitrogen. Precise analysis of the reaction products indicated the oligomerization of decomposed products with the residual 2,3-naphthyridine to form larger molecules up to 7-mers. Nitrogen in the aromatic ring accelerated reactions to decompose the molecule and to oligomerize at lower temperatures than those typically reported for aromatic hydrocarbon oligomerization. The major reaction mechanism was similar between 0.5 and 1.5 GPa, although larger products preferentially formed in the samples at higher pressure.
Effective Use of Word Order for Text Categorization with Convolutional Neural Networks
A convolutional neural network (CNN) is a neural network that can make use of the internal structure of data, such as the 2D structure of image data. This paper studies CNNs for text categorization to exploit the 1D structure (namely, word order) of text data for accurate prediction. Instead of using low-dimensional word vectors as input as is often done, we directly apply CNN to high-dimensional text data, which leads to directly learning embeddings of small text regions for use in classification. In addition to a straightforward adaptation of CNN from image to text, a simple but new variation which employs bag-of-word conversion in the convolution layer is proposed. An extension to combine multiple convolution layers is also explored for higher accuracy. The experiments demonstrate the effectiveness of our approach in comparison with state-of-the-art methods.
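A minimal sketch of the seq-CNN idea described above, with illustrative sizes: the convolution is applied directly to one-hot word vectors, so each filter effectively learns an embedding of a small text region; the bag-of-word variant and the multi-layer extension are not shown.

```python
# Minimal sketch of the seq-CNN idea: apply a 1-D convolution directly to
# one-hot word vectors (rather than pretrained embeddings), so each filter
# learns an embedding of a small text region; sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, region_size, n_filters, n_classes = 30000, 3, 500, 2
conv = nn.Conv1d(vocab_size, n_filters, kernel_size=region_size)
fc = nn.Linear(n_filters, n_classes)

word_ids = torch.randint(0, vocab_size, (4, 60))                    # a batch of documents
one_hot = F.one_hot(word_ids, vocab_size).float().transpose(1, 2)   # (batch, vocab, seq)
region_features = torch.relu(conv(one_hot))                         # (batch, filters, regions)
doc_vector = region_features.max(dim=2).values                      # max-pool over regions
logits = fc(doc_vector)
```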
Factors affecting the accurate placement of percutaneous pedicle screws during minimally invasive transforaminal lumbar interbody fusion
We retrospectively evaluated 488 percutaneous pedicle screws in 110 consecutive patients that had undergone minimally invasive transforaminal lumbar interbody fusion (MITLIF) to determine the incidence of pedicle screw misplacement and its relevant risk factors. Screw placements were classified based on postoperative computed tomographic findings as “correct”, “cortical encroachment” or as “frank penetration”. Age, gender, body mass index, bone mineral density, diagnosis, operation time, estimated blood loss (EBL), level of fusion, surgeon’s position, spinal alignment, quality/quantity of multifidus muscle, and depth to screw entry point were considered to be demographic and anatomical variables capable of affecting pedicle screw placement. Pedicle dimensions, facet joint arthritis, screw location (ipsilateral or contralateral), screw length, screw diameter, and screw trajectory angle were regarded as screw-related variables. Logistic regression analysis was conducted to examine relations between these variables and the correctness of screw placement. The incidence of cortical encroachment was 12.5% (61 screws), and frank penetration was found for 54 (11.1%) screws. Two patients (0.4%) with medial penetration underwent revision for unbearable radicular pain and foot drop, respectively. The odds ratios of significant risk factors for pedicle screw misplacement were 3.373 (95% CI 1.095–10.391) for obesity, 1.141 (95% CI 1.024–1.271) for pedicle convergent angle, 1.013 (95% CI 1.006–1.065) for EBL >400 cc, and 1.003 (95% CI 1.000–1.006) for cross-sectional area of multifidus muscle. Although percutaneous insertion of pedicle screws was performed safely during MITLIF, several risk factors should be considered to improve placement accuracy.
Components of Spatial Intelligence
This chapter identifies two basic components of spatial intelligence, based on analyses of performance on tests of spatial ability and on complex spatial thinking tasks in domains such as mechanics, chemistry, medicine, and meteorology. The first component is flexible strategy choice between mental imagery (or mental simulation more generally) and more analytic forms of thinking. Research reviewed here suggests that mental simulation is an important strategy in spatial thinking, but that it is augmented by more analytic strategies such as task decomposition and rule-based reasoning. The second is meta-representational competence [diSessa, A. A. (2004). Metarepresentation: Native competence and targets for instruction. Cognition and Instruction, 22, 293–331], which encompasses ability to choose the optimal external representation for a task and to use novel external representations productively. Research on this aspect of spatial intelligence reveals large individual differences in ability to adaptively choose and use external visual–spatial representations for a task. This research suggests that we should not just think of interactive external visualizations as ways of augmenting spatial intelligence, but also consider the types of intelligence that are required for their use.
A New Class of Discretization Methods for the Solution of Linear Differential-Algebraic Equations with Variable Coefficients
We discuss new discretization methods for linear differential–algebraic equations with variable coefficients. We introduce numerical methods to compute the local invariants of such differential–algebraic equations that were introduced by the authors in a previous paper. Using these quantities we are able to determine numerically global invariants such as the strangeness index, which generalizes the differentiation index for differential–algebraic equations that in particular include undetermined solution components. Based on these methods we then obtain regularization schemes, which allow us to employ general solution methods. The new methods are tested on a number of numerical examples.
From Collision To Exploitation: Unleashing Use-After-Free Vulnerabilities in Linux Kernel
Since vulnerabilities in the Linux kernel are on the increase, attackers have turned their attention to related exploitation techniques. However, compared with the numerous studies on exploiting use-after-free vulnerabilities in user applications, few efforts have studied how to exploit use-after-free vulnerabilities in the Linux kernel, mainly because of the difficulty posed by the uncertainty of the kernel memory layout. Without specific information leakage, attackers can only conduct a blind memory overwriting strategy, trying to corrupt critical parts of the kernel, for which the success rate is negligible. In this work, we present a novel memory collision strategy to exploit use-after-free vulnerabilities in the Linux kernel reliably. The insight of our exploit strategy is that a probabilistic memory collision can be constructed according to the widely deployed kernel memory reuse mechanisms, which significantly increases the success rate of the attack. Based on this insight, we present two practical memory collision attacks: an object-based attack that leverages the memory recycling mechanism of the kernel allocator to achieve freed vulnerable object covering, and a physmap-based attack that takes advantage of the overlap between the physmap and the SLAB caches to achieve more flexible memory manipulation. Our proposed attacks are universal across Linux kernels of different architectures and can successfully exploit systems with use-after-free vulnerabilities in the kernel. In particular, we achieve privilege escalation on various popular Android devices (kernel version >= 4.3), including those with 64-bit processors, by exploiting the CVE-2015-3636 use-after-free vulnerability in the Linux kernel. To our knowledge, this is the first generic kernel exploit for the latest version of Android. Finally, to defend against this kind of memory collision, we propose two corresponding mitigation schemes.
Weight loss associated with a daily intake of three apples or three pears among overweight women.
OBJECTIVE We investigated the effect of fruit intake on body weight change. METHODS Hypercholesterolemic, overweight (body mass index > 25 kg/m2), and non-smoking women, 30 to 50 y of age, were randomized to receive, free of charge, one of three dietary supplements: apples, pears, or oat cookies. Women were instructed to eat one supplement three times a day in a total of six meals a day. Participants (411 women) were recruited at a primary care center of the State University of Rio de Janeiro, Brazil. Fifty-one women had fasting blood cholesterol levels greater than 6.2 mM/L (240 mg/dL) and 49 were randomized. Subjects were instructed by a dietitian to eat a diet (55% of energy from carbohydrate, 15% from protein, and 30% from fat) to encourage weight reduction at the rate of 1 kg/mo. RESULTS After 12 wk of follow-up, the fruit group lost 1.22 kg (95% confidence interval = 0.44-1.85), whereas the oat group had a non-significant weight loss of 0.88 kg (0.37-2.13). The difference between the two groups was statistically significant (P = 0.004). To explore further the body weight loss associated with fruit intake, we measured the ratio of glucose to insulin. A significantly greater decrease of blood glucose was observed among those who had eaten fruits compared with those who had eaten oat cookies, but the glucose:insulin ratio was not statistically different from baseline to follow-up. Adherence to the diet was high, as indicated by changes in serum triacylglycerols, total cholesterol, and reported fruit intake. Fruit intake in the oat group throughout treatment was minimal. CONCLUSIONS Intake of fruits may contribute to weight loss.
A RESOURCE-BASED APPROACH TO PERFORMANCE AND COMPETITION: An Overview of the Connections between Resources and Competition
This paper extends the resource-based view of the firm to give an overview of the connections between resources and competition. Specifically, it develops a conceptual framework explaining competitive advantage and performance that incorporates the resource-based view of the firm and Porter’s approach to the competitive environment. On the basis of this framework, it shows how firms compete for resources and may use their resources to compete.
The Basic AI Drives
One might imagine that AI systems with harmless goals will be harmless. This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves. We then show that self-improving systems will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all systems to protect their utility functions from modification and their utility measurement systems from corruption. We also discuss some exceptional systems which will want to modify their utility functions. We next discuss the drive toward self-protection, which causes systems to try to prevent themselves from being harmed. Finally, we examine drives toward the acquisition of resources and toward their efficient utilization. We end with a discussion of how to incorporate these insights in designing intelligent technology which will lead to a positive future for humanity.
Automatic Wrinkle Detection Using Hybrid Hessian Filter
Aging as a natural phenomenon affects different parts of the human body under the influence of various biological and environmental factors. The most pronounced change that occurs on the face is the appearance of wrinkles, which are the focus of this research. Accurate wrinkle detection is an important task in face analysis. Several methods have been proposed in the literature, but poor localization limits the performance of wrinkle detection; it leads to false wrinkle detection and consequently affects processes such as age estimation and clinician score assessment. Therefore, we propose a hybrid Hessian filter (HHF) to cope with the identified problem. HHF is composed of the directional gradient and the Hessian matrix. The proposed filter is conceptually simple; however, it significantly improves true wrinkle localization when compared with conventional methods. In the experimental setup, three coders were instructed to annotate the wrinkles of 2D forehead images manually. The inter-rater reliability among the three coders is 93% in terms of the Jaccard similarity index (JSI). In comparison to the state-of-the-art Cula method (CLM) and the Frangi filter, HHF yielded the best result with a mean JSI of 75.67%. We noticed that the proposed method is capable of detecting medium to coarse wrinkles but not fine wrinkles. Although there is a gap between human annotation and automated detection, this work demonstrates that HHF is a remarkably strong filter for wrinkle detection. From the experimental results, we believe that our findings are notable in terms of the JSI.
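A minimal sketch of the Hessian part of such a filter, assuming SciPy is available and using a synthetic image with a single dark line as a stand-in for a wrinkle (this is not the published HHF, which additionally combines a directional gradient):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic "forehead" image with one dark horizontal line (stand-in for a wrinkle).
img = np.ones((64, 64))
img[32, 10:54] = 0.0

sigma = 2.0
# Second-order Gaussian derivatives -> Hessian components at every pixel.
Hxx = gaussian_filter(img, sigma, order=(0, 2))
Hyy = gaussian_filter(img, sigma, order=(2, 0))
Hxy = gaussian_filter(img, sigma, order=(1, 1))

# Eigenvalues of the 2x2 symmetric Hessian, computed in closed form.
tmp = np.sqrt(((Hxx - Hyy) / 2.0) ** 2 + Hxy ** 2)
lam1 = (Hxx + Hyy) / 2.0 + tmp
lam2 = (Hxx + Hyy) / 2.0 - tmp

# Simple ridge response: a strong positive dominant eigenvalue marks dark, line-like pixels.
ridge = np.maximum(lam1, 0.0)
print(ridge.argmax() // 64)   # row index of the strongest response (near 32 here)
```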
Argumentation Mining in Persuasive Essays and Scientific Articles from the Discourse Structure Perspective
This paper provides an analysis of some argumentation in a biomedical genetics research article as a step towards developing a corpus of articles annotated to support research on argumentation. We present a specification of several argumentation schemes and inter-argument relationships to be annotated.
An approach to the use of word embeddings in a Spanish text similarity task
In this paper we show how a vector representation of words based on word embeddings can help to improve the results in tasks focused on the semantic similarity of texts. Thus we have experimented with two methods that rely on the vector representation of words to calculate the degree of similarity of two texts, one based on the aggregation of vectors and the other one based on the calculation of alignments. The alignment method relies on the similarity of word vectors to determine the semantic link between them. The aggregation method allows us to construct vector representations of the texts from the individual vectors of each word. These representations are compared by means of two classic distance measures: Euclidean distance and cosine similarity. We have evaluated our systems with the corpus based on Wikipedia distributed in the competition of similarity of texts in Spanish of SemEval-2015. Our experiments show that the method based on the alignment of words performs much better, obtaining results that are very close to the best system at SemEval. The method based on vector representations of texts behaves substantially worse. However, this second approach seems to capture aspects of similarity not detected by the first one, as when the outputs of both systems are combined the results of the alignment method are surpassed, even exceeding the results of the best system at SemEval.
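To make the two strategies concrete, here is a minimal sketch with toy embeddings (the words and vectors are illustrative assumptions only; the paper would use pre-trained Spanish word embeddings):

```python
import numpy as np

# Toy word vectors (assumed; stand-ins for pre-trained embeddings).
emb = {
    "perro": np.array([0.9, 0.1, 0.0]), "can": np.array([0.85, 0.15, 0.05]),
    "corre": np.array([0.1, 0.9, 0.0]),
    "rapido": np.array([0.0, 0.2, 0.9]), "veloz": np.array([0.05, 0.25, 0.85]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def aggregate_similarity(s1, s2):
    """Aggregation method: average the word vectors of each text, then compare."""
    v1 = np.mean([emb[w] for w in s1], axis=0)
    v2 = np.mean([emb[w] for w in s2], axis=0)
    return cos(v1, v2)

def alignment_similarity(s1, s2):
    """Alignment method: align each word with its most similar word in the other text."""
    def directed(a, b):
        return np.mean([max(cos(emb[w], emb[v]) for v in b) for w in a])
    return 0.5 * (directed(s1, s2) + directed(s2, s1))

t1 = ["perro", "corre", "rapido"]
t2 = ["can", "corre", "veloz"]
print(aggregate_similarity(t1, t2), alignment_similarity(t1, t2))
```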
Generalized Simulated Annealing
We propose a new stochastic algorithm (generalized simulated annealing) for computationally finding the global minimum of a given (not necessarily convex) energy/cost function defined in a continuous D-dimensional space. This algorithm recovers, as particular cases, the so-called classical (“Boltzmann machine”) and fast (“Cauchy machine”) simulated annealings, and can be quicker than both. Key-words: Simulated annealing; Nonconvex optimization; Gradient descent; Generalized Statistical Mechanics.
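For orientation, a compact sketch of the classical annealing loop that the generalized scheme contains as a special case; the cost function, cooling schedule, and step size below are illustrative assumptions, and the generalized algorithm replaces the visiting and acceptance distributions with Tsallis-statistics analogues:

```python
import numpy as np

def anneal(cost, x0, n_iter=5000, t0=1.0, step=0.5, seed=0):
    """Minimal simulated annealing: Gaussian visiting distribution,
    Metropolis acceptance, 1/log cooling (classical Boltzmann-style annealing)."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, float), cost(x0)
    best_x, best_f = x.copy(), fx
    for k in range(1, n_iter + 1):
        T = t0 / np.log(k + 1.0)                      # cooling schedule
        cand = x + rng.normal(scale=step, size=x.shape)
        fc = cost(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

# Non-convex toy cost with many local minima.
rastrigin = lambda z: 10 * len(z) + sum(zi**2 - 10 * np.cos(2 * np.pi * zi) for zi in z)
print(anneal(rastrigin, [3.0, -2.5]))
```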
Human emotion and the uncanny valley: A GLM, MDS, and Isomap analysis of robot video ratings
The eerie feeling attributed to human-looking robots and animated characters may be a key factor in our perceptual and cognitive discrimination of the human and humanlike. This study applies regression, the generalized linear model (GLM), factor analysis, multidimensional scaling (MDS), and kernel isometric mapping (Isomap) to analyze ratings of 27 emotions of 18 moving figures whose appearance varies along a human likeness continuum. The results indicate (1) Attributions of eerie and creepy better capture our visceral reaction to an uncanny robot than strange. (2) Eerie and creepy are mainly associated with fear but also shocked, disgusted, and nervous. Strange is less strongly associated with emotion. (3) Thus, strange may be more cognitive, while eerie and creepy are more perceptual/emotional. (4) Human features increase ratings of human likeness. (5) Women are slightly more sensitive to eerie and creepy than men; and older people may be more willing to attribute human likeness to a robot despite its eeriness.
Retricoin: Bitcoin based on compact proofs of retrievability
Bitcoin [24] is a fully decentralized electronic cash system. The generation of the proof-of-work in Bitcoin requires large amount of computing resources. However, this huge amount of energy is wasted as one cannot make something useful out of it. In this paper, we propose a scheme called Retricoin which replaces the heavy computational proof-of-work of Bitcoin by proofs of retrievability that have practical benefits. To guarantee the availability of an important but large file, we distribute the segments of the file among the users in the Bitcoin network. Every user who wants to mine Bitcoins must store a considerable portion of this file and prove her storage to other peers in the network using proofs of retrievability. The file can be constructed at any point of time from the users storing their respective segments untampered. Retricoin is more efficient than the existing Permacoin scheme [23] in terms of storage overhead and network bandwidth required to broadcast the proof to the Bitcoin network. The verification time in our scheme is comparable to that of Permacoin and reasonable for all practical purposes. We also design an algorithm to let the miners in a group (or pool) mine collectively.
Learning to Learn from Weak Supervision by Full Supervision
In this paper, we propose a method for training neural networks when we have a large set of data with weak labels and a small amount of data with true labels. In our proposed model, we train two neural networks: a target network (the learner) and a confidence network (the meta-learner). The target network is optimized to perform a given task and is trained using a large set of weakly annotated data. We propose to control the magnitude of the gradient updates to the target network using the scores provided by the confidence network, which is trained on a small amount of supervised data. Thus we prevent weight updates computed from noisy labels from harming the quality of the target network model.
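A bare-bones sketch of the core idea, under the simplifying assumption of a linear target model trained by SGD; the confidence function below is a stand-in for the trained confidence network, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
w = np.zeros(d)                                   # target model: linear regressor

def confidence(x, weak_y):
    """Stand-in for the confidence (meta-learner) network's score in [0, 1]."""
    return 1.0 / (1.0 + abs(weak_y) * 0.1)        # illustrative only

for _ in range(1000):
    x = rng.normal(size=d)
    weak_y = x @ np.array([1., 2., 0., -1., 0.5]) + rng.normal(scale=2.0)  # noisy label
    pred = w @ x
    grad = (pred - weak_y) * x                    # gradient of the squared loss
    c = confidence(x, weak_y)
    w -= 0.01 * c * grad                          # confidence-scaled update
print(np.round(w, 2))
```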
Neural Collaborative Filtering
In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively little scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation — collaborative filtering — on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, it primarily used deep learning to model auxiliary information, such as textual descriptions of items and acoustic features of music. When it comes to modeling the key factor in collaborative filtering — the interaction between user and item features — such work still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user–item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance.
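A toy forward pass (random parameters, hypothetical user/item IDs, NumPy only) showing the shape of the idea: user and item embeddings are concatenated and passed through an MLP instead of being combined with an inner product; this is a sketch, not the published NCF architecture:

```python
import numpy as np

rng = np.random.default_rng(42)
n_users, n_items, k = 100, 50, 8
P = rng.normal(scale=0.1, size=(n_users, k))      # user embeddings
Q = rng.normal(scale=0.1, size=(n_items, k))      # item embeddings
W1 = rng.normal(scale=0.1, size=(2 * k, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1));     b2 = np.zeros(1)

def ncf_score(u, i):
    """MLP over the concatenated user/item embeddings (replaces the inner product)."""
    z = np.concatenate([P[u], Q[i]])
    h = np.maximum(0.0, z @ W1 + b1)              # hidden layer with ReLU
    logit = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logit[0]))        # predicted interaction probability

print(ncf_score(3, 7))
```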
Courses: Innovation in Education?
Researchers have extensively chronicled the trends and challenges in higher education (Altbach et al. 2009). MOOCs appear to be as much about the collective grasping of universities’ leaders to bring higher education into the digital age as they are about a particular method of teaching. In this chapter, I won’t spend time commenting on the role of MOOCs in educational transformation or even why attention to this mode of delivering education has received unprecedented hype (rarely has higher education as a system responded as rapidly to a trend as it has responded to open online courses). Instead, this chapter details different MOOC models and the underlying pedagogy of each.
Supplementary (abuse) cards for the family relations test
The Bene–Anthony Family Relations Test is a potentially valuable clinical technique in the assessment of young children suspected of having been abused. Fourteen supplementary cards have been developed which offer the possibility of a narrow investigative focus, as their content points more directly to abusive behaviour. Used carefully, it is considered that these cards, within the context of the Family Relations Test, provide a young child with a non-threatening, non-directive and age-appropriate context within which to relate their experiences.
Strong higher order mutation-based test data generation
This paper introduces SHOM, a mutation-based test data generation approach that combines Dynamic Symbolic Execution and Search Based Software Testing. SHOM targets strong mutation adequacy and is capable of killing both first and higher order mutants. We report the results of an empirical study using 17 programs, including production industrial code from ABB and Daimler and open source code as well as previously studied subjects. SHOM achieved higher strong mutation adequacy than two recent mutation-based test data generation approaches, killing between 8% and 38% of those mutants left unkilled by the best performing previous approach.
A Search Engine for Mathematical Formulae
We present a search engine for mathematical formulae. The MathWebSearch system harvests the web for content representations (currently MathML and OpenMath) of formulae and indexes them with substitution tree indexing, a technique originally developed for accessing intermediate results in automated theorem provers. For querying, we present a generic language extension approach that allows constructing queries by minimally annotating existing representations. First experiments show that this architecture results in a scalable application.
Clinical implications of provocation tests for coronary artery spasm: safety, arrhythmic complications, and prognostic impact: multicentre registry study of the Japanese Coronary Spasm Association.
AIMS Provocation tests of coronary artery spasm are useful for the diagnosis of vasospastic angina (VSA). However, these tests are thought to have a potential risk of arrhythmic complications, including ventricular tachycardia (VT), ventricular fibrillation (VF), and brady-arrhythmias. We aimed to elucidate the safety and the clinical implications of the spasm provocation tests in the nationwide multicentre registry study by the Japanese Coronary Spasm Association. METHODS AND RESULTS A total of 1244 VSA patients (M/F, 938/306; median 66 years) who underwent the spasm provocation tests were enrolled from 47 institutes. The primary endpoint was defined as major adverse cardiac events (MACEs). The provocation tests were performed with either acetylcholine (ACh, 57%) or ergonovine (40%). During the provocation tests, VT/VF and brady-arrhythmias developed at a rate of 3.2 and 2.7%, respectively. Overall incidence of arrhythmic complications was 6.8%, a comparable incidence of those during spontaneous angina attack (7.0%). Multivariable logistic regression analysis demonstrated that diffuse right coronary artery spasm (P < 0.01) and the use of ACh (P < 0.05) had a significant correlation with provocation-related VT/VF. During the median follow-up of 32 months, 69 patients (5.5%) reached the primary endpoint. The multivariable Cox proportional hazard model revealed that mixed (focal plus diffuse) type multivessel spasm had an important association with MACEs (adjusted hazard ratio, 2.84; 95% confidence interval, 1.34-6.03; P < 0.01), whereas provocation-related arrhythmias did not. CONCLUSION The spasm provocation tests have an acceptable level of safety and the evaluation of spasm type may provide useful information for the risk prediction of VSA patients.
Describing Clothing by Semantic Attributes
Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothes on human body in unconstrained images. We extract low-level features in a pose-adaptive manner, and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset, and introduce a novel application of dressing style analysis that utilizes the semantic attributes produced by our system.
Reliability and validity of the online continuous performance test among young adults.
Continuous Performance Tests (CPTs) are used in research and clinical contexts to measure sustained attention and response inhibition. Reliability and validity of a new Online Continuous Performance Test (OCPT) was assessed. The OCPT is designed for delivery over the Internet, thereby opening new opportunities for research and clinical application in naturalistic settings. In Study 1, participants completed the OCPT twice over a 1-week period. One test was taken at home and one in the laboratory. Construct validity was assessed against a gold standard CPT measure. Results indicate acceptable reliability between the home- and laboratory-administered tests. Modest to high correlations were observed between the OCPT scales and the corresponding scales of the gold standard CPT. Study 2 examined whether the OCPT may discriminate participants with attention deficit hyperactivity disorder from healthy controls. Results revealed significantly higher rates of omission and commission errors and greater response time variability in participants with attention deficit hyperactivity disorder relative to healthy controls. These results support the reliability and validity of the OCPT and suggest that it may serve as an effective tool for the assessment of attention function in naturalistic settings.
Regulation of thyroid hormone sensitivity by differential expression of the thyroid hormone receptor during Xenopus metamorphosis.
During amphibian metamorphosis, a series of dynamic changes occur in a predetermined order. Hind limb morphogenesis begins in response to low levels of thyroid hormone (TH) in early prometamorphosis, but tail muscle cell death is delayed until climax, when TH levels are high. It takes about 20 days for tadpoles to grow from early prometamorphosis to climax. To study the molecular basis of the timing of tissue-specific transformations, we introduced thyroid hormone receptor (TR) expression constructs into tail muscle cells of Xenopus tadpoles. The TR-transfected tail muscle cells died upon exposure to a low level of thyroxine (T4). This cell death was suggested to be mediated by type 2 iodothyronine deiodinase (D2) that converts T4 to T3-the more active form of TH. D2 mRNA was induced in the TR-overexpressing cells by low levels of TH. D2 promoter contains a TH-response element (TRE) with a lower affinity for TR. These results show that the TR transfection confers the ability to respond to physiological concentrations of TH at early prometamorphosis to tail muscle cells through D2 activity and promotes TH signaling. We propose the positive feedback loop model to amplify the cell's ability to respond to low levels of T4.
Synergistic contribution of CD14 and HLA loci in the susceptibility to Buerger disease
Buerger disease (BD) is an occlusive vascular disease of unknown etiology. Although cigarette smoking is a well-known risk factor for BD, genetic factors may also play a role in the etiology. Because chronic bacterial infection such as oral periodontitis is suggested to be involved in the pathogenesis of BD, gene polymorphisms involved in infectious immunity might be associated with BD as genetic factor(s). We have previously reported that HLA-DRB1*1501 and B54 were associated with BD in Japanese. In this study, polymorphisms in HLA-DPB1, DRB1 and B were analyzed in 131 Japanese BD patients and 227 healthy controls. In addition, we investigated a functional promoter polymorphism, −260 C > T, of CD14, which is a main receptor of bacterial lipopolysaccharide. It was found that the frequencies of the CD14 TT genotype [37.4 vs. 24.2%, P = 0.008, OR = 1.87, 95% confidence interval (CI); 1.18, 2.97], DRB1*1501 (34.4 vs. 13.2%, P c = 4.4 × 10^-5, OR = 3.44, 95%CI; 2.06, 5.73) and DPB1*0501 (79.4 vs. 55.1%, P c = 4.7 × 10^-5, OR = 3.14, 95%CI; 1.93, 5.11) were significantly higher in the patients than in the controls, demonstrating that at least three genetic markers were associated with BD. Stratification analyses of these associated markers suggested synergistic roles of the genetic factors. Odds ratios ranged from 4.72 to 12.57 in individuals carrying any two of these three markers. These findings suggested that the susceptibility to BD was in part controlled by genes involved in the innate and adaptive immunity.
Gérard Béaur, Phillip Schofield, Jean-Michel Chevet & María Teresa Pérez Picazo (dir.), Property Rights, Land Markets, and Economic Growth in the European Countryside (Thirteenth-Twentieth Centuries)
The aim of this book is to offer a critique of institutionalist theories, in particular the model defined by Douglass North (1973, 1989), which emphasizes the importance of institutions for economic development or for agricultural backwardness. The disappearance of the old system of access to land and the appearance of "perfect" property rights, accompanied by complementary institutions, are held to have been the necessary conditions for the rise of a free land market and...
End-to-end optimization of goal-driven and visually grounded dialogue systems
End-to-end design of dialogue systems has recently become a popular research topic thanks to powerful tools such as encoder-decoder architectures for sequence-to-sequence learning. Yet, most current approaches cast human-machine dialogue management as a supervised learning problem, aiming at predicting the next utterance of a participant given the full history of the dialogue. This vision may fail to correctly render the planning problem inherent to dialogue as well as its contextual and grounded nature. In this paper, we introduce a Deep Reinforcement Learning method to optimize visually grounded task-oriented dialogues, based on the policy gradient algorithm. This approach is tested on the question generation task from the dataset GuessWhat?! containing 120k dialogues and provides encouraging results at solving both the problem of generating natural dialogues and the task of discovering a specific object in a complex image.
Dealing with the evaluation of supervised classification algorithms
Performance assessment of a learning method related to its prediction ability on independent data is extremely important in supervised classification. This process provides the information to evaluate the quality of a classification model and to choose the most appropriate technique to solve the specific supervised classification problem at hand. This paper aims to review the most important aspects of the evaluation process of supervised classification algorithms. Thus the overall evaluation process is put in perspective to lead the reader to a deep understanding of it. Additionally, different recommendations about their use and limitations as well as a critical view of the reviewed methods are presented according to the specific characteristics of the supervised classification problem scenario.
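As one concrete instance of such an evaluation protocol, stratified k-fold cross-validation estimates prediction ability on held-out data; the sketch below assumes scikit-learn is available and uses a bundled toy dataset purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# 5-fold stratified cross-validation: each fold preserves the class proportions.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(scores.mean(), scores.std())   # estimate of accuracy and its variability
```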
A Choice of Future m2m Access Technologies for Mobile Network Operators
It is predicted that by the early years of the next decade over 20 billion devices will be wirelessly connected in the Internet of Things (IoT). Many of these will use short-range wireless systems such as Bluetooth Smart, Wi-Fi or Zigbee, but, if so, they will depend on some private infrastructure being in-place, accessible and reliable. A ubiquitous public cellular network that was easy to use, penetrated deeply into almost all locations, and allowed for truly low-cost/low-energy devices capable of operating for years on a small battery, would be of enormous benefit. It would serve many existing machine-to-machine (m2m) applications such as metering, remote sensing, and telemetry; but more importantly would fuel the rapid development of the mass Internet of Things market by providing reliable and accessible connectivity for even the most low-cost/low-energy device. It would be a platform for substantial revenue growth for mobile network operators globally. Today's cellular networks have a few shortcomings in relation to the new demands from IoT. Whilst existing cellular technologies give in-building service they do not provide sufficiently deep coverage for some m2m applications such as metering. No current cellular technology (Rel-11 and earlier) can support very long terminal operating life on a small battery. Today, cellular GSM/GPRS comes closest to serving this market but does not sufficiently provide all characteristics of the ubiquitous cellular network for IoT. LTE, the latest cellular radio access technology, has been designed from the ground up to provide efficient mobile broadband data communications. Both LTE and UMTS/HSPA devices in their current forms are significantly more expensive than GSM/GPRS. This White Paper discusses two alternative approaches to address these concerns: an evolution of LTE; or the development of a dedicated new radio access technology. Either approach must combine the following characteristics: • Use licensed spectrum to allow controlled quality of service and provide global coverage, ideally over existing cellular bands using existing sites, transceivers and antennas • Support deep coverage for low-rate services into highly-shadowed locations such as basements, meter closets, manholes and even under ground • Support low-cost devices that could even be disposable. • Provide an adapted IPR licensing regime based on the FRAND principles and reflecting the reduced functionality that the new standard will provide for the specific M2M market (low cost, high volume). • Support very low device energy consumption allowing devices to operate for a decade or more on small primary batteries without …
Building a Data Collection for Deception Research
Research in high stakes deception has been held back by the sparsity of ground truth verification for data collected from real world sources. We describe a set of guidelines for acquiring and developing corpora that will enable researchers to build and test models of deceptive narrative while avoiding the problem of sanctioned lying that is typically required in a controlled experiment. Our proposals are drawn from our experience in obtaining data from court cases and other testimony, and uncovering the background information that enabled us to annotate claims made in the narratives as true or false.
A deep neural network approach for sentence boundary detection in broadcast news
This paper presents a deep neural network (DNN) approach to sentence boundary detection in broadcast news. We extract prosodic and lexical features at each inter-word position in the transcripts and learn a sequential classifier to label these positions as either boundary or non-boundary. This work is realized by a hybrid DNN-CRF (conditional random field) architecture. The DNN accepts prosodic feature inputs and non-linearly maps them into boundary/non-boundary posterior probability outputs. Subsequently, the posterior probabilities are combined with lexical features and the integrated features are modeled by a linear-chain CRF. The CRF finally labels the inter-word positions as boundary or non-boundary by Viterbi decoding. Experiments show that, as compared with the state-of-the-art DT-CRF approach [1], the proposed DNN-CRF approach achieves 16.7% and 4.1% reduction in NIST boundary detection error in reference and speech recognition transcripts, respectively.
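The final labelling step described above is standard Viterbi decoding over a linear chain; a generic sketch with toy scores (not the paper's features or trained model) is:

```python
import numpy as np

def viterbi(emission, transition):
    """emission[t, s]: score of state s at position t; transition[s, s']: score of s -> s'."""
    T, S = emission.shape
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    delta[0] = emission[0]
    for t in range(1, T):
        cand = delta[t - 1][:, None] + transition + emission[t][None, :]
        back[t] = cand.argmax(axis=0)      # best previous state for each current state
        delta[t] = cand.max(axis=0)
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two states: 0 = non-boundary, 1 = boundary (toy posteriors from a hypothetical DNN).
emission = np.log(np.array([[0.9, 0.1], [0.4, 0.6], [0.8, 0.2], [0.2, 0.8]]))
transition = np.log(np.array([[0.7, 0.3], [0.9, 0.1]]))   # boundaries rarely adjacent
print(viterbi(emission, transition))
```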
Emotional Integration and Advertising Effectiveness : Case of Perfumes Advertising
This paper examines emotions in advertising, their effects and functioning. Using an interview-based experiment on 256 participants, we found that emotions perceived during advertising exposure can play an important role in eliciting responses towards the ad and the brand. However, this holds only provided that consumers associate the perceived emotion with the exposed brand or its consumption experience. Furthermore, we have identified efficiency differences between magazine ads, depending on how they visually describe emotions. In particular, we study emotional integration in advertising, i.e. the salience of emotions expressed in the ad, the presence of core brand information, and the clarity of the advertising message about the hedonic attributes of consumption. Interestingly, the impact of the staging process of emotions is moderated by respondent-specific variables, including their need for emotion, tolerance for ambiguity, need for structure, and need for cognition. Keywords: Emotional Reactions; Emotional Integration; Need for Emotion; Tolerance for Ambiguity; Need for Structure; Need for Cognition; Advertising Effectiveness.
Boosting Response Aware Model-Based Collaborative Filtering
Recommender systems are promising for providing personalized services. Collaborative filtering (CF) technologies, which predict users' preferences based on their previous behaviors, have become one of the most successful techniques for building modern recommender systems. Several challenging issues occur in previously proposed CF methods: (1) most CF methods ignore users' response patterns and may yield biased parameter estimation and suboptimal performance; (2) some CF methods adopt heuristic weight settings, which lack a systematic implementation; and (3) multinomial mixture models may weaken the computational ability of matrix factorization for generating the data matrix, thus increasing the computational cost of training. To resolve these issues, we incorporate users' response models into probabilistic matrix factorization (PMF), a popular matrix factorization CF model, to establish the response-aware probabilistic matrix factorization (RAPMF) framework. More specifically, we model the user response as a Bernoulli distribution parameterized by the rating scores for the observed ratings, and as a step function for the unobserved ratings. Moreover, we speed up the algorithm with a mini-batch implementation and a carefully crafted scheduling policy. Finally, we design different experimental protocols and conduct a systematic empirical evaluation on both synthetic and real-world datasets to demonstrate the merits of the proposed RAPMF and its mini-batch implementation.
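For orientation, a bare-bones sketch of the underlying probabilistic matrix factorization update that RAPMF extends with a response model; the toy ratings and hyper-parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k, lam, lr = 20, 15, 4, 0.05, 0.02
U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))

# Toy observed ratings: (user, item, rating). RAPMF would additionally model
# *why* these particular entries were observed (the response pattern).
ratings = [(0, 1, 4.0), (0, 3, 2.0), (5, 1, 5.0), (7, 9, 3.0), (12, 14, 1.0)]

for epoch in range(200):
    for u, i, r in ratings:
        err = r - U[u] @ V[i]
        U[u] += lr * (err * V[i] - lam * U[u])   # gradient step on the squared error
        V[i] += lr * (err * U[u] - lam * V[i])   # with L2 regularization (Gaussian priors)

print(round(U[0] @ V[1], 2))   # reconstructed rating for an observed pair
```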
Comprehensive Dataset of Broadcast Soccer Videos
In the absence of a rich dataset, sports video analysis remains a challenging task. Shot analysis, event detection, and player tracking are important aspects of soccer game research, however currently available datasets focus mainly on single-type annotations, and have several limitations when applied to full video analysis; these include a lack of training samples, temporal annotations, and rich labels of different types. In this paper, we focus on broadcast soccer videos and present a comprehensive dataset for analysis. In its current version, the dataset contains 222 broadcast soccer videos, totaling 170 video hours. The dataset covers three annotation types: (1) A shot boundary with two shot transition types, and shot type annotations with five shot types; (2) Event annotations with 11 event labels, and a story annotation with 15 story labels at coarser granularity; and (3) bounding boxes of the players under analysis in a subset of 19908 video frames. We hope that the introduction of this dataset will enable the research community to develop soccer video analysis techniques.
Rapid diagnosis of tuberculosis using Xpert MTB/RIF assay - Report from a developing country
OBJECTIVE To evaluate the diagnostic accuracy of the Xpert MTB/RIF assay for the detection of M. tuberculosis in pulmonary and extrapulmonary specimens and to compare it with conventional techniques. METHODS During a period of 10 months from December 2012 through September 2013, 245 clinically suspected TB patients were enrolled for the Xpert MTB/RIF assay. The cohort comprised 205 suspected cases of pulmonary TB and 40 of extrapulmonary TB (EPTB). The 40 EPTB samples included pus aspirated from different sites of the body (n=19), pleural fluid (n=11), ascitic fluid (n=7), and one each of pericardial fluid, CSF, and urine. Ziehl-Neelsen (ZN) stained smear microscopy, culture on LJ media, and the Xpert MTB/RIF assay were performed on samples from these patients. RESULTS M. tuberculosis (MTB) was detected by the Xpert MTB/RIF test in 111 (45.3%) out of 245 samples. Of these, 85 (34.7%) were smear positive on ZN staining and 102 (41.6%) were positive on LJ cultures. Rifampicin resistance was detected in 16 (6.5%) patients. Nine out of 19 pus samples (47.3%) were positive for MTB by GeneXpert, 3 (15.8%) on ZN staining and 4 (21%) on LJ culture. MTB could not be detected in any other extrapulmonary sample. CONCLUSION Xpert MTB/RIF is a sensitive method for rapid diagnosis of tuberculosis, especially in smear-negative cases and in EPTB, as compared to conventional ZN staining. Among EPTB cases the highest yield of positivity was in pus samples. For countries endemic for TB, GeneXpert can serve as a sensitive and time-saving diagnostic modality for pulmonary TB and EPTB.
Knowledge Management Applied to E-government Services: The Use of an Ontology
This paper is about the development and use of an ontology of e-government services. We identify the knowledge required to deliver e-government transaction services. Based on the SmartGov project, we describe the use of a domain map to assist in knowledge management and motivate the use of an ontology as a domain map. We describe the development of the e-government service ontology and give a few examples of its definitions. We explain why the SmartGov project has adopted taxonomies, derived from the ontology, as its domain map. We highlight issues in ontology development and maintenance.
Relationship Between Achievement Goal Orientations and Use of Learning Strategies
This study aims to identify students' achievement goal orientations, the learning strategies they use, and the relationship between goal orientations and learning strategies. The sample included 189 students taking an educational psychology course at the undergraduate level. They filled out a questionnaire on goal orientations and learning strategies. Results indicate that the students are very close to mastery orientation and somewhat ego-social as a whole. Students use deep cognitive strategies often, while they use surface and metacognitive strategies sometimes. Mastery orientation predicts the use of deep cognitive and metacognitive strategies, but when such an orientation is salient, less surface cognitive strategy use is expected. Ego-social orientation predicts surface cognitive strategy use, but does not relate to deep and metacognitive strategy use at all. Finally, work-avoidant orientation negatively correlated with both deep cognitive and metacognitive strategy use.
An implementation study of the AODV routing protocol
The Ad hoc On-Demand Distance Vector (AODV) routing protocol is designed for use in ad hoc mobile networks. Because of the difficulty of testing an ad hoc routing protocol in a real-world environment, a simulation was first created so that the protocol design could be tested in a variety of scenarios. Once simulation of the protocol was nearly complete, the simulation was used as the basis for an implementation in the Linux operating system. In the course of converting the simulation into an implementation, certain modifications were needed in AODV and the Linux kernel, due both to simplifications made in the simulation of AODV and to incompatibilities of the Linux kernel and the IP layer with routing in a mobile environment. This paper details many of the changes that were necessary during the development of the implementation.
The cultural contagion of conflict.
Anecdotal evidence abounds that conflicts between two individuals can spread across networks to involve a multitude of others. We advance a cultural transmission model of intergroup conflict where conflict contagion is seen as a consequence of universal human traits (ingroup preference, outgroup hostility; i.e. parochial altruism) which give their strongest expression in particular cultural contexts. Qualitative interviews conducted in the Middle East, USA and Canada suggest that parochial altruism processes vary across cultural groups and are most likely to occur in collectivistic cultural contexts that have high ingroup loyalty. Implications for future neuroscience and computational research needed to understand the emergence of intergroup conflict are discussed.
Probabilistic Adaptive Computation Time
We present a probabilistic model with discrete latent variables that control the computation time in deep learning models such as ResNets and LSTMs. A prior on the latent variables expresses the preference for faster computation. The amount of computation for an input is determined via amortized maximum a posteriori (MAP) inference. MAP inference is performed using a novel stochastic variational optimization method. The recently proposed Adaptive Computation Time mechanism can be seen as an ad-hoc relaxation of this model. We demonstrate training using the general-purpose Concrete relaxation of discrete variables. Evaluation on ResNet shows that our method matches the speed-accuracy trade-off of Adaptive Computation Time, while allowing for evaluation with a simple deterministic procedure that has a lower memory footprint.
Gradient Pursuits
Sparse signal approximations have become a fundamental tool in signal processing with wide-ranging applications from source separation to signal acquisition. The ever-growing number of possible applications and, in particular, the ever-increasing problem sizes now addressed lead to new challenges in terms of computational strategies and the development of fast and efficient algorithms has become paramount. Recently, very fast algorithms have been developed to solve convex optimization problems that are often used to approximate the sparse approximation problem; however, it has also been shown, that in certain circumstances, greedy strategies, such as orthogonal matching pursuit, can have better performance than the convex methods. In this paper, improvements to greedy strategies are proposed and algorithms are developed that approximate orthogonal matching pursuit with computational requirements more akin to matching pursuit. Three different directional optimization schemes based on the gradient, the conjugate gradient, and an approximation to the conjugate gradient are discussed, respectively. It is shown that the conjugate gradient update leads to a novel implementation of orthogonal matching pursuit, while the gradient-based approach as well as the approximate conjugate gradient methods both lead to fast approximations to orthogonal matching pursuit, with the approximate conjugate gradient method being superior to the gradient method.
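To make the algorithm family concrete, here is a plain (non-orthogonal) matching pursuit loop; the gradient and conjugate-gradient pursuits described above replace the single-coefficient update with a directional update over all selected coefficients. The dictionary and signal are synthetic:

```python
import numpy as np

def matching_pursuit(D, y, n_iter=10):
    """D: dictionary with unit-norm columns, y: signal. Greedy sparse approximation."""
    r = y.copy()
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        c = D.T @ r                      # correlations of the residual with every atom
        j = int(np.argmax(np.abs(c)))    # best-matching atom
        x[j] += c[j]                     # update that coefficient only
        r -= c[j] * D[:, j]              # subtract its contribution from the residual
    return x, r

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
x_true = np.zeros(256); x_true[[3, 100, 200]] = [1.0, -2.0, 0.5]
y = D @ x_true
x_hat, resid = matching_pursuit(D, y, n_iter=30)
print(np.linalg.norm(resid))             # residual norm shrinks as atoms are selected
```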
Cellular HIV-1 DNA quantification and short-term and long-term response to antiretroviral therapy.
BACKGROUND The aim of our study was to determine whether HIV-1 DNA level before antiretroviral therapy (ART) was associated with short- and long-term virological and immunological responses. METHODS Patients starting first-line protease inhibitor-containing regimens were enrolled in a prospective multicentre cohort in 1998-99. HIV-1 DNA was quantified using real-time PCR at baseline and after 1 year of ART. The association between HIV-1 DNA and virological and immunological responses after 1 and 7 years on ART was studied in multivariate regression models along with other biological and clinical variables. Virological failure (VF) at month 12 (M12) was defined as a plasma HIV-1 RNA >500 copies/mL. Time to death or two plasma HIV-1 RNA >500 copies/mL between M12 and M84 was studied for long-term VF. RESULTS HIV-1 DNA levels were measured in 148 patients. The median baseline peripheral blood mononuclear cell (PBMC) HIV-1 DNA was 3.7 log(10) copies/10(6) PBMCs. At M12, the median PBMC HIV-1 DNA was 2.99 log(10) copies/10(6) PBMCs. The median decrease in PBMC HIV-1 DNA between M0 and M12 was -0.7 log(10) copies/10(6) PBMCs. Higher baseline PBMC HIV-1 DNA and plasma HIV-1 RNA were independently associated with a higher risk of VF at M12. Only the baseline plasma HIV-1 RNA was independently associated with long-term virological response. The baseline CD4 cell count was the only parameter associated with short- and long-term immunological responses. CONCLUSIONS HIV-1 DNA impacted the virological response in our cohort. Further research is warranted to study the impact of HIV-1 DNA with currently recommended first-line cART.
Validation of the Modified Fatigue Impact Scale in Parkinson's disease.
INTRODUCTION Fatigue is a common symptom in Parkinson's disease (PD); however, a multidimensional scale that measures the impact of fatigue on functioning has yet to be validated in this population. The aim of this study was to examine the validity of the Modified Fatigue Impact Scale (MFIS), a self-report measure that assesses the effects of fatigue on physical, cognitive, and psychosocial functioning, in a sample of nondemented PD patients. METHODS PD patients (N = 100) completed the MFIS, the Positive and Negative Affect Schedule (PANAS-X), and several additional measures of psychosocial, cognitive, and motor functioning. A Principal Component Analysis (PCA) and item analysis using Cronbach's alpha were conducted to determine structural validity and internal consistency of the MFIS. Correlational analyses were performed between the MFIS and the PANAS-X fatigue subscale to evaluate convergent validity and between the MFIS and measures of depression, anxiety, apathy, and disease-related symptoms to determine divergent validity. RESULTS The PCA identified two viable MFIS subscales: a cognitive subscale and a combination of the original scale's physical and psychosocial subscales as one factor. Item analysis revealed high internal consistency of all 21 items and the items within the two subscales. The MFIS had strong convergent validity with the PANAS-X fatigue subscale and adequate divergent validity with measures of disease stage, motor function, and cognition. CONCLUSION Overall, this study demonstrates that the MFIS is a valid multidimensional measure that can be used to evaluate the impact of fatigue on cognitive and physical/social functioning in PD patients without dementia.
A 77-GHz SiGe frequency multiplier (×18) for radar transceivers
For 77-GHz automotive radar applications, a monolithic frequency multiplier with a multiplication factor of 18 is presented. The main circuit of the multiplier chain consists of two frequency triplers and one doubler. Additionally, interstage amplifiers and filters are integrated in a 200-GHz SiGe:C production technology. The output power is −1 dBm for a wide input power range (−20 dBm to +8 dBm) at room temperature and 76.5 GHz output frequency. The output power flatness is better than 2 dB for an output frequency range of 69 GHz to 80 GHz. The power consumption of the multiplier is 170 mW at a single supply voltage of 3.3 V.
Risk and prognostic factors for Japanese patients with chronic graft-versus-host disease after bone marrow transplantation
The incidence and prognostic factors for chronic graft-versus-host disease (cGVHD) were evaluated for 255 Japanese patients who survived more than 100 days after bone marrow transplantation, of whom 119 (47%) developed cGVHD. Prior acute GVHD (grade 2–4) and use of an unrelated donor were significantly associated with the onset of cGVHD. Presence of cGVHD did not have an impact on mortality (hazard ratio (HR)=0.89; 95% confidence interval (CI), 0.59–1.3). Three factors at diagnosis were associated with cGVHD-specific survival: presence of infection (HR=4.1; 95% CI, 1.6–10.3), continuing use of corticosteroids at the onset of cGVHD (HR=3.9; 95% CI, 1.7–9.1), and a Karnofsky performance score <80 (HR=4.7; 95% CI, 2.0–11.3). The probability of cGVHD-specific survival at 4 years was 79% (95% CI, 70–86%). The severity and death rate of Japanese patients with cGVHD were lower than those reported for Western populations, which might be the result of the greater genetic homogeneity of the Japanese population. Our patients could not be accurately classified when the proposed prognostic models from Western countries were used, thus indicating the need for a different model to identify high-risk patients.
IoT based smart greenhouse
This work is primarily about the improvement of current agricultural practices by using modern technologies for better yield. This work provides a model of a smart greenhouse, which helps farmers to carry out the work in a farm automatically without much manual inspection. A greenhouse [1], [2], being a closed structure, protects the plants from extreme weather conditions, namely wind, hailstorm, ultraviolet radiation, and insect and pest attacks. The irrigation of the agricultural field is carried out using automatic drip irrigation, which operates according to a soil moisture threshold set so that an optimal amount of water is applied to the plants. Based on data from the soil health card, proper amounts of nitrogen, phosphorus, potassium, and other minerals can be applied by using drip fertigation techniques. Proper water management tanks are constructed, and they are filled with water after measuring the current water level using an ultrasonic sensor. Plants are also provided light of the requisite wavelength during the night using growing lights. Temperature and air humidity are monitored by humidity and temperature sensors, and a fogger is used to control them. A tube well is controlled using a GSM [3] module (missed call or SMS). Bee-hive boxes are deployed for pollination, and the boxes are monitored using ultrasonic sensors to measure honey levels and send emails to the buyers when they are filled. Further, the readings collected from storage containers are uploaded to a cloud service (Google Drive) and can be forwarded to an e-commerce company.
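A minimal sketch of the threshold-based drip irrigation logic described above; the sensor and valve helpers and the moisture set point are hypothetical placeholders, not part of the original system:

```python
# Hypothetical sensor/actuator helpers -- on real hardware these would wrap GPIO
# or an ADC driver; the names and threshold here are illustrative assumptions only.
SOIL_MOISTURE_THRESHOLD = 35.0   # percent, assumed crop-specific set point

def read_soil_moisture():
    """Placeholder for an actual soil moisture sensor read (percent)."""
    return 30.0

def set_drip_valve(open_valve):
    """Placeholder for an actual relay/valve driver."""
    print("valve", "OPEN" if open_valve else "CLOSED")

def irrigation_loop(cycles=3):
    """Open the drip valve whenever moisture falls below the set point."""
    for _ in range(cycles):          # a real controller would loop forever with a delay
        moisture = read_soil_moisture()
        set_drip_valve(moisture < SOIL_MOISTURE_THRESHOLD)

irrigation_loop()
```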
Towards topological consistency and similarity of multiresolution geographical maps
Several application contexts require the ability to use together and compare different geographic datasets (maps) concerning the same or overlapping areas. This is for example the case of mediator systems, integrating distinct data sources for query processing, and GISs dealing with multi-resolution maps. In both cases, distinct maps may represent the same geographic feature with different geometry type (a road can be a region in one map and a line in another one). An important issue is therefore determining whether two multi-resolution maps are consistent, i.e., they represent the same area without contradictions, and, if not, if they are at least similar. In this paper we consider consistency and similarity of multi-resolution maps with respect to topological information. Existing approaches do not take feature geometry type into account. In this paper, we extend them with two notions of topological consistency, the first requiring the same topological relation between pairs of common features, the second `relaxing' the first one by considering similarity between topological relations. A similarity function for multi-resolution maps is then provided, taking into account both feature geometry types and topological relations of map objects. We finally discuss how the proposed consistency and similarity concepts can be significantly used in GIS applications. Some experimental results are also reported to show the effectiveness of the proposed approach.
Online minimization knapsack problem
Deep Hybrid Models : Bridging Discriminative and Generative Approaches
This paper proposes a new framework for hybrid models that couples a discriminative and a generative component through shared latent variables. The user specifies a model p with a generative component, a discriminative component, and latent variables z, factored as p(x, y, z) = p(y|x, z) · p(x, z). The components p(y|x, z) and p(x, z) can be very general; they share only the latent variables z, not parameters. Both components are trained with a multi-conditional objective α · E_{q(x,y)} E_{q(z|x)} [ℓ(y, p(y|x, z))] + β · D_f[q(x, z) || p(x, z)], where the first term is a discriminative loss (e.g., squared or log loss), the second term is an f-divergence (e.g., KL or JS), q(x, y) denotes the data distribution, and α, β > 0 are hyper-parameters.
Yield optimization using advanced statistical correlation methods
This work presents a novel yield optimization methodology based on establishing a strong correlation between a group of fails and an adjustable process parameter. The core of the methodology comprises three advanced statistical correlation methods. The first method performs multivariate correlation analysis to uncover linear correlation relationships between groups of fails and measurements of a process parameter. The second method partitions a dataset into multiple subsets and tries to maximize the average of the correlations, each calculated on one subset. The third method performs a statistical independence test to evaluate the risk of adjusting a process parameter. The methodology was applied to an automotive product line to improve yield. Five process parameter changes were discovered, which led to a significant improvement in yield and, consequently, a significant reduction in yield fluctuation.
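The following Python sketch illustrates, on synthetic data, two of the steps described above: a linear correlation between a fail group and a process-parameter measurement, and an independence test used to gauge the risk of adjusting that parameter. The synthetic data, the binning, and the use of Pearson correlation and a chi-square test are illustrative assumptions, not the paper's exact multivariate methods.

```python
# Illustrative correlation and independence-test steps on synthetic per-lot data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic data: a process parameter and a fail count that partly tracks it
param = rng.normal(loc=1.0, scale=0.1, size=120)
fails = 50 - 30 * param + rng.normal(scale=2.0, size=120)

# Step 1: linear correlation between the fail group and the parameter
r, p_value = stats.pearsonr(param, fails)
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")

# Step 2: independence test between a second fail group and the parameter,
# binned into a contingency table; a small p-value would flag a risk that
# adjusting the parameter also affects this other fail group.
other_fails = rng.poisson(lam=3, size=120)
table = np.histogram2d(param, other_fails, bins=(3, 3))[0]
chi2, p_indep, _, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_indep:.3g}")
```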
A weighted additive fuzzy programming approach for multi-criteria supplier selection
In supply chain management, to build strategic and strong relationships, firms should select the best suppliers by applying an appropriate method and selection criteria. In this paper, to handle ambiguity and fuzziness in the supplier selection problem effectively, a new weighted additive fuzzy programming approach is developed. First, linguistic values expressed as trapezoidal fuzzy numbers are used to assess the weights of the factors. The weights are obtained from the distances of each factor to the Fuzzy Positive Ideal Rating and the Fuzzy Negative Ideal Rating. Then, applying the suppliers' constraints, the goals, and the factor weights, a fuzzy multi-objective linear model is developed to solve the selection problem and assign optimum order quantities to each supplier. The proposed model is illustrated by a numerical example.
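As a sketch of the weight-derivation step, the following Python snippet scores trapezoidal fuzzy ratings by their distances to a Fuzzy Positive Ideal Rating and a Fuzzy Negative Ideal Rating and turns those distances into normalized weights via a closeness ratio. The linguistic scale, distance measure, and weighting formula are common choices assumed for illustration and may differ from the paper's exact formulation.

```python
# Factor weights from linguistic (trapezoidal fuzzy) ratings via distances to
# fuzzy ideal ratings; scale and formulas are illustrative assumptions.
import math

def trap_distance(A, B):
    """Vertex distance between trapezoidal fuzzy numbers A=(a1,a2,a3,a4) and B."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(A, B)) / 4.0)

# Illustrative linguistic scale on [0, 1]
LINGUISTIC = {
    "low":    (0.0, 0.1, 0.2, 0.3),
    "medium": (0.3, 0.4, 0.5, 0.6),
    "high":   (0.7, 0.8, 0.9, 1.0),
}
FPIR = (1.0, 1.0, 1.0, 1.0)   # fuzzy positive ideal rating
FNIR = (0.0, 0.0, 0.0, 0.0)   # fuzzy negative ideal rating

factor_ratings = {"quality": "high", "price": "medium", "delivery": "low"}

closeness = {}
for factor, label in factor_ratings.items():
    rating = LINGUISTIC[label]
    d_pos = trap_distance(rating, FPIR)
    d_neg = trap_distance(rating, FNIR)
    closeness[factor] = d_neg / (d_pos + d_neg)   # closer to FPIR -> larger weight

total = sum(closeness.values())
weights = {f: c / total for f, c in closeness.items()}   # normalized weights
print(weights)
```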
Parallelization Strategies for Ant Colony Optimization
Ant Colony Optimization (ACO) is a new population-oriented search metaphor that has been successfully applied to NP-hard combinatorial optimization problems. In this paper we discuss parallelization strategies for Ant Colony Optimization algorithms. We empirically test the simplest strategy, that of executing parallel independent runs of an algorithm. The empirical tests apply MAX–MIN Ant System, one of the most efficient ACO algorithms, to the Traveling Salesman Problem and show that using parallel independent runs is very effective.
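The parallel-independent-runs strategy itself is simple to sketch: launch several independent runs of a stochastic solver and keep the best result. In the Python sketch below, a randomized nearest-neighbour heuristic stands in for MAX–MIN Ant System, since only the parallelization pattern is being illustrated.

```python
# Parallel independent runs of a stochastic TSP solver; the best tour is kept.
# A randomized nearest-neighbour heuristic stands in for MAX-MIN Ant System.
import random
from multiprocessing import Pool

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def one_run(args):
    seed, dist = args
    rng = random.Random(seed)
    n = len(dist)
    current = rng.randrange(n)
    tour, unvisited = [current], set(range(n)) - {current}
    while unvisited:
        # choose among the two nearest unvisited cities to keep the run stochastic
        candidates = sorted(unvisited, key=lambda j: dist[current][j])[:2]
        current = rng.choice(candidates)
        tour.append(current)
        unvisited.remove(current)
    return tour_length(tour, dist), tour

if __name__ == "__main__":
    rng = random.Random(0)
    pts = [(rng.random(), rng.random()) for _ in range(30)]
    dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
            for ax, ay in pts]
    with Pool(4) as pool:                       # 4 independent runs in parallel
        results = pool.map(one_run, [(s, dist) for s in range(4)])
    best_len, best_tour = min(results)
    print(f"best tour length over independent runs: {best_len:.3f}")
```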
Context-aware application scheduling in mobile systems: what will users do and not do next?
Usage patterns of mobile devices depend on a variety of factors such as time, location, and previous actions. Hence, context-awareness can be the key to making mobile systems personalized and situation-dependent in managing their resources. We first reveal new findings from our own Android user experiment: (i) the launching probabilities of applications follow Zipf's law, and (ii) inter-running and running times of applications conform to log-normal distributions. We also find context-dependency in application usage patterns, for which we classify contexts in a personalized manner with unsupervised learning methods. Using the acquired knowledge, we develop a novel context-aware application scheduling framework, CAS, that adaptively unloads and preloads background applications in a timely manner. Our trace-driven simulations with 96 user traces demonstrate the benefits of CAS over existing algorithms. We also verify the practicality of CAS by implementing it on the Android platform.
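As a sketch of how such usage statistics can drive scheduling, the following Python snippet fits a log-normal model to synthetic inter-launch gaps of one application and computes the probability of a launch within the next few minutes, which could gate a preload decision. The trace, threshold, and decision rule are illustrative; this is not the CAS framework.

```python
# Fit a log-normal to inter-launch gaps and derive a simple preload decision.
import math
import numpy as np

rng = np.random.default_rng(2)

# Synthetic inter-launch gaps (seconds) for one app, log-normally distributed
gaps = rng.lognormal(mean=7.0, sigma=0.8, size=500)

# Fit a log-normal via the sample mean/std of log(gaps)
mu, sigma = np.log(gaps).mean(), np.log(gaps).std()

def launch_prob_within(t_since_last, horizon):
    """P(next launch within `horizon` | `t_since_last` already elapsed)."""
    def cdf(t):
        return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))
    tail = 1.0 - cdf(t_since_last)
    if tail <= 0.0:
        return 1.0
    return (cdf(t_since_last + horizon) - cdf(t_since_last)) / tail

# Preload the app if a launch in the next 5 minutes is likely enough
p = launch_prob_within(t_since_last=1800.0, horizon=300.0)
print(f"launch probability in next 5 min: {p:.2f} ->",
      "preload" if p > 0.2 else "skip")
```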
A nonlinear observer approach for concurrent estimation of pose, IMU bias and camera-to-IMU rotation
This paper concerns the problem of pose estimation for an inertial-visual sensor. It is well known that IMU bias and calibration errors between the camera and IMU frames can impair the achievement of high-quality estimates through the fusion of visual and inertial data. The main contribution of this work is the design of new observers to estimate pose, IMU bias and camera-to-IMU rotation. The observer design relies on an extension of the so-called passive complementary filter on SO(3). Stability of the observers is established using Lyapunov functions under adequate observability conditions. Experimental results are presented to assess this approach.
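For context, here is a minimal Python sketch of a passive complementary filter on SO(3), the building block the observers above extend; the IMU-bias and camera-to-IMU-rotation estimation of the paper are omitted, and the gain and simulated measurements are illustrative.

```python
# Minimal passive complementary filter on SO(3): integrate the gyro and correct
# with an attitude "measurement" (e.g. from the camera). Bias estimation omitted.
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def so3_exp(w):
    """Rodrigues formula: matrix exponential of skew(w)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def vex(M):
    """Inverse of skew() applied to the antisymmetric part of M."""
    A = 0.5 * (M - M.T)
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

def filter_step(R_hat, omega_meas, R_meas, kP, dt):
    # Innovation from the attitude measurement, expressed in the body frame
    correction = kP * vex(R_hat.T @ R_meas)
    # Integrate gyro plus correction directly on the group
    return R_hat @ so3_exp((omega_meas + correction) * dt)

# Tiny simulation: constant true rotation rate, noisy gyro, perfect R measurements
R_true, R_hat = np.eye(3), so3_exp(np.array([0.3, -0.2, 0.1]))  # initial estimate error
omega_true, dt = np.array([0.0, 0.0, 0.5]), 0.01
for _ in range(2000):
    R_true = R_true @ so3_exp(omega_true * dt)
    omega_meas = omega_true + 0.01 * np.random.randn(3)
    R_hat = filter_step(R_hat, omega_meas, R_true, kP=2.0, dt=dt)
err = np.degrees(np.arccos(np.clip((np.trace(R_true.T @ R_hat) - 1) / 2, -1, 1)))
print(f"final attitude error: {err:.3f} deg")
```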
Classification criteria for systemic sclerosis subsets.
OBJECTIVE To evaluate the measurement properties of criteria for systemic sclerosis (SSc) subsets for classification of patients in SSc trials, and to determine if any one criteria set confers measurement advantage over others. METHODS A systematic review of articles describing classification criteria for SSc subsets was performed. Evidence supporting the sensibility (statement of purpose for which the criteria will be used, population, setting, face and content validity, and feasibility), validity, and reliability of the criteria was evaluated. RESULTS Fourteen sets of criteria for SSc subsets were identified. There is variability in the intended purpose and setting for which criteria sets are to be applied. Although face validity improves with the addition of less commonly encountered subsets or disease manifestations as criteria, the feasibility of implementing such criteria is conversely limited. Content validity for most criteria sets has not been evaluated due to lack of an explicitly stated conceptual framework for SSc. The criteria with 3 or more subsets do not provide incremental predictive validity over the 2-subset criteria. Our ability to compare subset criteria on divergent validity and reliability is limited by a lack of data. CONCLUSION The 2-subset criteria of LeRoy, et al have good feasibility, acceptable face validity, and good predictive validity. Further research is needed to compare the content validity, divergent validity, and reliability of these with other subset criteria for use in SSc trials.
Structural Deep Embedding for Hyper-Networks
Network embedding has recently attracted much attention in data mining. Existing network embedding methods mainly focus on networks with pairwise relationships. In the real world, however, relationships among data points can go beyond pairwise, i.e., three or more objects can be involved in each relationship represented by a hyperedge, thus forming hyper-networks. These hyper-networks pose great challenges to existing network embedding methods when the hyperedges are indecomposable, that is to say, any subset of the nodes in a hyperedge cannot form another hyperedge. Such indecomposable hyperedges are especially common in heterogeneous networks. In this paper, we propose a novel Deep Hyper-Network Embedding (DHNE) model to embed hyper-networks with indecomposable hyperedges. More specifically, we theoretically prove that any linear similarity metric in the embedding space commonly used in existing methods cannot maintain the indecomposability property in hyper-networks, and we thus propose a new deep model to realize a non-linear tuplewise similarity function while preserving both local and global proximities in the formed embedding space. We conduct extensive experiments on four different types of hyper-networks, including a GPS network, an online social network, a drug network and a semantic network. The empirical results demonstrate that our method can significantly and consistently outperform the state-of-the-art algorithms.
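To illustrate what a non-linear tuplewise similarity looks like, the following toy Python snippet scores a candidate 3-node hyperedge by concatenating the node embeddings and passing them through a small non-linear network, rather than using pairwise dot products. The weights are random, so this shows only the functional form, not a trained DHNE model.

```python
# Toy non-linear tuplewise similarity for 3-node hyperedges (untrained weights).
import numpy as np

rng = np.random.default_rng(3)

EMB_DIM, HIDDEN, NUM_NODES = 8, 16, 20
embeddings = rng.normal(size=(NUM_NODES, EMB_DIM))

# Random network weights standing in for learned parameters
W1 = rng.normal(scale=0.5, size=(3 * EMB_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
w2 = rng.normal(scale=0.5, size=HIDDEN)

def tuplewise_similarity(i, j, k):
    """Non-linear similarity score for the candidate hyperedge (i, j, k)."""
    x = np.concatenate([embeddings[i], embeddings[j], embeddings[k]])
    h = np.tanh(x @ W1 + b1)                 # non-linear interaction of the tuple
    return 1.0 / (1.0 + np.exp(-(h @ w2)))  # score that the hyperedge exists

print("score for (0, 1, 2):", tuplewise_similarity(0, 1, 2))
```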