title | abstract |
---|---|
A Deep Multi-Modal CNN for Multi-Instance Multi-Label Image Classification | Deep convolutional neural networks (CNNs) have shown superior performance on the task of single-label image classification. However, the applicability of CNNs to multi-label images still remains an open problem, mainly because of two reasons. First, each image is usually treated as an inseparable entity and represented as one instance, which mixes the visual information corresponding to different labels. Second, the correlations amongst labels are often overlooked. To address these limitations, we propose a deep multi-modal CNN for multi-instance multi-label image classification, called MMCNN-MIML. By combining CNNs with multi-instance multi-label (MIML) learning, our model represents each image as a bag of instances for image classification and inherits the merits of both CNNs and MIML. In particular, MMCNN-MIML has three main appealing properties: 1) it can automatically generate instance representations for MIML by exploiting the architecture of CNNs; 2) it takes advantage of the label correlations by grouping labels in its later layers; and 3) it incorporates the textual context of label groups to generate multi-modal instances, which are effective in discriminating visually similar objects belonging to different groups. Empirical studies on several benchmark multi-label image data sets show that MMCNN-MIML significantly outperforms the state-of-the-art baselines on multi-label image classification tasks. |
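To make the bag-of-instances idea concrete, here is a minimal sketch of MIML-style aggregation, assuming (purely as an illustration, not the MMCNN-MIML architecture) that each spatial cell of a CNN feature map is one instance and a plain linear scorer rates every instance against every label; all shapes and names below are invented for the example.

```python
import numpy as np

# Treat each spatial cell of a conv feature map as one instance in the
# image's bag, score every instance against every label, then max-pool
# over the bag: the image is positive for a label if any instance is.
rng = np.random.default_rng(0)

H, W, D, L = 7, 7, 512, 20                   # feature-map grid, channels, labels
feature_map = rng.normal(size=(H, W, D))     # stand-in for a conv layer output
instances = feature_map.reshape(-1, D)       # bag of H*W instances

W_label = rng.normal(size=(D, L)) * 0.01     # illustrative per-label scorer
instance_scores = instances @ W_label        # (H*W, L) instance-label scores

bag_scores = instance_scores.max(axis=0)     # MIML aggregation: max over the bag
bag_probs = 1.0 / (1.0 + np.exp(-bag_scores))  # independent sigmoid per label
print(bag_probs.round(3))
```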
TOURISM ONTOLOGY AND SEMANTIC MANAGEMENT SYSTEM: STATE-OF-THE-ART ANALYSIS | The global importance of tourism is steadily rising, creating new job opportunities in many countries. Today’s information management solutions for the complex tasks of tourism intermediaries are still at an early stage from a semantic point of view. This paper presents some preliminary results of the OnTourism project. OnTourism is aimed at (1) applying, concretizing and evaluating Semantic Web technologies such as ontologies, semantic annotation of content, and semantic search in the information-rich and economically important tourism domain, (2) identifying, developing and integrating reference ontologies for the tourism industry, and (3) showing a proof-of-concept in a real-world scenario of the Austrian tourism industry. The first results presented in this paper identify publicly available tourism ontologies and existing freely available ontology management tools for the tourism domain. We identify seven tourism ontologies that are suitable as a basis for creating problem-specific ontologies. Furthermore, we review and evaluate five freely available ontology management tools that are suited for application in the tourism domain. |
Magneto-Acoustic Resonator for Aquatic Animal Tracking | Over the past three decades, passive acoustic telemetry has significantly helped marine scientists to study and understand the spatial ecology, migratory behaviors, and mortality rates of aquatic animals. A popular telemetry system consists of two components: an acoustic transmitter tag attached to an aquatic animal and powered by a small battery, and a stationary station that receives the acoustic signals from the tagged animal and determines its location. The added weight and increased size of the tag introduced by the battery limit the implementation of this system to relatively large animals. Moreover, these tags have a limited operational time determined by the lifetime of the battery in combination with the measurement frequency, data resolution, and transfer rate. In this paper, a self-powered magneto-acoustic resonator for animal tracking is proposed. It is achieved by utilizing the low-frequency motions of the animals to excite high-frequency acoustic pulses. The measurement results show that the device is capable of producing an average sound pressure level of 55 dB at a distance of 1 m, with a resonant frequency of 15 kHz. |
Catching a draft: on the process of selecting quarterbacks in the National Football League amateur draft | The reverse order college draft gives the worst teams in the National Football League (NFL) the opportunity to hire the best amateur talent. For it to work effectively, teams must be able to identify the “best” talent. Our study of NFL quarterbacks highlights problems with the draft process. We find only a weak correlation between teams’ evaluations on draft day and subsequent quarterback performance in the NFL. Moreover, many of the factors that enhance a quarterback’s draft position are unrelated to future NFL performance. Our analysis highlights the difficulties in evaluating workers in the uncertain environment of professional sports. |
Analysis of Three IoT-Based Wireless Sensors for Environmental Monitoring | The recent changes in climate have increased the importance of environmental monitoring, making it a topical and highly active research area. This field is based on remote sensing and on wireless sensor networks for gathering data about the environment. Recent advancements, such as the vision of the Internet of Things (IoT), the cloud computing model, and cyber-physical systems, provide support for the transmission and management of huge amounts of data regarding the trends observed in environmental parameters. In this context, the current work presents three different IoT-based wireless sensors for environmental and ambient monitoring: one employing User Datagram Protocol (UDP)-based Wi-Fi communication, one communicating through Wi-Fi and Hypertext Transfer Protocol (HTTP), and a third one using Bluetooth Smart. All of the presented systems provide the possibility of recording data at remote locations and of visualizing them from every device with an Internet connection, enabling the monitoring of geographically large areas. The development details of these systems are described, along with the major differences and similarities between them. The feasibility of the three developed systems for implementing monitoring applications, taking into account their energy autonomy, ease of use, solution complexity, and Internet connectivity facility, was analyzed, and revealed that they make good candidates for IoT-based solutions. |
Pharmacokinetics of modified-release prednisone tablets in healthy subjects and patients with rheumatoid arthritis. | In rheumatoid arthritis (RA), nocturnal release of proinflammatory cytokines is not adequately counteracted by endogenous glucocorticoid and is associated with symptoms of morning stiffness and pain. Taking exogenous glucocorticoid during the night reduces morning stiffness significantly more than treatment at the conventional time in the morning, although waking to take tablets is unacceptable for patients. Modified-release prednisone tablets were developed to allow administration at bedtime for programmed delivery of glucocorticoid during the night. Single-center crossover studies were conducted, each in ≤24 healthy subjects, to compare the pharmacokinetics of a single 5-mg oral dose of modified-release prednisone and conventional prednisone, as well as the effect of food on bioavailability. There was no substantial difference in pharmacokinetic parameters of the formulations apart from the programmed delay in release of glucocorticoid from the modified-release tablets (C(max) 97%, AUC(0-∞) 101%, 90% confidence intervals within the requisite range for bioequivalence). Administration after a full or light meal did not affect pharmacokinetic characteristics, but bioavailability was reduced under fasted conditions. Pharmacokinetic evaluation in 9 patients with RA confirmed that modified-release prednisone tablets taken at bedtime (around 22:00 h) with or after an evening meal result in programmed release of glucocorticoid 4 to 6 hours after intake. |
Projected Gradient Methods for Nonnegative Matrix Factorization | Nonnegative matrix factorization (NMF) can be formulated as a minimization problem with bound constraints. Although bound-constrained optimization has been studied extensively in both theory and practice, so far no study has formally applied its techniques to NMF. In this letter, we propose two projected gradient methods for NMF, both of which exhibit strong optimization properties. We discuss efficient implementations and demonstrate that one of the proposed methods converges faster than the popular multiplicative update approach. A simple Matlab code is also provided. |
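A minimal sketch of the projected-gradient idea for NMF: alternate gradient steps on W and H and project back onto the nonnegative orthant. A fixed step size is used for brevity; the letter's methods select the step adaptively (e.g., with a line search), and the matrix sizes and learning rate below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((30, 20))            # data matrix to factor, V ≈ W @ H
r, eta, iters = 5, 1e-3, 2000       # rank, step size, iterations (illustrative)
W = rng.random((30, r))
H = rng.random((r, 20))

for _ in range(iters):
    grad_W = (W @ H - V) @ H.T      # gradient of 0.5*||V - WH||_F^2 w.r.t. W
    W = np.maximum(W - eta * grad_W, 0.0)   # projection onto W >= 0
    grad_H = W.T @ (W @ H - V)      # gradient w.r.t. H
    H = np.maximum(H - eta * grad_H, 0.0)   # projection onto H >= 0

print(np.linalg.norm(V - W @ H))    # reconstruction error after optimization
```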
Denoising Distantly Supervised Open-Domain Question Answering | [Overview figure: for an example question about the capital of Ireland, a fast-skimming paragraph selector scores each retrieved paragraph, $P(p_i \mid q, P)$, while a careful-reading paragraph reader extracts answers from each paragraph, $P(a \mid q, p_i)$; the final answer distribution marginalizes over paragraphs as $P(a \mid q, P) = \sum_{p_i \in P} P(a \mid q, p_i)\, P(p_i \mid q, P)$.] |
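A tiny numeric illustration of the aggregation formula recovered above, using made-up selector and reader probabilities rather than the paper's networks:

```python
import numpy as np

# Given selector probabilities over 3 paragraphs and per-paragraph reader
# probabilities over 2 candidate answers, the final answer distribution
# marginalizes out the paragraph choice.
selector = np.array([0.7, 0.2, 0.1])     # P(p_i | q, P), sums to 1
reader = np.array([[0.9, 0.1],           # P(a | q, p_i), one row per paragraph
                   [0.5, 0.5],
                   [0.2, 0.8]])

answer_dist = selector @ reader          # P(a|q,P) = sum_i P(a|q,p_i) P(p_i|q,P)
print(answer_dist)                       # [0.75, 0.25]
```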
Use of modified cornstarch therapy to extend fasting in glycogen storage disease types Ia and Ib. | BACKGROUND
Type I glycogen storage disease (GSD) is caused by a deficiency of glucose-6-phosphatase resulting in severe fasting hypoglycemia.
OBJECTIVE
We compared the efficacy of a new modified starch with the currently used cornstarch therapy in patients with type Ia and Ib GSD.
DESIGN
This was a randomized, 2-d, double-blinded, crossover pilot study comparing the commonly used uncooked cornstarch with the experimental starch in 12 subjects (6 GSDIa, 6 GSDIb) aged ≥13 y. At 22:00, the subjects were given 100 g of digestible starch, and glucose and lactate were measured hourly until the subject's plasma glucose concentration reached 60 mg/dL or until the subject had fasted for 10 h. The order in which the products were tested was randomized in a blinded fashion.
RESULTS
The matched-pair Gehan rank test for censored survival was used to compare the therapies. The experimental starch maintained blood glucose concentrations significantly longer than did the traditional therapy (P = 0.013) in the 2-sided analysis. Most of the benefit was found to be after glucose concentrations fell below 70 mg/dL. The currently used cornstarch resulted in higher peak glucose concentrations and a more rapid rate of fall than did the new starch.
CONCLUSIONS
The experimental starch was superior to standard therapy in preventing hypoglycemia (≤60 mg/dL). This therapy may allow patients with GSD to sleep through the night without awakening for therapy while enhancing safety. Additional studies are warranted to determine whether alternative dosing will further improve control in the therapeutic blood glucose range. |
Long-term clinical results of 2 different ablation strategies in patients with paroxysmal and persistent atrial fibrillation. | BACKGROUND
Data regarding the long-term efficacy of atrial fibrillation (AF) ablation are still lacking.
METHODS AND RESULTS
Two hundred four consecutive patients symptomatic for paroxysmal or persistent/permanent AF were randomly assigned to 2 different ablation schemes: pulmonary vein isolation (PVI) and PVI plus left linear lesions (LL). Primary end point was to assess the maintenance of sinus rhythm (SR) after procedures 1 and 2 in the absence of antiarrhythmic drugs in a long-term follow-up of at least 3 years. Paroxysmal AF: with a single procedure at 12-month follow-up, 46% of patients treated with PVI maintained SR, whereas at 3-year follow-up, 29% were in SR; using the "PVI plus LL" at the 12-month follow-up, 57% of patients were in SR, whereas at the 3-year follow-up, 53% remained in SR. After a second procedure, the long-term overall success rate without antiarrhythmic drugs was 62% with PVI and 85% with PVI plus LL. Persistent/permanent AF: with a single procedure at the 12-month follow-up, 27% of patients treated with PVI were in SR, whereas at the 3-year follow-up, 19% maintained SR; using the PVI plus LL with a single procedure at the 12-month follow-up, 45% of patients were in SR, whereas at the 3-year follow-up, 41% remained in SR. After a second procedure, the long-term overall success rate without antiarrhythmic drugs was 39% with PVI and 75% with PVI plus LL.
CONCLUSIONS
A long-term follow-up of AF ablation shows that short-term results cannot be considered permanent because AF recurrences are still present after the first year especially in patients who have had "PVI" strategy. PVI isolation plus LL is superior to the PVI strategy in maintaining SR without antiarrhythmic drugs after procedures 1 and 2 both in paroxysmal and persistent AF. |
The Dirty War Index: A Public Health and Human Rights Tool for Examining and Monitoring Armed Conflict Outcomes | Documentation, analysis, and prevention of the harmful effects of armed conflict on populations are established public health priorities [1–5]. Although public health research on war is increasingly framed in human rights terms [6–13], general public health methods are typically applied without direct links to laws of war. Laws of war are international humanitarian laws and customary standards regarding the treatment of civilians and combatants, mainly described in the four Geneva Conventions of 1949 and their Additional Protocols I and II regarding international and civil conflicts [14]. With notable exceptions [11,15–17], absolute numbers are usually reported (e.g., number of persons killed), without systematic description of the proportional effects of armed conflict, thereby limiting the utility of findings and scope of interpretation. In this paper, we introduce the “Dirty War Index” (DWI): a data-driven public health tool based on laws of war that systematically identifies rates of particularly undesirable or prohibited, i.e., “dirty,” war outcomes inflicted on populations during armed conflict (e.g., civilian death, child injury, or torture). DWIs are explicitly linked to international humanitarian law to make public health outcomes directly relevant to prevention, monitoring, and humanitarian intervention for the moderation of war’s effects. After choosing the particular outcome to be measured, a DWI is calculated as: |
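A minimal rendering of the rate described above, assuming the straightforward ratio form implied by the abstract (the number of undesirable or prohibited outcomes among all recorded outcomes, expressed per 100):

```latex
\mathrm{DWI} \;=\; \frac{\text{number of ``dirty'' cases (e.g., civilian deaths)}}{\text{total number of cases (e.g., all recorded deaths)}} \times 100
```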
A comprehensive grasp taxonomy | The goal of this work is to review and summarize the grasping taxonomies reported in the literature. Our long-term goal is to understand how to reduce the mechanical complexity of anthropomorphic hands while still preserving their dexterity. On the basis of a literature survey, 33 different grasp types are taken into account. These were then arranged in a hierarchical manner and merged, resulting in 17 distinct grasp types. |
Multi Relational Data Mining Approaches: A Data Mining Technique | Multi-relational data mining (MRDM) has developed as an alternative way of handling structured data, such as the data stored in an RDBMS, by mining multiple tables directly. In MRDM, patterns span multiple tables (relations) of a relational database. Because the data are spread over many tables, several problems arise in data mining practice. To deal with this, one either constructs a single table by propositionalisation or uses a multi-relational data mining algorithm. MRDM approaches have been applied successfully in the area of bioinformatics. Three popular pattern-finding techniques, classification, clustering, and association, are frequently used in MRDM. By mining multiple tables directly, MRDM avoids expensive join operations and the semantic losses they can cause. This paper focuses on some of the application areas of MRDM and future directions, as well as a comparison of ILP, GM, SSDM, and MRDM. |
Time Series Segmentation through Automatic Feature Learning | Internet of things (IoT) applications have become increasingly popular in recent years, with applications ranging from building energy monitoring to personal health tracking and activity recognition. In order to leverage these data, automatic knowledge extraction, whereby we map from observations to interpretable states and transitions, must be done at scale. As such, we have seen many recent IoT data sets include annotations with a human expert specifying states, recorded as a set of boundaries and associated labels in a data sequence. These data can be used to build automatic labeling algorithms that produce labels as an expert would. Here, we refer to human-specified boundaries as breakpoints. Traditional changepoint detection methods only look for statistically detectable boundaries that are defined as abrupt variations in the generative parameters of a data sequence. However, we observe that breakpoints occur on more subtle boundaries that are non-trivial to detect with these statistical methods. In this work, we propose a new unsupervised approach, based on deep learning, that outperforms existing techniques and learns the more subtle breakpoint boundaries with high accuracy. Through extensive experiments on various real-world data sets, including human-activity sensing data, speech signals, and electroencephalogram (EEG) activity traces, we demonstrate the effectiveness of our algorithm for practical applications. Furthermore, we show that our approach achieves significantly better performance than previous methods. |
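A schematic of the windowed-feature recipe the abstract describes: embed sliding windows of the series, score the dissimilarity of adjacent windows in feature space, and flag peaks as candidate breakpoints. PCA stands in here for the paper's learned deep features; the window size, hop, and threshold are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy series with a regime change at sample 500.
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(1.0, 1.5, 500)])

win, hop = 50, 25
starts = np.arange(0, len(x) - win, hop)
windows = np.stack([x[i:i + win] for i in starts])

# Cheap stand-in "feature learner": project windows onto top principal axes.
centered = windows - windows.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
feats = centered @ Vt[:5].T                       # 5-D window features

# Dissimilarity between adjacent windows; large jumps suggest breakpoints.
dissim = np.linalg.norm(np.diff(feats, axis=0), axis=1)
peaks = np.where(dissim > dissim.mean() + 2 * dissim.std())[0]
print(starts[peaks + 1])                          # approximate breakpoint positions
```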
LivDet 2017 Fingerprint Liveness Detection Competition 2017 | Fingerprint Presentation Attack Detection (FPAD) deals with distinguishing images coming from artificial replicas of the fingerprint characteristic, made of materials like silicone, gelatine or latex, from images coming from live fingerprints. Images are captured by modern scanners, typically relying on solid-state or optical technologies. Since 2009, the Fingerprint Liveness Detection Competition (LivDet) has aimed to assess the performance of state-of-the-art algorithms according to a rigorous experimental protocol and, at the same time, to provide a simple overview of the basic achievements. The competition is open to all academic research centers and all companies that work in this field. The positive, increasing trend in the number of participants, which supports the success of this initiative, is confirmed again this year: 17 algorithms were submitted to the competition, with a larger involvement of companies and academia. This means that the topic is relevant for both sides, and points out that a lot of work must still be done in terms of fundamental and applied research. |
Chinese Medical Question Answer Matching Using End-to-End Character-Level Multi-Scale CNNs | This paper focuses mainly on the problem of Chinese medical question answer matching, which is arguably more challenging than open-domain question answer matching in English due to the combination of its domain-restricted nature and the language-specific features of Chinese. We present an end-to-end character-level multi-scale convolutional neural framework in which character embeddings instead of word embeddings are used to avoid Chinese word segmentation in text preprocessing, and multi-scale convolutional neural networks (CNNs) are then introduced to extract contextual information from either question or answer sentences over different scales. The proposed framework can be trained with minimal human supervision and does not require any handcrafted features, rule-based patterns, or external resources. To validate our framework, we create a new text corpus, named cMedQA, by harvesting questions and answers from an online Chinese health and wellness community. The experimental results on the cMedQA dataset show that our framework significantly outperforms several strong baselines, and achieves an improvement of top-1 accuracy by up to 19%. |
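A minimal sketch of a character-level multi-scale CNN encoder of the kind described, with illustrative sizes rather than the authors' configuration (the class name MultiScaleCharCNN is invented here); a question and a candidate answer would each be encoded this way and their vectors compared, e.g., by cosine similarity.

```python
import torch
import torch.nn as nn

class MultiScaleCharCNN(nn.Module):
    """Character embeddings feed parallel 1-D convolutions with different
    kernel widths; max-pooled outputs are concatenated into one vector."""
    def __init__(self, vocab=6000, emb=128, channels=64, widths=(2, 3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb, channels, w, padding=w // 2) for w in widths
        )

    def forward(self, char_ids):                   # (batch, seq_len) of char ids
        x = self.embed(char_ids).transpose(1, 2)   # (batch, emb, seq_len)
        # One max-pooled feature vector per convolution scale.
        scales = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return torch.cat(scales, dim=1)            # (batch, channels * n_scales)

enc = MultiScaleCharCNN()
q = torch.randint(0, 6000, (2, 40))                # batch of 2 char sequences
print(enc(q).shape)                                # torch.Size([2, 256])
```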
A Test Architecture for Machine Learning Product | As machine learning (ML) technology continues to spread through rapid evolution, systems and services using ML technology, called ML products, are making a big impact on our lives, society, and economy. Meanwhile, quality assurance (QA) for ML products is much more difficult than for hardware, non-ML software, and services, because the superior performance of ML technology over non-ML technology comes in exchange for characteristics of ML products such as low explainability. We must sustain rapid evolution and reduce the quality risk of ML products simultaneously. In this paper, we present a quality assurance framework for ML products. The scope of QA in this paper is limited to product evaluation. First, a policy of QA for ML products is proposed. General principles of product evaluation are introduced and applied to ML product evaluation as part of the policy. They are composed of A-ARAI: Allowability, Achievability, Robustness, Avoidability and Improvability. A strategy for ML product evaluation is constructed as another part of the policy. A quality integrity level for ML products is also modelled. Second, we propose a test architecture for ML product testing. It consists of test levels and fundamental test types of ML product testing, including snapshot testing, learning testing and confrontation testing. Finally, we define QA activity levels for ML products. |
MedGAN: Medical Image Translation using GANs | Image-to-image translation is considered a next frontier in the field of medical image analysis, with numerous potential applications. However, recent advances in this field offer individualized solutions that either utilize specialized task-specific architectures or suffer from limited capacities and thus require refinement through non-end-to-end training. In this paper, we propose a novel general-purpose framework for medical image-to-image translation, titled MedGAN, which operates in an end-to-end manner on the image level. MedGAN builds upon recent advances in the field of generative adversarial networks (GANs) by combining the adversarial framework with a unique combination of non-adversarial losses which capture the high- and low-frequency components of the desired target modality. Namely, we utilize a discriminator network as a trainable feature extractor which penalizes the discrepancy between the translated medical images and the desired modalities in the pixel and perceptual sense. Moreover, style-transfer losses are utilized to match the textures and fine structures of the desired target images to the outputs. Additionally, we present a novel generator architecture, titled CasNet, which enhances the sharpness of the translated medical outputs through progressive refinement via encoder-decoder pairs. To demonstrate the effectiveness of our approach, we apply MedGAN to three novel and challenging applications: PET-CT translation, correction of MR motion artefacts and PET image denoising. Qualitative and quantitative comparisons with state-of-the-art techniques have emphasized the superior performance of the proposed framework. MedGAN can be directly applied as a general framework for future medical translation tasks. |
Bridging the Semantic Gap Between Image Contents and Tags | With the exponential growth of Web 2.0 applications, tags have been used extensively to describe image contents on the Web. Due to the noisy and sparse nature of human-generated tags, how to understand and utilize these tags for image retrieval tasks has become an emerging research direction. As low-level visual features can provide fruitful information, they are employed to improve image retrieval results. However, it is challenging to bridge the semantic gap between image contents and tags. To attack this critical problem, we propose a unified framework in this paper which stems from a two-level data fusion between the image contents and tags: 1) a unified graph is built to fuse the visual feature-based image similarity graph with the image-tag bipartite graph; 2) a novel random walk model is then proposed, which utilizes a fusion parameter to balance the influences between the image contents and tags. Furthermore, the presented framework not only naturally incorporates the pseudo relevance feedback process, but can also be directly applied to applications such as content-based image retrieval, text-based image retrieval, and image annotation. Experimental analysis on a large Flickr dataset shows the effectiveness and efficiency of our proposed framework. |
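A sketch of the fused random-walk idea, assuming (purely as an illustration of the framework's fusion parameter, not its exact equations) that the parameter linearly blends a visual-similarity transition matrix with a tag-derived one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                            # number of images

def row_normalize(M):
    return M / M.sum(axis=1, keepdims=True)

visual = row_normalize(rng.random((n, n)))       # from visual-feature similarity
tag_based = row_normalize(rng.random((n, n)))    # image->tag->image two-step walk

alpha = 0.6                                      # fusion parameter: contents vs. tags
P = alpha * visual + (1 - alpha) * tag_based     # fused transition matrix

r = np.full(n, 1.0 / n)                          # start from a uniform distribution
for _ in range(100):
    r = r @ P                                    # power iteration toward the
print(r.round(3))                                #   stationary relevance scores
```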
Impact of landscape composition and configuration on forest specialist and generalist bird species in the fragmented Lacandona rainforest, Mexico | With accelerated land-use change throughout the tropics, an increasing proportion of global biodiversity is located in human-modified landscapes. Understanding the relative effects of landscape composition and configuration on biodiversity is needed to design effective conservation strategies. Yet this topic is poorly understood because most studies have been performed at the patch scale, and do not assess the impact of landscape structure on species. Using a multi-model inference approach, we assessed the relative effect of landscape composition (i.e. percentage of forest cover and matrix composition) and landscape configuration (i.e. number of forest patches and forest edge density) on α- and β-diversity of birds in 17 forest fragments and three areas of continuous forest within the Lacandona rainforest, Mexico. We tested these impacts at two spatial scales (100 and 500 ha) for forest specialist and generalist birds. In general, forest specialist birds showed stronger responses to landscape characteristics than generalist species, particularly to variations in landscape composition at the 100-ha scale. The loss of forest cover represented the main threat to forest specialist birds, with a negative impact on α-diversity that was consistent across the two spatial scales. In contrast, at the two spatial scales generalist birds seemed to be favored by forest loss, as α-diversity of these birds increased in landscapes with lower forest cover and a higher number of forest patches. If current deforestation rates continue, several forest specialists are likely to disappear. Future conservation and management initiatives should therefore prevent deforestation in this biodiversity-rich but vanishing tropical forest ecosystem. |
Hospital re-admissions in relation to acute stroke unit care versus conventional care in elderly patients the first year after stroke: the Göteborg 70+ Stroke study. | BACKGROUND
re-hospitalisation after discharge following index stroke varies over time and with age and comorbidity. There is little knowledge about whether stroke unit care reduces the need of re-admissions.
OBJECTIVES
to examine whether stroke unit care as compared with care in general medical wards was associated with fewer re-hospitalisations for conditions judged to be secondary to acute stroke and to identify the influence of stroke severity on re-admission rates.
DESIGN
we conducted a one-year randomised study to compare the outcome of treatment at an acute stroke unit in a care continuum with the outcome of treatment at general medical wards.
SETTINGS
acute and geriatric hospitals in Göteborg, Sweden.
SUBJECTS
216 elderly patients aged ≥70 years discharged to their own homes or to institutionalised living after index stroke.
METHODS
comparison of comorbidity classified according to Charlson's morbidity index, re-admission rates, length of hospital stay, number of re-admissions and diagnoses between a group treated at a stroke unit and a group treated at general wards.
RESULTS
the re-admission rates, length of hospital stay and causes of re-admissions did not differ between the two groups. Complications related to the damage to the brain and concomitant heart disease were the most common causes of re-admissions in both groups. Index stroke severity did not influence the re-admission rates.
CONCLUSIONS
re-admissions for conditions judged to be secondary to acute stroke were equal in the two groups in this prospective study. |
Role of dipstick in detection of haeme pigment due to rhabdomyolysis in victims of Bam earthquake. | Avoiding life-threatening complications of rhabdomyolysis depends on early diagnosis and prompt management. The aim of this study was to evaluate the role of urinary dipstick test in the detection of haeme pigment in patients who were at risk of acute renal failure (ARF) due to rhabdomyolysis after suffering injury in the Bam earthquake. Serum creatine phosphokinase (CPK) level was used as the gold standard for prediction of ARF. ARF developed in 8 (10%) of 79 patients studied. We found no significant differences in the sensitivity, specificity and accuracy of dipstick urine and serum CPK tests for identifying patients who were at risk of ARF. However, dipstick urine test is an easy test that can be performed quickly at an earthquake site. |
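For reference, the three test metrics compared in the study can be computed from a 2x2 confusion table as follows; the counts here are illustrative, not the study's data.

```python
def test_metrics(tp, fp, fn, tn):
    """Standard diagnostic-test metrics from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)     # fraction of true at-risk cases flagged
    specificity = tn / (tn + fp)     # fraction of not-at-risk cases cleared
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Illustrative counts only (the study reported 8 ARF cases among 79 patients).
print(test_metrics(tp=7, fp=10, fn=1, tn=61))
```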
T1C: IoT Security: - Threats, Security Challenges and IoT Security Research and Technology Trends | IoT Security: - Threats, Security Challenges and IoT Security Research and Technology Trends Sakir Sezer, Chair, Information and Communication Security, Queen's University Belfast, UK. |
Heterogeneity in the systems of pediatric diabetes care across the European Union. | BACKGROUND
It is known that the systems of pediatric diabetes care differ across the member states of the European Union (EU). The aim of this project was to characterize some of the main differences among the national systems.
METHODS
Data were collected using two questionnaires. The first one was distributed among leading centers of pediatric diabetes (one per country) with the aim of establishing an overview of the systems, national policies, quality control (QC) and financing of pediatric diabetes care. Responses were received from all 27 EU countries. The second questionnaire was widely disseminated among all 354 International Society for Pediatric and Adolescent Diabetes members with a domicile in an EU country; it included questions related to individual pediatric diabetes centers. A total of 108 datasets were collected and processed from healthcare professionals who were treating more than 29 000 children and adolescents with diabetes. Data on the reimbursement policies were verified by representatives of the pharmaceutical and medical device companies.
RESULTS
The collected data reflect the situation in 2009. There was a notable heterogeneity among the systems for provision of pediatric diabetes care across the EU. Only 20/27 EU countries had a pediatric diabetes register. Nineteen countries had officially recognized centers for pediatric diabetes, but only nine of them had defined criteria for becoming such a center. A system for QC of pediatric diabetes at the national level was reported in 7/26 countries. Reimbursement for treatment varied significantly across the EU, potentially causing inequalities in access to modern technologies.
CONCLUSIONS
The collected data help develop strategies toward improving equity and access to modern pediatric diabetes care across Europe. |
The three Graces, or the allegory of the gift | Marcel Mauss’ The gift is one of the most revered texts of social anthropology. It is also one of the most debated. But, paradoxically enough, these debates have not focused on the main cultural tradition to which the famous essay may be attached. In this article, I attempt to show that Mauss’ anthropological theorization of the gift perpetuates and slightly modifies a very ancient tradition of reflection, fundamentally based on a few concepts—charis, gratia, and grace—all of which played a crucial role in European cultural history. This article also reveals the specific function played in this context by the allegory and iconography of the three Graces. |
One deep music representation to rule them all? A comparative analysis of different representation learning strategies | Inspired by the success of deploying deep learning in the fields of Computer Vision and Natural Language Processing, this learning paradigm has also found its way into the field of Music Information Retrieval. In order to benefit from deep learning in an effective, but also efficient manner, deep transfer learning has become a common approach. In this approach, it is possible to reuse the output of a pre-trained neural network as the basis for a new learning task. The underlying hypothesis is that if the initial and new learning tasks show commonalities and are applied to the same type of input data (e.g., music audio), the generated deep representation of the data is also informative for the new task. Since, however, most of the networks used to generate deep representations are trained using a single initial learning source, their representation is unlikely to be informative for all possible future tasks. In this paper, we present the results of our investigation into the most important factors in generating deep representations for the data and learning tasks in the music domain. We conducted this investigation via an extensive empirical study that involves multiple learning sources, as well as multiple deep learning architectures with varying levels of information sharing between sources, in order to learn music representations. We then validate these representations considering multiple target datasets for evaluation. The results of our experiments yield several insights into how to approach the design of methods for learning widely deployable deep data representations in the music domain. |
Eliciting hierarchical structures from enumerative structures for ontology learning | Some discourse structures, such as enumerative structures, have typographical, punctuation, and layout characteristics which (1) make them easily identifiable and (2) convey hierarchical relations that provide clues to ontology fragments. This study aims to show how these textual objects can be exploited in order to considerably improve the process of ontology enrichment from text. |
A Randomized, Controlled Trial of Cavity Shave Margins in Breast Cancer. | BACKGROUND
Routine resection of cavity shave margins (additional tissue circumferentially around the cavity left by partial mastectomy) may reduce the rates of positive margins (margins positive for tumor) and reexcision among patients undergoing partial mastectomy for breast cancer.
METHODS
In this randomized, controlled trial, we assigned, in a 1:1 ratio, 235 patients with breast cancer of stage 0 to III who were undergoing partial mastectomy, with or without resection of selective margins, to have further cavity shave margins resected (shave group) or not to have further cavity shave margins resected (no-shave group). Randomization occurred intraoperatively after surgeons had completed standard partial mastectomy. Positive margins were defined as tumor touching the edge of the specimen that was removed in the case of invasive cancer and tumor that was within 1 mm of the edge of the specimen removed in the case of ductal carcinoma in situ. The rate of positive margins was the primary outcome measure; secondary outcome measures included cosmesis and the volume of tissue resected.
RESULTS
The median age of the patients was 61 years (range, 33 to 94). On final pathological testing, 54 patients (23%) had invasive cancer, 45 (19%) had ductal carcinoma in situ, and 125 (53%) had both; 11 patients had no further disease. The median size of the tumor in the greatest diameter was 1.1 cm (range, 0 to 6.5) in patients with invasive carcinoma and 1.0 cm (range, 0 to 9.3) in patients with ductal carcinoma in situ. Groups were well matched at baseline with respect to demographic and clinicopathological characteristics. The rate of positive margins after partial mastectomy (before randomization) was similar in the shave group and the no-shave group (36% and 34%, respectively; P=0.69). After randomization, patients in the shave group had a significantly lower rate of positive margins than did those in the no-shave group (19% vs. 34%, P=0.01), as well as a lower rate of second surgery for margin clearance (10% vs. 21%, P=0.02). There was no significant difference in complications between the two groups.
CONCLUSIONS
Cavity shaving halved the rates of positive margins and reexcision among patients with partial mastectomy. (Funded by the Yale Cancer Center; ClinicalTrials.gov number, NCT01452399.). |
Experiential attributes and consumer judgments | Traditionally, marketers have focused on functional and meaningful product differentiation and have shown that such differentiation is important because consumers engage in a deliberate reasoning process (Chernev, 2001; Shafir et al., 1993; Simonson, 1989). However, nowadays products in many categories are functionally highly similar, and it is difficult for consumers to differentiate products based on functional attributes. An alternative way of differentiating is to emphasize non-functional product characteristics or certain aspects of the judgment context. For example, the VW New Beetle brand has used unique colors and shapes very prominently. Apple Computers has used a smiley face that appeared on the screen of computers when they were powered up as well as translucent colors to differentiate, for example, its iMac and iPod lines from competitive products. In addition, Apple Computers has integrated the colors and shapes of the product design with the design of its websites and the so-called AppleStores. Similar approaches focusing on colors, shapes or affective stimuli have been used for other global brands as well and for local brands in all sorts of product categories, including commodities like water and salt. Here we refer to such attributes, which have emerged in marketing as key differentiators, as ‘experiential attributes’ (Schmitt, 1999). Specifically, experiential attributes consist of non-verbal stimuli that include sensory cues such as colors (Bellizzi et al., 1983; Bellizzi and Hite, 1992; Degeratu et al., 2000; Gorn et al., 1997; Meyers-Levy and Peracchio, 1995) and shapes (Veryzer and Hutchinson, 1998) as well as affective cues such as mascots that may appear on products, packaging or contextually as part of ads (Holbrook and Hirschman, 1982; Keller, 1987). Experiential attributes are also used in logos (Henderson et al., 2003), and as part of the judgment context, for example, as backgrounds on websites (Mandel and Johnson, 2002) and in shopping environments (Spies et al., 1997). Unlike functional attributes, experiential attributes are not utilitarian (Zeithaml, 1988). Instead, experiential attributes may result in positive ‘feelings and experiences’ (Schwarz and Clore, 1996; Winkielman et al., 2003). Yet, how exactly do consumers process experiential attributes? How can consumers use them to reach a decision among alternatives? Moreover, are there different ways of processing experiential attributes? In this chapter, we examine how experiential attributes are processed and how they are of value in consumer decision-making. We distinguish two ways of processing experiential features: deliberate processing, which is similar to the way functional attributes are processed, and fluent processing, which occurs without much deliberation. We identify judgment contexts in which consumers process experiential attributes deliberately and those in which they process them fluently. |
Neural reuse: a fundamental organizational principle of the brain. | An emerging class of theories concerning the functional structure of the brain takes the reuse of neural circuitry for various cognitive purposes to be a central organizational principle. According to these theories, it is quite common for neural circuits established for one purpose to be exapted (exploited, recycled, redeployed) during evolution or normal development, and be put to different uses, often without losing their original functions. Neural reuse theories thus differ from the usual understanding of the role of neural plasticity (which is, after all, a kind of reuse) in brain organization along the following lines: According to neural reuse, circuits can continue to acquire new uses after an initial or original function is established; the acquisition of new uses need not involve unusual circumstances such as injury or loss of established function; and the acquisition of a new use need not involve (much) local change to circuit structure (e.g., it might involve only the establishment of functional connections to new neural partners). Thus, neural reuse theories offer a distinct perspective on several topics of general interest, such as: the evolution and development of the brain, including (for instance) the evolutionary-developmental pathway supporting primate tool use and human language; the degree of modularity in brain organization; the degree of localization of cognitive function; and the cortical parcellation problem and the prospects (and proper methods to employ) for function to structure mapping. The idea also has some practical implications in the areas of rehabilitative medicine and machine interface design. |
Two/Too Simple Adaptations of Word2Vec for Syntax Problems | We present two simple modifications to the models in the popular Word2Vec tool, in order to generate embeddings more suited to tasks involving syntax. The main issue with the original models is the fact that they are insensitive to word order. While order independence is useful for inducing semantic representations, this leads to suboptimal results when they are used to solve syntax-based problems. We show improvements in part-ofspeech tagging and dependency parsing using our proposed models. |
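A sketch of the order-sensitivity idea, assuming (as an illustration, not the released tool) that each relative context position gets its own output parameters, so reordered contexts yield different predictions; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, window = 1000, 50, 2                    # vocab size, dim, one-side window

emb = rng.normal(0.0, 0.1, (V, d))            # input word embeddings
out = rng.normal(0.0, 0.1, (2 * window, d, V))  # one output block per position

def score_center(context_ids):
    """Position-aware scores for the center word over the whole vocabulary."""
    s = np.zeros(V)
    for pos, w in enumerate(context_ids):     # pos encodes the relative order
        s += emb[w] @ out[pos]
    return s

ctx = [12, 7, 98, 3]                          # two words left, two words right
s = score_center(ctx)
probs = np.exp(s - s.max())
probs /= probs.sum()                          # softmax over the vocabulary
print(int(probs.argmax()))

# Reversing the context changes the scores, unlike order-insensitive CBOW:
print(np.allclose(score_center(ctx), score_center(ctx[::-1])))  # False
```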
WebWatcher: A Tour Guide for the World Wide Web | We explore the notion of a tour guide software agent for assisting users browsing the World Wide Web. A Web tour guide agent provides assistance similar to that provided by a human tour guide in a museum: it guides the user along an appropriate path through the collection, based on its knowledge of the user's interests, of the location and relevance of various items in the collection, and of the way in which others have interacted with the collection in the past. This paper describes a simple but operational tour guide, called WebWatcher, which has given over 5000 tours to people browsing CMU's School of Computer Science Web pages. WebWatcher accompanies users from page to page, suggests appropriate hyperlinks, and learns from experience to improve its advice-giving skills. We describe the learning algorithms used by WebWatcher, experimental results showing their effectiveness, and lessons learned from this case study in Web tour guide agents. |
Stock Volatility Prediction Using Recurrent Neural Networks with Sentiment Analysis | In this paper, we propose a model to analyze the sentiment of an online stock forum and use the information to predict stock volatility in the Chinese market. We have labeled the sentiment of the online financial posts and made the dataset publicly available for research. By generating a sentiment dictionary based on financial terms, we develop a model to compute the sentiment score of each online post related to a particular stock. Such sentiment information is represented by two sentiment indicators, which are fused with market data for stock volatility prediction using recurrent neural networks (RNNs). An empirical study shows that, compared to using an RNN only, the model performs significantly better with the sentiment indicators. |
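A minimal sketch of the fusion step, assuming (as an illustration, not the paper's exact inputs) that each trading day contributes a vector of market features concatenated with the two sentiment indicators, fed to an LSTM; the class name VolatilityRNN and all sizes are invented for the example.

```python
import torch
import torch.nn as nn

class VolatilityRNN(nn.Module):
    """Fuse market features with sentiment indicators, then predict
    the next day's volatility from the LSTM's final hidden state."""
    def __init__(self, market_dim=4, sentiment_dim=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(market_dim + sentiment_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, market, sentiment):      # both (batch, days, dim)
        x = torch.cat([market, sentiment], dim=-1)   # per-day feature fusion
        out, _ = self.lstm(x)
        return self.head(out[:, -1])           # next-day volatility estimate

model = VolatilityRNN()
market = torch.randn(8, 30, 4)                 # e.g. daily price/volume features
sentiment = torch.randn(8, 30, 2)              # the two sentiment indicators
print(model(market, sentiment).shape)          # torch.Size([8, 1])
```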
The alkaloids of Banisteriopsis caapi, the plant source of the Amazonian hallucinogen Ayahuasca, stimulate adult neurogenesis in vitro | Banisteriopsis caapi is the basic ingredient of ayahuasca, a psychotropic plant tea used in the Amazon for ritual and medicinal purposes, and by interested individuals worldwide. Animal studies and recent clinical research suggest that B. caapi preparations show antidepressant activity, a therapeutic effect that has been linked to hippocampal neurogenesis. Here we report that harmine, tetrahydroharmine and harmaline, the three main alkaloids present in B. caapi, and the harmine metabolite harmol, stimulate adult neurogenesis in vitro. In neurospheres prepared from progenitor cells obtained from the subventricular and the subgranular zones of adult mice brains, all compounds stimulated neural stem cell proliferation, migration, and differentiation into adult neurons. These findings suggest that modulation of brain plasticity could be a major contribution to the antidepressant effects of ayahuasca. They also expand the potential application of B. caapi alkaloids to other brain disorders that may benefit from stimulation of endogenous neural precursor niches. |
Comparison of radial vibration forces in 10-pole/12-slot fractional slot surface-mounted and interior PM brushless AC machines | This paper compares the radial vibration forces in 10-pole/12-slot fractional-slot SPM and IPM machines which are designed to produce the same output torque and employ an identical stator but different rotor topologies: SPM, V-shape IPM, and arc-shape IPM. The airgap field and the radial vibration force density distribution as a function of angular position, together with the corresponding space harmonics (vibration modes), are analysed using the finite element method together with the frozen permeability technique. It is shown that not only is the lowest harmonic of radial force in the IPM machines much higher, but the (2p)th harmonic of radial force in the IPM machines is also higher than that in the SPM machine. |
Improving Estimation Accuracy using Better Similarity Distance in Analogy-based Software Cost Estimation | Software cost estimation nowadays plays an increasingly important role in practical projects, since modern software projects have become more complex as well as diverse. To help estimate software development cost accurately, this research conducts a systematic analysis of the similarity distances used in analogy-based software cost estimation and, based on this, proposes a new non-orthogonal space distance (NoSD) as a measure of the similarity between real software projects. Different from currently adopted measures such as the Euclidean distance, this non-orthogonal space distance not only allows different features to have different importance for cost estimation, but also assumes project features to have non-orthogonal, dependent relationships, which are treated as independent of each other in the Euclidean distance. Based on these assumptions, the NoSD method describes the non-orthogonal angles between feature axes using feature redundancy and represents the feature weights using feature relevance, where both redundancy and relevance are defined in terms of mutual information. It can thus better reveal the real dependency relationships between real-life software projects. Experiments show that it brings up to a 13.1% decrease in MMRE and a 12.5% increase in PRED(0.25) on the ISBSG R8 dataset, and 7.5% and 20.5%, respectively, on the Desharnais dataset. Furthermore, to better fit the complex data distribution of real-life software project data, this research leverages the particle swarm optimization algorithm to optimize the proposed non-orthogonal space distance and proposes a PSO-optimized non-orthogonal space distance (PsoNoSD), which brings further improvement in estimation accuracy. As shown in experiments, compared with the commonly used Euclidean distance, PsoNoSD improves the estimation accuracy by 38.73% and 11.59% in terms of MMRE and PRED(0.25) on the ISBSG R8 dataset. On the Desharnais dataset, the improvements are 23.38% and 24.94%, respectively. In summary, the new methods proposed in this research, which are based on theoretical study as well as systematic experiments, solve some problems of currently used techniques and show a notable ability to improve software cost estimation accuracy. |
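A sketch of a non-orthogonal weighted distance of the kind described, where relevance weights the axes and redundancy tilts the angles between them; the mutual-information values below are placeholders rather than estimates from project data, and the exact construction is illustrative, not the paper's formula.

```python
import numpy as np

relevance = np.array([0.9, 0.5, 0.2])          # I(feature; cost), placeholder values
redundancy = np.array([[1.0, 0.4, 0.1],        # pairwise I(f_i; f_j), acting as
                       [0.4, 1.0, 0.3],        #   cosines of the angles between axes
                       [0.1, 0.3, 1.0]])

w = np.sqrt(relevance)
M = np.outer(w, w) * redundancy                # metric tensor of the tilted space

def nosd(x, y):
    """Generalized (Mahalanobis-like) distance in the non-orthogonal space."""
    d = x - y
    return np.sqrt(d @ M @ d)

a = np.array([3.0, 1.0, 0.5])                  # two projects' feature vectors
b = np.array([2.0, 1.5, 0.4])
print(nosd(a, b))
```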
A 3D Printer for Interactive Electromagnetic Devices | We introduce a new form of low-cost 3D printer to print interactive electromechanical objects with wound in place coils. At the heart of this printer is a mechanism for depositing wire within a five degree of freedom (5DOF) fused deposition modeling (FDM) 3D printer. Copper wire can be used with this mechanism to form coils which induce magnetic fields as a current is passed through them. Soft iron wire can additionally be used to form components with high magnetic permeability which are thus able to shape and direct these magnetic fields to where they are needed. When fabricated with structural plastic elements, this allows simple but complete custom electromagnetic devices to be 3D printed. As examples, we demonstrate the fabrication of a solenoid actuator for the arm of a Lucky Cat figurine, a 6-pole motor stepper stator, a reluctance motor rotor and a Ferrofluid display. In addition, we show how printed coils which generate small currents in response to user actions can be used as input sensors in interactive devices. |
Smartwatch-Based Keystroke Inference Attacks and Context-Aware Protection Mechanisms | Wearable devices, such as smartwatches, are furnished with state-of-the-art sensors that enable a range of context-aware applications. However, malicious applications can misuse these sensors, if access is left unaudited. In this paper, we demonstrate how applications that have access to motion or inertial sensor data on a modern smartwatch can recover text typed on an external QWERTY keyboard. Due to the distinct nature of the perceptible motion sensor data, earlier research efforts on emanation based keystroke inference attacks are not readily applicable in this scenario. The proposed novel attack framework characterizes wrist movements (captured by the inertial sensors of the smartwatch worn on the wrist) observed during typing, based on the relative physical position of keys and the direction of transition between pairs of keys. Eavesdropped keystroke characteristics are then matched to candidate words in a dictionary. Multiple evaluations show that our keystroke inference framework has an alarmingly high classification accuracy and word recovery rate. With the information recovered from the wrist movements perceptible by a smartwatch, we exemplify the risks associated with unaudited access to seemingly innocuous sensors (e.g., accelerometers and gyroscopes) of wearable devices. As part of our efforts towards preventing such side-channel attacks, we also develop and evaluate a novel context-aware protection framework which can be used to automatically disable (or downgrade) access to motion sensors, whenever typing activity is detected. |
Pathophysiological levels of soluble P-selectin mediate adhesion of leukocytes to the endothelium through Mac-1 activation. | Plasma soluble P-selectin (sP-selectin) levels are increased in pathologies associated with atherosclerosis, including peripheral arterial occlusive disease (PAOD). However, the role of sP-selectin in regulating leukocyte-endothelial adhesion is unclear. The aim of this study was to assess the ability of exogenous and endogenous sP-selectin to induce leukocyte responses that promote their adhesion to various forms of endothelium. In flow chamber assays, sP-selectin dose-dependently increased neutrophil adhesion to resting human iliac artery endothelial cells. Similarly, sP-selectin induced neutrophil adhesion to the endothelial surface of murine aortae and human radial venous segments in ex vivo flow chamber experiments. Using intravital microscopy to examine postcapillary venules in the mouse cremaster muscle, in vivo administration of sP-selectin was also found to significantly increase leukocyte rolling and adhesion in unstimulated postcapillary venules. Using a Mac-1-specific antibody and P-selectin knockout mouse, it was demonstrated that this finding was dependent on a contribution of Mac-1 to leukocyte rolling and endothelial P-selectin expression. This was confirmed in an ex vivo perfusion model using viable mouse aorta and human radial vessels. In contrast, with tumor necrosis factor-alpha-activated endothelial cells and intact endothelium, where neutrophil adhesion was already elevated, sP-selectin failed to further increase adhesion. Plasma samples from PAOD patients containing pathologically elevated concentrations of sP-selectin also increased neutrophil adhesion to the endothelium in a sP-selectin-dependent manner, as demonstrated by immunodepletion of sP-selectin. These studies demonstrate that raised plasma sP-selectin may influence the early progression of vascular disease by promoting leukocyte adhesion to the endothelium in PAOD, through Mac-1-mediated rolling and dependent on endothelial P-selectin expression. |
ThingPot: an interactive Internet-of-Things honeypot | The Mirai Distributed Denial-of-Service (DDoS) attack exploited security vulnerabilities of Internet-of-Things (IoT) devices and thereby clearly signaled that attackers have IoT on their radar. Securing IoT is therefore imperative, but in order to do so it is crucial to understand the strategies of such attackers. For that purpose, in this paper, a novel IoT honeypot called ThingPot is proposed and deployed. Honeypot technology mimics devices that might be exploited by attackers and logs their behavior to detect and analyze the used attack vectors. ThingPot is the first of its kind, since it focuses not only on the IoT application protocols themselves, but on the whole IoT platform. A Proof-of-Concept is implemented with XMPP and a REST API, to mimic a Philips Hue smart lighting system. ThingPot has been deployed for 1.5 months and through the captured data we have found five types of attacks and attack vectors against smart devices. The ThingPot source code is made available as open source. |
Online shopping post-payment dissonance: Dissonance reduction strategy using online consumer social experiences | This study applied the concept of online consumer social experiences (OCSEs) to reduce online shopping post-payment dissonance (i.e., dissonance occurring between online payment and product receipt). Two types of OCSEs were developed: indirect social experiences (IDSEs) and virtual social experiences (VSEs). Two studies were conducted, in which 447 college students were enrolled. Study 1 compared the effects of OCSEs and non-OCSEs when online shopping post-payment dissonance occurred. The results indicate that providing consumers affected by online shopping post-payment dissonance with OCSEs reduces dissonance and produces higher satisfaction, higher repurchase intention, and lower complaint intention than when no OCSEs are provided. In addition, consumers’ interpersonal trust (IPT) and susceptibility to interpersonal informational influence (SIII) moderated the positive effects of OCSEs. Study 2 compared the effects of IDSEs and VSEs when online shopping post-payment dissonance occurred. The results suggest that the effects of IDSEs and VSEs on satisfaction, repurchase intention, and complaint intention are moderated by consumers’ computing need for control (CNC) and computer-mediated communication apprehension (CMCA). The consumers with high CNC and low CMCA preferred VSEs, whereas the consumers with low CNC and high CMCA preferred IDSEs. The effects of VSEs and IDSEs on consumers with high CNC and CMCA and those with low CNC and CMCA were not significantly different. |
Exercise with blood flow restriction: an updated evidence-based approach for enhanced muscular development. | A growing body of evidence supports the use of moderate blood flow restriction (BFR) combined with low-load resistance exercise to enhance hypertrophic and strength responses in skeletal muscle. Research also suggests that BFR during low-workload aerobic exercise can result in small but significant morphological and strength gains, and BFR alone may attenuate atrophy during periods of unloading. While BFR appears to be beneficial for both clinical and athletic cohorts, there is currently no common consensus amongst scientists and practitioners regarding the best practice for implementing BFR methods. If BFR is not employed appropriately, there is a risk of injury to the participant. It is also important to understand how variations in the cuff application can affect the physiological responses and subsequent adaptation to BFR training. The optimal way to manipulate acute exercise variables, such as exercise type, load, volume, inter-set rest periods and training frequency, must also be considered prior to designing a BFR training programme. The purpose of this review is to provide an evidence-based approach to implementing BFR exercise. These guidelines could be useful for practitioners using BFR training in either clinical or athletic settings, or for researchers in the design of future studies investigating BFR exercise. |
Forgeries of Fingerprints in Forensic Science | The objective of this chapter is to provide an account of the considerations made in forensic science regarding issues associated with potential forgeries of fingerprints. We will start with a clarification of terms and define the production of forgeries and the fabrication of evidence based on fingerprints. A short historical account will be given to highlight that the issues raised coincide with the early days of fingerprinting. Various methods for the production of forged fingers as published in the forensic literature will then be exposed, distinguishing the techniques requiring the cooperation of the donor from the techniques without the cooperation of the donor. Examples of the various types of forgeries with associated images will be shown. The ability of forensic experts to distinguish between genuine marks and fakes will then be discussed. Although these are manual inspection techniques, they may also provide a reference to biometrics practitioners in their development of computerised techniques. |
Deep Fusion Net for Multi-atlas Segmentation: Application to Cardiac MR Images | Atlas selection and label fusion are two major challenges in multi-atlas segmentation. In this paper, we propose a novel deep fusion net for better solving these challenges. The deep fusion net is a deep architecture formed by concatenating a feature extraction subnet and a non-local patch-based label fusion (NL-PLF) subnet in a single network. This network is trained end-to-end to automatically learn deep features that achieve optimal performance in an NL-PLF framework. The learned deep features are further utilized in defining a similarity measure for atlas selection. Experimental results on cardiac MR images for left ventricular segmentation demonstrate that our approach is effective in both atlas selection and multi-atlas label fusion, and achieves state-of-the-art performance. |
Learning from Positive and Unlabeled Examples with Different Data Distributions | We study the problem of learning from positive and unlabeled examples. Although several techniques exist for dealing with this problem, they all assume that positive examples in the positive set P and the positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. For example, one wants to collect all printer pages from the Web. One can use the printer pages from one site as the set P of positive pages and use product pages from another site as U. One wants to classify the pages in U into printer pages and non-printer pages. Although printer pages from the two sites have many similarities, they can also be quite different because different sites often present similar products in different styles and have different focuses. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experiment results with product page classification demonstrate the effectiveness of the proposed technique. |
Leveraging semantic signatures for bug search in binary programs | Software vulnerabilities still constitute a high security risk and there is an ongoing race to patch known bugs. However, especially in closed-source software, there is no straightforward way (in contrast to source code analysis) to find buggy code parts, even if the bug was publicly disclosed.
To tackle this problem, we propose a method called Tree Edit Distance Based Equational Matching (TEDEM) to automatically identify binary code regions that are "similar" to code regions containing a reference bug. We aim to find bugs both in the same binary as the reference bug and in completely unrelated binaries (even compiled for different operating systems). Our method even works on proprietary software systems, which lack source code and symbols.
The analysis task is split into two phases. In a preprocessing phase, we condense the semantics of a given binary executable by symbolic simplification to make our approach robust against syntactic changes across different binaries. Second, we use tree edit distances as a basic-block-centric metric for code similarity. This allows us to find instances of the same bug in different binaries and even to spot its variants (a concept called vulnerability extrapolation). To demonstrate the practical feasibility of the proposed method, we implemented a prototype of TEDEM that can find real-world security bugs across binaries and even across OS boundaries, such as in MS Word and the popular messengers Pidgin (Linux) and Adium (Mac OS). |
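To make the block-centric similarity idea concrete, here is a minimal Python sketch that compares two expression trees. The Node structure, the labels, and the unit edit costs are hypothetical, and this simplified top-down matching only approximates the tree edit distance that TEDEM computes over symbolically simplified expressions.

    class Node:
        """Expression-tree node for a basic block, e.g. Node('add', [Node('reg'), Node('const')])."""
        def __init__(self, label, children=()):
            self.label = label
            self.children = tuple(children)

    def tree_dist(a, b):
        """Simplified ordered-tree edit distance: a relabel cost plus a
        sequence edit distance over the child lists, applied recursively."""
        if a is None and b is None:
            return 0
        if a is None:
            return 1 + sum(tree_dist(None, c) for c in b.children)
        if b is None:
            return 1 + sum(tree_dist(c, None) for c in a.children)
        cost = 0 if a.label == b.label else 1
        ca, cb = a.children, b.children
        # Classic edit-distance DP over the two child sequences.
        d = [[0] * (len(cb) + 1) for _ in range(len(ca) + 1)]
        for i in range(1, len(ca) + 1):
            d[i][0] = d[i - 1][0] + tree_dist(ca[i - 1], None)
        for j in range(1, len(cb) + 1):
            d[0][j] = d[0][j - 1] + tree_dist(None, cb[j - 1])
        for i in range(1, len(ca) + 1):
            for j in range(1, len(cb) + 1):
                d[i][j] = min(d[i - 1][j] + tree_dist(ca[i - 1], None),
                              d[i][j - 1] + tree_dist(None, cb[j - 1]),
                              d[i - 1][j - 1] + tree_dist(ca[i - 1], cb[j - 1]))
        return cost + d[len(ca)][len(cb)]

    # Two basic blocks whose expressions differ in a single operand kind.
    blk1 = Node('store', [Node('add', [Node('reg'), Node('const')])])
    blk2 = Node('store', [Node('add', [Node('reg'), Node('reg')])])
    print(tree_dist(blk1, blk2))  # 1: a single relabel suffices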
Animal model and neurobiology of suicide | Animal models are formidable tools to investigate the etiology, the course and the potential treatment of an illness. No convincing animal model of suicide has been produced to date, and despite the intensive study of thousands of animal species, naturalists have not identified suicide in nonhuman species in field situations. When modeling suicidal behavior in animals, the greatest challenge is reproducing the role of will and intention in the mechanics of suicide. To overcome this limitation, current investigations in animals focus on every single step leading to suicide in humans. The most promising endophenotypes worth investigating in animals are the cortisol social-stress response and the aggression/impulsivity trait, involving the serotonergic system. Astroglia, neurotrophic factors and neurotrophins are implicated in suicide as well. The prevention of suicide rests on the identification and treatment of every element increasing the risk. |
How to reach the dynamic limits of parallel robots? An autonomous control approach | Based on closed kinematic chains, parallel robots obtain favorable dynamic properties as well as high stiffness. Hence, their application can significantly increase the productivity of automated production processes. A control concept for tapping the high potential concerning low cycle times and high path-tracking accuracy is presented. The proposed approach adapts autonomously to changing dynamic parameters such as varying payload. The autonomous behavior is achieved by combining an adaptive control approach with an adaptive, time-optimal trajectory planning concept and an online trajectory adaption mechanism. Extensive experimental results prove the performance of the proposed approach. Note to Practitioners - Many applications in the field of production automation (material handling, assembly, etc.) require high operating speeds and accelerations. During the past years, parallel robots proved to be an efficient and suitable supplement to serial robots. Unfortunately, the promising possibilities of parallel robots often cannot yield profit because their dynamic potential is still not fully exploited. The payload/robot mass ratio of parallel structures is even higher compared to serial robots, where the influence of the payload on the impedance of the robot is negligible. With direct drives, the influence of a variable payload cannot be ignored. A modified adaptive control concept, which adapts autonomously to changing dynamic parameters such as payload varying with the diversity of assembly processes, guarantees high tracking accuracy as well as accurate estimates of the changing dynamic parameters, and therefore better process quality. In addition, the productivity of the process can be increased if the full drive power can be used at each point on the path. Thus, a new adaptive time-optimal trajectory planning algorithm is used to exploit the dynamic potential of the direct drives and consequently to shorten the cycle times. The aim of time-optimal trajectory planning, as it is commonly understood, is the determination of the maximum velocity profile along a given path that complies with all given dynamic and kinematic robot constraints, such as limited drive forces/torques, limited path and/or drive velocities, and limited path jerk. Combining the adaptive control scheme and the adaptive, time-optimal trajectory planning algorithm with an online trajectory adaption mechanism, a control concept is realized which autonomously adapts to changing dynamic robot behavior. Using this new approach, the advantages of parallel robots - as well as serial robots with direct drives - can be better utilized. This is a necessary prerequisite for a wider adoption of PKMs in industrial applications. |
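The forward-backward pass below is a minimal sketch of the kind of maximum-velocity-profile computation described above: a per-point velocity cap (standing in for drive-force and curvature constraints) is tightened so that acceleration and deceleration limits hold along the discretised path. The constant acceleration bound and all numbers are illustrative assumptions, not the paper's robot model.

    import numpy as np

    def max_velocity_profile(ds, v_limit, a_max):
        """Forward-backward pass computing the fastest velocity profile along a
        discretised path. ds: segment lengths; v_limit: per-point velocity caps;
        a_max: symmetric acceleration bound (an illustrative simplification)."""
        n = len(v_limit)
        v = np.array(v_limit, dtype=float)
        v[0] = 0.0                      # start at rest
        for i in range(1, n):           # forward: respect the acceleration limit
            v[i] = min(v[i], np.sqrt(v[i - 1]**2 + 2 * a_max * ds[i - 1]))
        v[-1] = 0.0                     # end at rest
        for i in range(n - 2, -1, -1):  # backward: respect the deceleration limit
            v[i] = min(v[i], np.sqrt(v[i + 1]**2 + 2 * a_max * ds[i]))
        return v

    ds = np.full(99, 0.01)          # 1 m path split into 1 cm segments
    caps = np.full(100, 2.0)        # hypothetical 2 m/s drive velocity limit
    print(max_velocity_profile(ds, caps, a_max=5.0).max())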
A Study of MAC Protocols for WBANs | The seamless integration of low-power, miniaturised, invasive/non-invasive, lightweight sensor nodes has contributed to the development of a proactive and unobtrusive Wireless Body Area Network (WBAN). A WBAN provides long-term health monitoring of a patient without any constraint on his/her normal daily-life activities. This monitoring requires the low-power operation of invasive/non-invasive sensor nodes. In other words, a power-efficient Medium Access Control (MAC) protocol is required to satisfy the stringent WBAN requirements, including low-power consumption. In this paper, we first outline the WBAN requirements that are important for the design of a low-power MAC protocol. Then we study low-power MAC protocols proposed/investigated for a WBAN with emphasis on their strengths and weaknesses. We also review different power-efficient mechanisms for a WBAN. In addition, useful suggestions are given to help MAC designers develop a low-power MAC protocol that will satisfy the stringent requirements. |
Enhancing the traditional hospital design process: a focus on patient safety. | BACKGROUND
In 2002 St. Joseph's Community Hospital (West Bend, WI), a member of SynergyHealth, brought together leaders in health care and systems engineering to develop a set of safety-driven facility design principles that would guide the hospital design process. DESIGNING FOR SAFETY: Hospital leadership recognized that a cross-departmental team approach would be needed and formed the 11-member Facility Design Advisory Council, which, with departmental teams and the aid of architects, was responsible for overseeing the design process and for ensuring that the safety considerations were met. The design process was a team approach, with input from national experts, patients and families, hospital staff and physicians, architects, contractors, and the community.
OUTCOME
The new facility, designed using safety-driven design principles, reflects many innovative design elements, including truly standardized patient rooms, new technology to minimize falls, and patient care alcoves for every patient room. The new hospital has been designed with maximum adaptability and flexibility in mind, to accommodate changes and provide for future growth. The architects labeled the innovative design "The Synergy Model" to describe the process of shaping the entire building and its spaces to work efficiently as a whole for the care and safety of patients.
CONCLUSION
Construction of the new facility began in August 2003, with completion expected in 2005. |
Despeckle Filtering for Multiscale Amplitude-Modulation Frequency-Modulation (AM-FM) Texture Analysis of Ultrasound Images of the Intima-Media Complex | The intima-media thickness (IMT) of the common carotid artery (CCA) is widely used as an early indicator of cardiovascular disease (CVD). Typically, the IMT grows with age and this is used as a sign of increased risk of CVD. Beyond thickness, there is also clinical interest in identifying how the composition and texture of the intima-media complex (IMC) change and how these textural changes grow into atherosclerotic plaques that can cause stroke. Because texture analysis of ultrasound images can be greatly affected by speckle noise, our goal here is to develop effective despeckling methods that can recover the image texture associated with increased rates of atherosclerosis disease. In this study, we perform a comparative evaluation of several despeckle filtering methods, on 100 ultrasound images of the CCA, based on the extracted multiscale Amplitude-Modulation Frequency-Modulation (AM-FM) texture features and visual image quality assessment by two clinical experts. Texture features were extracted from the automatically segmented IMC for three different age groups. The hybrid median filter and the homogeneous mask area filter showed the best performance, improving the class separation between the three age groups and yielding significantly improved image quality. |
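For readers unfamiliar with the hybrid median filter singled out above, a minimal sketch follows. It uses a 3x3 neighbourhood and a random array as a stand-in image, which may differ from the exact filter configuration evaluated in the study.

    import numpy as np

    def hybrid_median_3x3(img):
        """Hybrid median filter: the median of (cross median, diagonal median,
        centre pixel). Border pixels are left unfiltered for brevity; a toy
        despeckling sketch, not the study's exact configuration."""
        out = img.astype(float).copy()
        for r in range(1, img.shape[0] - 1):
            for c in range(1, img.shape[1] - 1):
                centre = img[r, c]
                cross = [img[r - 1, c], img[r + 1, c], img[r, c - 1], img[r, c + 1], centre]
                diag = [img[r - 1, c - 1], img[r - 1, c + 1], img[r + 1, c - 1], img[r + 1, c + 1], centre]
                out[r, c] = np.median([np.median(cross), np.median(diag), centre])
        return out

    noisy = np.random.rand(64, 64)  # stand-in for an ultrasound patch
    print(hybrid_median_3x3(noisy).shape)

Unlike a plain median, this construction preserves thin lines and corners, which is one reason it is attractive for texture-sensitive work such as IMC analysis.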
Tar formation in pyrolysis and gasification | This report summarises knowledge from the open literature on the reactivity of tars during pyrolysis and gasification of biomass. The mechanisms of the chemical reactions involved are also considered. The goal of this summary is to make the knowledge accessible not only to ECN but also to a broader community, and to help in the development of both producer gas cleaning technology and innovative gasification processes. Gaseous biomass tars can react under inert conditions (thermal cracking) or with components in the producer gas such as H2, H2O or CO2 (gasification). The rate of thermal cracking is such that high temperatures of approximately 1200°C or higher (also depending on residence time) are needed to create a producer gas with low tar concentrations. The rate of thermal cracking depends on the kind of tar; it decreases in the series: biomass pyrolysis oils/tars > phenolic tar compounds (phenol, cresol, naphthol) > pyrolysis tars from coal > polycyclic aromatic tar compounds (anthracene, phenanthrene, naphthalene, benzene). The rate of thermal cracking also depends on the atmosphere in which the tars are cracked, because the gas phase components H2, H2O and CO2 play a role in the cracking reactions. H2O and/or CO2 increase the decomposition rate of tars, whereas H2 decreases this rate. The aromatic rings of tars can also be hydrogenated, which only occurs under hydrogasification conditions at high partial pressures of H2. This leads to the production of CH4. Radical reactions are the main reactions in the mechanism of tar decomposition and methane formation, and radical formation is the rate-determining step in this mechanism. After radical formation, the composition of the gas phase determines the final products of the tar decomposition. |
On the Practicality of Cryptographically Enforcing Dynamic Access Control Policies in the Cloud | The ability to enforce robust and dynamic access controls on cloud-hosted data while simultaneously ensuring confidentiality with respect to the cloud itself is a clear goal for many users and organizations. To this end, there has been much cryptographic research proposing the use of (hierarchical) identity-based encryption, attribute-based encryption, predicate encryption, functional encryption, and related technologies to perform robust and private access control on untrusted cloud providers. However, the vast majority of this work studies static models in which the access control policies being enforced do not change over time. This is contrary to the needs of most practical applications, which leverage dynamic data and/or policies. In this paper, we show that the cryptographic enforcement of dynamic access controls on untrusted platforms incurs computational costs that are likely prohibitive in practice. Specifically, we develop lightweight constructions for enforcing role-based access controls (i.e., RBAC0) over cloud-hosted files using identity-based and traditional public-key cryptography. This is done under a threat model as close as possible to the one assumed in the cryptographic literature. We prove the correctness of these constructions, and leverage real-world RBAC datasets and recent techniques developed by the access control community to experimentally analyze, via simulation, their associated computational costs. This analysis shows that supporting revocation, file updates, and other state change functionality is likely to incur prohibitive overheads in even minimally-dynamic, realistic scenarios. We identify a number of bottlenecks in such systems, and fruitful areas for future work that will lead to more natural and efficient constructions for the cryptographic enforcement of dynamic access controls. Our findings naturally extend to the use of more expressive cryptographic primitives (e.g., HIBE or ABE) and richer access control models (e.g., RBAC1 or ABAC). |
Meditation states and traits: EEG, ERP, and neuroimaging studies. | Neuroelectric and imaging studies of meditation are reviewed. Electroencephalographic measures indicate an overall slowing subsequent to meditation, with theta and alpha activation related to proficiency of practice. Sensory evoked potential assessment of concentrative meditation yields amplitude and latency changes for some components and practices. Cognitive event-related potential evaluation of meditation implies that practice changes attentional allocation. Neuroimaging studies indicate increased regional cerebral blood flow measures during meditation. Taken together, meditation appears to reflect changes in anterior cingulate cortex and dorsolateral prefrontal areas. Neurophysiological meditative state and trait effects are variable but are beginning to demonstrate consistent outcomes for research and clinical applications. Psychological and clinical effects of meditation are summarized, integrated, and discussed with respect to neuroimaging data. |
Effects of smoking cessation or alcohol restriction on metabolic and fibrinolytic variables in Japanese men. | We investigated the effects of smoking cessation or alcohol restriction on metabolic and fibrinolytic variables in Japanese men. In the smoking study, 35 male subjects [32+/-1 (S.E.M.) years] who habitually smoked cigarettes (29+/-3 cigarettes/day) were told either to keep their usual smoking habits for 1 week, or to abstain from cigarette smoking, using a randomized crossover design. In the alcohol study, 33 male subjects (37+/-1 years) who habitually drank alcohol (64+/-6 ml of ethanol/day) were told either to keep their usual drinking habits for 3 weeks, or to reduce alcohol intake by at least up to a half of their usual drinking amount, using a randomized crossover design. In each study, venous blood samples were drawn after a 12-h overnight fast on the last day of each period, and metabolic and fibrinolytic variables were measured. One-week smoking cessation significantly increased serum high-density lipoprotein (HDL) cholesterol levels (P<0.05), and significantly decreased serum lipoprotein (a) levels (P<0.01) and plasma plasminogen activator inhibitor-1 levels (P<0.05). In contrast, 3-week alcohol restriction significantly decreased serum HDL cholesterol levels (P<0.05) and plasma tissue plasminogen activator levels (P<0.05). These results suggest that smoking cessation has substantial and immediate benefits on lipid and fibrinolytic variables in habitual smokers, whereas alcohol restriction increases cardiovascular risks, in some respects, in habitual drinkers. |
Implementation of Smart Contracts Using Hybrid Architectures with On and Off-Blockchain Components | Decentralised (on-blockchain) and centralised (off-blockchain) platforms are available for the implementation of smart contracts. However, neither of the two alternatives can individually provide the services and quality of service (QoS) required of smart contracts in a large class of applications. The reason is that blockchain platforms suffer from scalability, performance, transaction cost and other limitations. Likewise, off-blockchain platforms are afflicted by drawbacks emerging from their dependence on single trusted third parties. We argue that in several applications, hybrid platforms composed from the integration of on- and off-blockchain platforms are more adequate. Developers who make an informed choice among the three alternatives are likely to implement smart contracts that deliver the expected QoS. Hybrid architectures are largely unexplored. To help cover the gap and as a proof of concept, in this paper we discuss the implementation of smart contracts on hybrid architectures. We show how a smart contract can be split and executed partially on an off-blockchain contract compliance checker and partially on the Rinkeby Ethereum network. To test the solution, we expose it to sequences of contractual operations generated mechanically by a contract validator tool. |
Somatic mutations of the APC gene in colorectal tumors: mutation cluster region in the APC gene. | We examined somatic mutations of the adenomatous polyposis coli (APC) gene in 63 colorectal tumors (16 adenomas and 47 carcinomas) developed in familial adenomatous polyposis (FAP) and non-FAP patients. In addition to loss of heterozygosity (LOH) at the APC locus in 30 tumors, 43 other somatic mutations were detected. Twenty-one of them were point mutations; 16 nonsense and two missense mutations, and three occurred in introns at the splicing site. Twenty-two tumors had frameshift mutations due to deletion or insertion; nineteen of them were deletions of one to 31 bp and three were a 1-bp insertion. One tumor had a 1-bp deletion in an intron near the splicing site. Hence, 41 (95%) of 43 mutations resulted in truncation of the APC protein. Over 60% of the somatic mutations in the APC gene were clustered within a small region of exon 15, designated as MCR (mutation cluster region), which accounted for less than 10% of the coding region. Combining these data and the results of LOH, more than 80% of tumors (14 adenomas and 39 carcinomas) had at least one mutation in the APC gene, of which more than 60% (9 adenomas and 23 carcinomas) had two mutations. These results strongly suggest that somatic mutations of the APC gene are associated with development of a great majority of colorectal tumors. |
The level and nature of autistic intelligence. | Autistics are presumed to be characterized by cognitive impairment, and their cognitive strengths (e.g., in Block Design performance) are frequently interpreted as low-level by-products of high-level deficits, not as direct manifestations of intelligence. Recent attempts to identify the neuroanatomical and neurofunctional signature of autism have been positioned on this universal, but untested, assumption. We therefore assessed a broad sample of 38 autistic children on the preeminent test of fluid intelligence, Raven's Progressive Matrices. Their scores were, on average, 30 percentile points, and in some cases more than 70 percentile points, higher than their scores on the Wechsler scales of intelligence. Typically developing control children showed no such discrepancy, and a similar contrast was observed when a sample of autistic adults was compared with a sample of nonautistic adults. We conclude that intelligence has been underestimated in autistics. |
Geography of Twitter networks | We use a sample of publicly available data on Twitter to study networks of mostly weak asymmetric ties. We show that a substantial share of ties lie within the same metropolitan region. As we examine ties between regional clusters, we find that distance, national borders and differences in language all affect the pattern of ties. However, Twitter connections show a more substantial correlation with the network of airline flights, highlighting the importance of looking not just at distance but at pre-existing ties between places. |
Quality and Trustworthiness in Qualitative Research in Counseling Psychology | This article examines concepts of the trustworthiness, or credibility, of qualitative research. Following a “researcher-as-instrument,” or self-reflective, statement, the paradigmatic underpinnings of various criteria for judging the quality of qualitative research are explored, setting the stage for a discussion of more transcendent standards (those not associated with specific paradigms) for conducting quality research: social validity, subjectivity and reflexivity, adequacy of data, and adequacy of interpretation. Finally, current guidelines for writing and publishing qualitative research are reviewed, and strategies for conducting and writing qualitative research reports are suggested. |
Mathematical model and control strategy of a two-wheeled self-balancing robot | In this paper a control strategy and sensor concept for a two-wheeled self-balancing robot is proposed. First, a mathematical model of the robot is derived using Lagrangian mechanics. Based on the model, a full state feedback controller, in combination with two higher-level controllers, is deployed for stabilization and drive control. A gyroscope, an accelerometer and rotational encoders are used for state determination, introducing a new method of measurement data fusion for the accelerometer and the gyroscope that uses a drift compensation controller. Furthermore, measurement procedures for the model parameters of a real prototype robot are suggested and the control for this robot is designed. The proposed mathematical model, as well as the control strategy, are then verified by comparing the behavior of the constructed robot with model simulations. |
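A common way to fuse a drifting gyroscope with a noisy accelerometer is a complementary filter; the sketch below illustrates that general idea on synthetic signals. The fixed blending factor is an illustrative simplification: the paper instead uses a drift compensation controller for this task, so treat this only as background on the fusion problem.

    import numpy as np

    def complementary_filter(gyro_rate, accel_angle, dt=0.01, alpha=0.98):
        """Fuse gyro tilt (fast but drifting) with accelerometer tilt (noisy but
        drift-free). alpha trades drift rejection against noise; value is a
        typical illustrative choice, not taken from the paper."""
        angle = accel_angle[0]
        estimates = []
        for w, a in zip(gyro_rate, accel_angle):
            # Integrate the gyro rate, then pull the estimate toward the accelerometer.
            angle = alpha * (angle + w * dt) + (1 - alpha) * a
            estimates.append(angle)
        return np.array(estimates)

    t = np.arange(0, 5, 0.01)
    true_angle = 0.1 * np.sin(t)
    gyro = np.gradient(true_angle, 0.01) + 0.02           # rate with a constant bias
    accel = true_angle + 0.05 * np.random.randn(len(t))   # noisy absolute angle
    est = complementary_filter(gyro, accel)
    print(np.abs(est - true_angle).mean())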
Space-time video completion | We present a method for space-time completion of large space-time "holes" in video sequences of complex dynamic scenes. The missing portions are filled-in by sampling spatio-temporal patches from the available parts of the video, while enforcing global spatio-temporal consistency between all patches in and around the hole. This is obtained by posing the task of video completion and synthesis as a global optimization problem with a well-defined objective function. The consistent completion of static scene parts simultaneously with dynamic behaviors leads to realistic looking video sequences. Space-time video completion is useful for a variety of tasks, including, but not limited to: (i) Sophisticated video removal (of undesired static or dynamic objects) by completing the appropriate static or dynamic background information, (ii) Correction of missing/corrupted video frames in old movies, and (iii) Synthesis of new video frames to add a visual story, modify it, or generate a new one. Some examples of these are shown in the paper. |
Algorithms for the Longest Common Subsequence Problem | We start by defining conventions and terminology that will be used throughout this paper. String C = c1c2...cp is a subsequence of string A = a1a2...am if there is a mapping F: {1, 2, ..., p} -> {1, 2, ..., m} such that F(i) = k only if ci = ak and F is a monotone strictly increasing function (i.e. F(i) = u, F(j) = v, and i < j imply that u < v). C can be formed by deleting m - p (not necessarily adjacent) symbols from A. For example, "course" is a subsequence of "computer science." String C is a common subsequence of strings A and B if C is a subsequence of A and also a subsequence of B. String C is a longest common subsequence (abbreviated LCS) of strings A and B if C is a common subsequence of A and B of maximal length, i.e. there is no common subsequence of A and B that has greater length. Throughout this paper, we assume that A and B are strings of lengths m and n, m <= n, that have an LCS C of (unknown) length p. We assume that the symbols that may appear in these strings come from some alphabet of size t. A symbol can be stored in memory by using log t bits, which we assume will fit in one word of memory. Symbols can be compared (a <= b?) in one time unit. The number of different symbols that actually appear in string B is defined to be s (which must be less than n and t). The longest common subsequence problem has been solved by using a recursion relationship on the length of the solution [7, 12, 16, 21]. These are generally applicable algorithms that take O(mn) time for any input strings of lengths m and n, even though the lower bound on time of O(mn) need not apply to all inputs [2]. We present algorithms that, depending on the nature of the input, may not require quadratic time to recover an LCS. The first algorithm is applicable in the general case and requires O(pn + n log n) time. The second algorithm requires time bounded by O((m + 1 - p)p log n). In the common special case where p is close to m, this algorithm takes time |
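For reference, the classical O(mn) recursion that the paper improves upon can be written directly as a dynamic program. This is a textbook sketch, not the paper's own algorithms:

    def lcs(a, b):
        """Textbook O(mn) dynamic program; recovers one LCS by backtracking."""
        m, n = len(a), len(b)
        L = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                L[i][j] = L[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1] \
                          else max(L[i - 1][j], L[i][j - 1])
        # Walk back through the table to reconstruct one optimal subsequence.
        out, i, j = [], m, n
        while i and j:
            if a[i - 1] == b[j - 1]:
                out.append(a[i - 1]); i -= 1; j -= 1
            elif L[i - 1][j] >= L[i][j - 1]:
                i -= 1
            else:
                j -= 1
        return ''.join(reversed(out))

    print(lcs("course", "computer science"))  # "course", the paper's own example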
Fast linear iterations for distributed averaging | We consider the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., that asymptotically computes the average of some initial values given at the nodes. When the iteration is assumed symmetric, the problem of finding the fastest converging linear iteration can be cast as a semidefinite program, and therefore efficiently and globally solved. These optimal linear iterations are often substantially faster than several common heuristics that are based on the Laplacian of the associated graph. We show how problem structure can be exploited to speed up interior-point methods for solving the fastest distributed linear iteration problem, for networks with up to a thousand or so edges. We also describe a simple subgradient method that handles far larger problems, with up to one hundred thousand edges. We give several extensions and variations on the basic problem. |
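As a concrete baseline, the sketch below runs the linear iteration x(k+1) = W x(k) on a small path graph, with W built from Metropolis weights, a simple Laplacian-style heuristic of the kind the paper compares its SDP-optimised weights against. The graph and values are illustrative.

    import numpy as np

    def metropolis_weights(adj):
        """Symmetric averaging matrix from an adjacency matrix via Metropolis
        weights: W[i,j] = 1/(1 + max(deg_i, deg_j)) on edges, self-weight fills
        the remainder so each row sums to one."""
        n = len(adj)
        deg = adj.sum(axis=1)
        W = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if adj[i, j]:
                    W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
            W[i, i] = 1.0 - W[i].sum()
        return W

    adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])  # path graph
    W = metropolis_weights(adj)
    x = np.array([1.0, 2.0, 3.0, 4.0])
    for _ in range(200):
        x = W @ x          # each node repeatedly averages with its neighbours
    print(x)               # converges toward [2.5, 2.5, 2.5, 2.5]

The paper's contribution is precisely to replace such heuristic W with the provably fastest-converging symmetric choice via semidefinite programming.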
Text classification and named entities for new event detection | New Event Detection is a challenging task that still offers scope for great improvement after years of effort. In this paper we show how performance on New Event Detection (NED) can be improved by the use of text classification techniques as well as by using named entities in a new way. We explore modifications to the document representation in a vector space-based NED system. We also show that addressing named entities preferentially is useful only in certain situations. A combination of all the above results in a multi-stage NED system that performs much better than baseline single-stage NED systems. |
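The core vector-space NED decision can be sketched in a few lines: a document is declared a new event when its maximum cosine similarity to all previously seen documents falls below a threshold. The vectors and the threshold below are made-up stand-ins for the TF-IDF style representations a real system would use, and this omits the paper's classification and named-entity refinements.

    import numpy as np

    def is_new_event(doc_vec, history, threshold=0.5):
        """True when the document resembles nothing seen so far."""
        def cos(u, v):
            return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        return not history or max(cos(doc_vec, h) for h in history) < threshold

    history = [np.array([1.0, 0.0, 1.0, 0.0]),   # stand-ins for TF-IDF vectors
               np.array([0.0, 1.0, 1.0, 0.0])]
    print(is_new_event(np.array([1.0, 0.1, 0.9, 0.0]), history))  # False: an old story
    print(is_new_event(np.array([0.0, 0.0, 0.0, 1.0]), history))  # True: a new event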
A Conditional Random Field model for font forgery detection | Nowadays, document forgery is becoming a real issue. A large number of documents that contain critical information, such as payment slips, invoices or contracts, are constantly subject to fraudster manipulation because of the lack of security for this kind of document. Previously, a system to detect fraudulent documents based on their intrinsic features was presented. It was especially designed to detect copy-move forgery and imperfections due to fraudster manipulation. However, when a set of characters is not present in the original document, copy-move forgery is not feasible. Hence, the fraudster will use a text toolbox to add or modify information in the document by imitating the font, or will cut and paste characters from another document where the font properties are similar. This often results in font type errors. Thus, a clue to detecting document forgery consists of finding characters, words or sentences in a document with font properties different from their surroundings. To this end, we present in this paper an automatic forgery detection method based on document font features. Using a Conditional Random Field, a measurement of the probability that a character belongs to a specific font is made by comparing the character's font features to a knowledge database. The character is then classified as genuine or fake by comparing its probability of belonging to a certain font type with those of the neighboring characters. |
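A minimal sketch of the neighbourhood-consistency idea follows. Nearest-centroid scoring stands in for the paper's CRF probabilities, and the font prototypes, window size and margin are invented for illustration only.

    import numpy as np

    def flag_forged_chars(char_feats, font_db, window=2, margin=0.5):
        """Assign each character its closest font prototype, then flag characters
        whose best font disagrees with most of their neighbours' best fonts."""
        fonts = list(font_db)
        best = [min(fonts, key=lambda f: np.linalg.norm(x - font_db[f]))
                for x in char_feats]
        flags = []
        for i, f in enumerate(best):
            neigh = best[max(0, i - window):i] + best[i + 1:i + 1 + window]
            disagree = sum(g != f for g in neigh) / max(1, len(neigh))
            flags.append(disagree > margin)   # out of step with its context
        return flags

    font_db = {'arial': np.array([0.0, 1.0]), 'times': np.array([1.0, 0.0])}
    feats = [np.array([0.1, 0.9])] * 4 + [np.array([0.9, 0.1])]  # last char is odd
    print(flag_forged_chars(feats, font_db))  # only the final character is flagged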
Kenyan coral reef lagoon fish: effects of fishing, substrate complexity, and sea urchins | Population density, number of species, diversity, and species-area relationships of fish species in eight common coral reef-associated families were studied in three marine parks receiving total protection from fishing, four sites with unregulated fishing, and one reef which recently received protection from fishing (referred to as a transition reef). Data on coral cover, reef topographic complexity, and sea urchin abundance were collected and correlated with fish abundance and species richness. The most striking result of this survey is a consistent and large reduction in the population density and species richness of 5 families (surgeonfish, triggerfish, butterflyfish, angelfish, and parrotfish). Poor recovery of parrotfish in the transition reef, relative to other fish families, is interpreted as evidence for competitive exclusion of parrotfish by sea urchins. Reef substrate complexity is significantly associated with fish abundance and diversity, but data suggest different responses for protected versus fished reefs, protected reefs having higher species richness and numbers of individuals than unprotected reefs for the same reef complexity. Sea urchin abundance is negatively associated with numbers of fish and fish species but the interrelationship between sea urchins, substrate complexity, coral cover, and management make it difficult to attribute a set percent of variance to each factor-although fishing versus no fishing appears to be the strongest variable in predicting numbers of individuals and species of fish, and their community similarity. Localized species extirpation is evident for many species on fished reefs (for the sampled area of 1.0 ha). Fifty-two of 110 species found on protected reefs were not found on unprotected reefs. |
Publicly Verifiable Inner Product Evaluation over Outsourced Data Streams under Multiple Keys | Uploading data streams to a resource-rich cloud server for inner product evaluation, an essential building block in many popular stream applications (e.g., statistical monitoring), is appealing to many companies and individuals. On the other hand, verifying the result of the remote computation plays a crucial role in addressing the issue of trust. Since the outsourced data collection likely comes from multiple data sources, it is desired for the system to be able to pinpoint the originator of errors by allotting each data source a unique secret key, which requires the inner product verification to be performed under any two parties’ different keys. However, the present solutions either depend on a single key assumption or powerful yet practically-inefficient fully homomorphic cryptosystems. In this paper, we focus on the more challenging multi-key scenario where data streams are uploaded by multiple data sources with distinct keys. We first present a novel homomorphic verifiable tag technique to publicly verify the outsourced inner product computation on the dynamic data streams, and then extend it to support the verification of matrix product computation. We prove the security of our scheme in the random oracle model. Moreover, the experimental result also shows the practicability of our design. |
Sentiment Analysis: Mining Opinions, Sentiments, and Emotions | With the increasing development of Web 2.0, such as social media and online businesses, the need for perception of opinions, attitudes, and emotions grows rapidly. Sentiment analysis, the topic studying such subjective feelings expressed in text, has attracted significant attention from both the research community and industry. Although we have known sentiment analysis as a task of mining opinions expressed in text and analyzing the entailed sentiments and emotions, so far the task is still vaguely defined in the research literature because it involves many overlapping concepts and sub-tasks. Because this is an important area of scientific research, the field needs to clear this vagueness and define various directions and aspects in detail, especially for students, scholars, and developers new to the field. In fact, the field includes numerous natural language processing tasks with different aims (such as sentiment classification, opinion information extraction, opinion summarization, sentiment retrieval, etc.) and these have multiple solution paths. Bing Liu has done a great job in this book in providing a thorough exploration and an anatomy of the sentiment analysis problem and conveyed a wealth of knowledge about different aspects of the field. |
Development of a robotic device for facilitating learning by children who have severe disabilities | This paper presents technical aspects of a robot manipulator developed to facilitate learning by young children who are generally unable to grasp objects or speak. The severity of these physical disabilities also limits assessment of their cognitive and language skills and abilities. The CRS robot manipulator was adapted for use by children who have disabilities. Our emphasis is on the technical control aspects of the development of an interface and communication environment between the child and the robot arm. The system is designed so that each child has user control and control procedures that are individually adapted. Control interfaces include large push buttons, keyboards, laser pointer, and head-controlled switches. Preliminary results have shown that young children who have severe disabilities can use the robotic arm system to complete functional play-related tasks. Developed software allows the child to accomplish a series of multistep tasks by activating one or more single switches. Through a single switch press the child can replay a series of preprogrammed movements that have a development sequence. Children using this system engaged in three-step sequential activities and were highly responsive to the robotic tasks. This was in marked contrast to other interventions using toys and computer games. |
Generalisation in humans and deep neural networks | We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations. First, using three well-known DNNs (ResNet-152, VGG-19, GoogLeNet) we find the human visual system to be more robust to nearly all of the tested image manipulations, and we observe progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker. Secondly, we show that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on, yet they display extremely poor generalisation abilities when tested on other distortion types. For example, training on salt-and-pepper noise does not imply robustness on uniform white noise and vice versa. Thus, changes in the noise distribution between training and testing constitute a crucial challenge to deep learning vision systems that can be systematically addressed in a lifelong machine learning approach. Our new dataset, consisting of 83K carefully measured human psychophysical trials, provides a useful reference for lifelong robustness against image degradations set by the human visual system. |
Analysis of mushroom-like electromagnetic band gap structure using suspended transmission line technique | In this report, an analysis of a mushroom EBG (mEBG) structure using the suspended transmission line method is discussed. The structure consists of nine metallic patch elements grounded to a metal plane. The parameters studied are patch width, substrate thickness, relative permittivity, gap width between adjacent patches, and via radius. These studies cover the bandstop and bandpass characteristics of the electromagnetic band gap structure. The results show that the bandstop frequency of the mEBG structure is affected by varying each of the parameters, owing to the changes in the equivalent LC network of the mushroom-like EBG structure. Increasing the patch width or the relative permittivity lowers the operating frequency and narrows the bandwidth, due to the increase in the total capacitance. Varying the gap between adjacent patches has no significant effect on the band gap frequency because it only changes the fringe capacitance, which is very small compared with the overall capacitance. Increasing the via radius increases the band gap frequency and the bandwidth, due to the reduction in total capacitance. Lastly, increasing the substrate thickness reduces the operating frequency but increases the bandwidth, because the inductance increases more than the total capacitance. |
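The trends above follow from the lumped parallel-LC model of the mushroom cell, whose resonance is f = 1/(2*pi*sqrt(LC)). A tiny sketch with purely illustrative component values:

    import math

    def mebg_band_gap(L_henry, C_farad):
        """Parallel-LC resonance frequency of the lumped mushroom-cell model."""
        return 1.0 / (2 * math.pi * math.sqrt(L_henry * C_farad))

    L, C = 1e-9, 0.5e-12  # hypothetical 1 nH via inductance, 0.5 pF patch capacitance
    print(f"{mebg_band_gap(L, C) / 1e9:.2f} GHz")
    print(f"{mebg_band_gap(L, 2 * C) / 1e9:.2f} GHz")  # more capacitance -> lower frequency

Doubling the capacitance (e.g., via wider patches or higher permittivity) lowers the resonance by a factor of sqrt(2), matching the qualitative behaviour reported above.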
Tracelet-based code search in executables | We address the problem of code search in executables. Given a function in binary form and a large code base, our goal is to statically find similar functions in the code base. Towards this end, we present a novel technique for computing similarity between functions. Our notion of similarity is based on decomposition of functions into tracelets: continuous, short, partial traces of an execution. To establish tracelet similarity in the face of low-level compiler transformations, we employ a simple rewriting engine. This engine uses constraint solving over alignment constraints and data dependencies to match registers and memory addresses between tracelets, bridging the gap between tracelets that are otherwise similar. We have implemented our approach and applied it to find matches in over a million binary functions. We compare tracelet matching to approaches based on n-grams and graphlets and show that tracelet matching obtains dramatically better precision and recall. |
Development of an Active dv/dt Control Algorithm for Reducing Inverter Conducted EMI with Minimal Impact on Switching Losses | This work investigates the use of an active gate control circuit to reduce the EMI produced by switching power converters. The active gate drive circuit makes it possible to adjust the voltage transition rate of a MOS-gated semiconductor switch on a pulse-by-pulse basis during PWM operation. This paper shows that tailored application of this circuit in a hard- switched inverter can be used to reduce the conducted common- mode EMI generated by the inverter while minimizing the incremental increase of the switching losses produced as a result of this control. An active dv/dt control algorithm that has been developed for a three-phase inverter to achieve this EMI reduction has been implemented for testing. Experimental results show that EMI can be reduced using this active dv/dt control algorithm with lower total switching losses than a scheme that achieves a similar EMI reduction using adjustment of the gate drive resistors alone. |
Rewarding, stimulant, and sedative alcohol responses and relationship to future binge drinking. | CONTEXT
Excessive consumption of alcohol is a major problem in the United States and abroad. Despite many years of study, it is unclear why some individuals drink alcohol excessively while others do not. It has been postulated that either lower or greater acute responses to alcohol, or both, depending on the limb of the breath alcohol concentration curve, contribute to propensity for alcohol misuse.
OBJECTIVE
To prospectively assess the relationship of acute alcohol responses to future binge drinking.
DESIGN
Within-subject, double-blind, placebo-controlled, multidose laboratory alcohol challenge study with intensive follow-up. Each participant completed 3 randomized sessions examining responses to a high (0.8 g/kg) and low (0.4 g/kg) alcohol dose and placebo, followed by quarterly assessments for 2 years examining drinking behaviors and alcohol diagnoses.
SETTING
Participants recruited from the community.
PARTICIPANTS
High-risk heavy social drinkers aged 21 to 35 years who habitually engage in weekly binge drinking (n = 104) and light drinker controls (n = 86).
INTERVENTION
We conducted 570 laboratory sessions with a subsequent 99.1% follow-up (1506 of 1520).
MAIN OUTCOME MEASURES
Biphasic Alcohol Effects Scale, Drug Effects Questionnaire, cortisol response, Timeline Follow-Back, Drinker Inventory of Consequences-Recent, and DSM-IV alcohol abuse and dependence.
RESULTS
Alcohol produced greater stimulant and rewarding (liking and wanting) responses and lower sedative and cortisol responses in heavy vs light drinkers. Among the heavy drinkers, greater positive effects and lower sedative effects after alcohol consumption predicted increased binge drinking frequency during follow-up. In turn, greater frequency of binge drinking during follow-up was associated with greater likelihood of meeting diagnostic criteria for alcohol abuse and dependence.
CONCLUSIONS
The widely held low level response theory and differentiator model should be revised: in high-risk drinkers, stimulant and rewarding alcohol responses even at peak breath alcohol concentrations are important predictors of future alcohol problems.
TRIAL REGISTRATION
clinicaltrials.gov Identifier: NCT00961792. |
A Criticism to Society (As Seen by Twitter Analytics) | Analytic tools are becoming widely employed, given their ability to rank, e.g., the visibility of social media users. That visibility, in turn, can have a monetary value, since popular social media users usually either anticipate or establish trends that can impact the real world (at least from a consumer point of view). The above rationale has fostered the flourishing of private companies providing statistical results for social media analysis. These results have been accepted, and widely circulated, by the media without any apparent scrutiny, while Academia has paid only moderate attention to this phenomenon. In this paper, we provide evidence that the analytic results provided by field-flagship companies are questionable, at the very least. In particular, we focus on Twitter and its "fake followers". We survey popular Twitter analytics that count the fake followers of some target account. We perform a series of experiments aimed at verifying the trustworthiness of their results. We compare the results of such tools with a machine-learning classifier built on a scientific methodology and a sound sampling scheme. The findings of this work call for a serious re-thinking of the methodology currently used by companies providing analytic results, whose current deliverables appear to lack any reliability. |
Multicenter study of nucleic acid amplification tests for detection of Chlamydia trachomatis and Neisseria gonorrhoeae in children being evaluated for sexual abuse. | BACKGROUND
Diagnosis of sexually transmitted infections in children suspected of sexual abuse is challenging due to the medico-legal implications of test results. Currently, the forensic standard for diagnosis of Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (NG) infections is culture. In adults, nucleic acid amplification tests (NAATs) are superior to culture for CT, but these tests have been insufficiently evaluated in pediatric populations for forensic purposes.
METHODS
We evaluated the use of NAATs, using urine and genital swabs versus culture for diagnosis of CT and NG in children evaluated for sexual abuse in 4 US cities. Urine and a genital swab were collected for CT and NG NAATs along with routine cultures. NAAT positives were confirmed by PCR, using an alternate target.
RESULTS
Prevalence of infection among the 485 female children was 2.7% for CT and 3.3% for NG by NAAT. The sensitivity of urine NAATs for CT and NG relative to vaginal culture was 100%. Eight participants with CT-positive and 4 with NG-positive NAATs had negative culture results (P = 0.018 for CT urine NAATs vs. culture). There were 24 of 485 (4.9%) female participants with a positive NAAT for CT or NG or both versus 16 of 485 (3.3%) with a positive culture for either, resulting in a 33% increase in children with a positive diagnosis.
CONCLUSIONS
These results suggest that NAATs on urine, with confirmation, are adequate for use as a new forensic standard for diagnosis of CT and NG in children suspected of sexual abuse. Urine NAATs offer a clear advantage over culture in sensitivity and are less invasive than swabs, reducing patient trauma and discomfort. |
SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation | Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017). |
Revision arthroscopic rotator cuff repair: repair integrity and clinical outcome. | BACKGROUND
Literature regarding the outcomes of revision rotator cuff repair is limited. The purposes of the present study were to report the tendon repair integrity and clinical outcomes for a cohort of patients following revision arthroscopic rotator cuff repair and to examine factors related to tendon healing and the influence of healing on clinical outcomes.
METHODS
Twenty-one of twenty-nine consecutive revision arthroscopic rotator cuff repairs with a minimum of two years of postoperative follow-up were retrospectively reviewed. Outcomes were evaluated on the basis of a visual analog pain scale, the range of motion of the shoulder, the Simple Shoulder Test, the American Shoulder and Elbow Surgeons score, and the Constant score. Ultrasonography was used to examine repair integrity at a minimum of one year following surgery. Ten shoulders underwent arthroscopic repair of a recurrent single-tendon posterior rotator cuff tear, whereas eleven shoulders had repair of both the supraspinatus and infraspinatus.
RESULTS
The mean age of the twenty-one subjects was 55.6 years; thirteen subjects were male and eight were female. Complete preoperative and postoperative clinical data were available for nineteen subjects after an average duration of follow-up of thirty-three months. Significant improvements were seen in terms of postoperative pain (p < 0.05), the Simple Shoulder Test score (p < 0.05), the American Shoulder and Elbow Surgeons function (p < 0.05) and total scores (p < 0.05), active forward elevation (p < 0.05), and active external rotation (p < 0.05). Postoperative ultrasound data were available for all twenty-one shoulders after a mean duration of follow-up of twenty-five months. Ten (48%) of the twenty-one shoulders had an intact repair. Seven (70%) of the ten single-tendon repairs were intact, compared with three (27%) of the eleven supraspinatus/infraspinatus repairs (p = 0.05). Patient age (p < 0.05) and the number of torn tendons (p = 0.05) had significant effects on postoperative tendon repair integrity. Shoulders with an intact repair had better postoperative Constant scores (p < 0.05) and scapular plane elevation strength (p < 0.05) in comparison with those with a recurrent tear.
CONCLUSIONS
Revision arthroscopic rotator cuff repair results in reliable pain relief and improvement in shoulder function in selected cases. Approximately half of the revision repairs can be expected to be intact at a minimum of one year following surgery. Patient age and the number of torn tendons are related to postoperative tendon integrity. The postoperative integrity of the rotator cuff can have a significant influence on shoulder abduction strength and the Constant score. |
The Ability of the 10-Item Eating Assessment Tool (EAT-10) to Predict Aspiration Risk in Persons With Dysphagia | Background: Dysphagia is common and costly. The ability of patient symptoms to predict objective swallowing dysfunction is uncertain. Purpose: This study aimed to evaluate the ability of the Eating Assessment Tool (EAT-10) to screen for aspiration risk in patients with dysphagia. Methods: Data from individuals with dysphagia undergoing a videofluoroscopic swallow study between January 2012 and July 2013 were abstracted from a clinical database. Data included the EAT-10, Penetration Aspiration Scale (PAS), total pharyngeal transit (TPT) time, and underlying diagnoses. Bivariate linear correlation analysis, sensitivity, specificity, and predictive values were calculated. Results: The mean age of the entire cohort (N = 360) was 64.40 (± 14.75) years. Forty-six percent were female. The mean EAT-10 was 16.08 (± 10.25) for nonaspirators and 23.16 (± 10.88) for aspirators (P < .0001). There was a linear correlation between the total EAT-10 score and the PAS (r = 0.273, P < .001). Sensitivity and specificity of an EAT-10 > 15 in predicting aspiration were 71% and 53%, respectively. Conclusion: Subjective dysphagia symptoms as documented with the EAT-10 can predict aspiration risk. A linear correlation exists between the EAT-10 and aspiration events (PAS) and aspiration risk (TPT time). Persons with an EAT-10 > 15 are 2.2 times more likely to aspirate (95% confidence interval, 1.3907-3.6245). The sensitivity of an EAT-10 > 15 is 71%. |
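The reported sensitivity and specificity follow from a simple "score > cutoff" confusion-matrix computation, sketched below on a fabricated toy cohort. The paper's actual figures (71% and 53% at EAT-10 > 15) come from its 360-patient dataset, not from these numbers.

    def screen_stats(scores, aspirated, cutoff=15):
        """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) for a cutoff screen."""
        tp = fp = tn = fn = 0
        for s, a in zip(scores, aspirated):
            positive = s > cutoff
            if positive and a: tp += 1
            elif positive: fp += 1
            elif a: fn += 1
            else: tn += 1
        return tp / (tp + fn), tn / (tn + fp)

    scores = [22, 30, 9, 18, 5, 27, 12, 12]                       # fabricated EAT-10 scores
    aspirated = [True, True, False, True, False, False, False, True]
    sens, spec = screen_stats(scores, aspirated)
    print(f"sensitivity={sens:.2f} specificity={spec:.2f}")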
Finding a benchmark for monitoring hospital cleanliness. | This study evaluated three methods for monitoring hospital cleanliness. The aim was to find a benchmark that could indicate risk to patients from a contaminated environment. We performed visual monitoring, ATP bioluminescence and microbiological screening of five clinical surfaces before and after detergent-based cleaning on two wards over a four-week period. Five additional sites that were not featured in the routine domestic specification were also sampled. Measurements from all three methods were integrated and compared in order to choose appropriate levels for routine monitoring. We found that visual assessment did not reflect ATP values nor environmental contamination with microbial flora including Staphylococcus aureus and meticillin-resistant S. aureus (MRSA). There was a relationship between microbial growth categories and the proportion of ATP values exceeding a chosen benchmark but neither reliably predicted the presence of S. aureus or MRSA. ATP values were occasionally diverse. Detergent-based cleaning reduced levels of organic soil by 32% (95% confidence interval: 16-44%; P<0.001) but did not necessarily eliminate indicator staphylococci, some of which survived the cleaning process. An ATP benchmark value of 100 relative light units offered the closest correlation with microbial growth levels <2.5 cfu/cm(2) (receiver operating characteristic ROC curve sensitivity: 57%; specificity: 57%). In conclusion, microbiological and ATP monitoring confirmed environmental contamination, persistence of hospital pathogens and measured the effect on the environment from current cleaning practices. This study has provided provisional benchmarks to assist with future assessment of hospital cleanliness. Further work is required to refine practical sampling strategy and choice of benchmarks. |
Toward Fast and Accurate Neural Discourse Segmentation | Discourse segmentation, which segments texts into Elementary Discourse Units, is a fundamental step in discourse analysis. Previous discourse segmenters rely on complicated hand-crafted features and are not practical in actual use. In this paper, we propose an end-to-end neural segmenter based on the BiLSTM-CRF framework. To improve its accuracy, we address the problem of data insufficiency by transferring a word representation model that is trained on a large corpus. We also propose a restricted self-attention mechanism in order to capture useful information within a neighborhood. Experiments on the RST-DT corpus show that our model is significantly faster than previous methods, while achieving new state-of-the-art performance. |
Spinal manipulation and home exercise with advice for subacute and chronic back-related leg pain: a trial with adaptive allocation. | Context. Few studies evaluate the comparative effectiveness of conservative treatments for back-related leg pain. Contribution. This randomized trial, involving 192 adults with subacute or chronic back-related leg pain, compared 12 weeks of home exercise and advice with spinal manipulative therapy plus home exercise and advice. Spinal manipulative therapy with home exercise and advice improved self-reported pain and function outcomes more than exercise and advice alone at 12 weeks, but differences between groups were not present at 52 weeks except for some secondary outcomes. Caution. The intervention was not blinded. Implication. Spinal manipulative therapy combined with home exercise and advice can improve short-term outcomes in patients with back-related leg pain. The Editors. Back-related leg pain (BRLP) is an important symptom commonly associated with pervasive low back pain (LBP) conditions and, despite its socioeconomic effect, has been generally understudied. With poorer prognosis and quality of life, persons with BRLP have greater pain severity and incur more work loss, medication use, surgery, and health-related costs than those with uncomplicated LBP (1-6). Most patients with BRLP are treated with prescription medications and injections, although little to no evidence supports their use (7, 8). Surgical approaches are also commonly applied, although there is only some evidence for short-term effectiveness compared with less invasive treatments (9). Concerns are mounting about the overuse, costs, and safety of these conventional medical treatments (10-18), warranting identification of more conservative treatment options. Spinal manipulative therapy (SMT), exercise, and education promoting self-management are increasingly recommended as low-risk strategies for BRLP (19). Although limited, evidence shows that these conservative approaches can be effective (20-26). A recent systematic review by our group showed that SMT is superior to sham SMT for acute BRLP in the short and long term; however, the evidence for subacute and chronic BRLP is inconclusive, and high-quality research is needed to inform clinical and health policy decisions (20). The underlying mechanisms of SMT seem to be multifactorial, including improvement in spinal stiffness, muscle recruitment, and synaptic efficacy of central neurons (27, 28). The purpose of this study was to test the hypothesis that the addition of SMT to home exercise and advice (HEA) would be more effective than HEA alone for patients with subacute and chronic BRLP. Methods. Design Overview. This pragmatic trial used a parallel design with allocation by minimization and has been described previously (29). Patients were recruited between 2007 and 2010, and follow-up was completed in 2011. Institutional review boards approved the study protocol, and all patients provided written consent. The primary outcomes and most secondary outcomes were self-reported; objective measures were obtained by blinded examiners. There were no important changes to methods after trial commencement. Settings and Patients. The trial was conducted at institution-affiliated research clinics at Northwestern Health Sciences University (Minneapolis, Minnesota) and Palmer College of Chiropractic (Davenport, Iowa). Patients were recruited through newspaper advertisements, direct mail, and community posters.
Interested patients were initially screened by telephone interviews, followed by 2 in-person baseline evaluation visits. Inclusion criteria were age 21 years or older; BRLP based on Quebec Task Force on Spinal Disorders classifications 2, 3, 4, or 6 (radiating pain into the proximal or distal part of the lower extremity, with or without neurologic signs) (30); BRLP severity of 3 or greater (scale of 0 to 10); a current episode of 4 weeks or more; and a stable prescription medication plan in the previous month. Exclusion criteria were Quebec Task Force on Spinal Disorders classifications of 1, 5, 7, 8, 9, 10, and 11 (pain without radiation into the lower extremities, progressive neurologic deficits, the cauda equina syndrome, spinal fracture, spinal stenosis, surgical lumbar spine fusion, several incidents of lumbar spine surgery, chronic pain syndrome, visceral diseases, compression fractures or metastases, blood clotting disorders, severe osteoporosis, and inflammatory or destructive tissue changes of the spine). Patients could not be receiving ongoing treatment of leg pain or LBP; be pregnant or nursing; have current or pending litigation for worker's compensation, disability, or personal injury; be unable to read or comprehend English; or have evidence of substance abuse. Allocation A Web-based program assigned patients to treatment after the second baseline visit using a minimization algorithm based on the Taves method (31), balancing on 7 baseline characteristics previously shown to influence outcomes (32-34). Baseline characteristics included age, BRLP duration, neurologic signs, distress, positive straight leg raise, time spent driving a vehicle, and pain aggravation with coughing or sneezing. Patients were assigned in a 1:1 ratio, stratified by site. The allocation algorithm was prepared by the study statistician before enrollment, and its administration was concealed from study personnel (a schematic code sketch of this minimization approach follows this entry). Interventions The intervention protocols were developed and tested in previous pilot studies (32, 33). Both interventions were intended to be pragmatic in nature (for example, modified to patient presentation and needs) and were informed by commonly recommended clinical practices, patient preferences, and promising research evidence (19, 35-38). Eleven chiropractors with a minimum of 5 years of practice experience delivered SMT in the SMT plus HEA group. Thirteen providers (7 chiropractors, 5 exercise therapists, and 1 personal trainer) delivered the HEA intervention. When possible, patients worked with the same providers during the 12-week course of care; however, to accommodate patient and provider schedules during the intervention period, providers were trained to comanage patients. Treatment fidelity was facilitated through standardized training, manuals of operation, and clinical documentation forms that were monitored weekly by research staff. SMT Plus HEA Group As many as 20 SMT visits were allowed, each lasting 10 to 20 minutes, including a brief history and examination. Patients assigned to SMT plus HEA also attended 4 HEA visits, as described in the HEA Group section. For SMT visits, the primary focus of treatment was on manual techniques (including high-velocity, low-amplitude thrust procedures or low-velocity, variable-amplitude mobilization maneuvers to the lumbar vertebral or sacroiliac joints).
The specific spinal level treated and the number and frequency of SMT visits were determined by the clinician on the basis of patient-reported symptoms, palpation, and pain provocation tests (39). Adjunct therapies to facilitate SMT were used as needed and included light soft-tissue techniques (that is, active and passive muscle stretching and ischemic compression of tender points) and hot or cold packs. To facilitate adherence to HEA, chiropractors asked about patients' adherence, reaffirmed main HEA messages, and answered questions as needed. HEA Group Home exercise and advice were delivered in four 1-hour, one-on-one visits during the 12-week intervention. The main program goals were to provide patients with the tools to manage existing pain, prevent pain recurrences, and facilitate engagement in daily activities. Instruction and practice were provided for positioning and stabilization exercises to enhance mobility and increase trunk endurance. These were individualized to patients' lifestyles, clinical characteristics (including positional sensitivities), and fitness levels. Positioning exercises included extension and flexion motion cycles (patients were encouraged to perform 25 repetitions 3 times per day in the lying, standing, or seated position) (33, 40). Stabilization exercises included pelvic tilt, quadruped, bridging, abdominal curl-ups, and side bridging with positional variations appropriate to patients' tolerance and abilities (41). Patients were instructed to do 8 to 12 repetitions of each stabilization exercise every other day. Patients were also instructed in methods for developing spine posture awareness related to their activities of daily living, such as lifting, pushing and pulling, sitting, and getting out of bed (42). Information about simple pain-management techniques, including cold, heat, and movement, was also provided. Printed materials were distributed to take home and review. They included instructions for exercises with photos and a modification of the Back in Action book (43), emphasizing movement and restoration of normal function and fitness (35, 44). To facilitate adherence to HEA, providers called or e-mailed patients 3 times (at 1, 4, and 9 weeks) to reaffirm main messages and answer exercise-related questions. Outcomes and Measurements Patients' demographic and clinical characteristics were collected at their first baseline visit through self-report questionnaires, histories, and physical examinations. Self-reported outcomes were collected at the baseline visit and at 3, 12, 26, and 52 weeks via questionnaires independent of study personnel influence. Patients were queried in each questionnaire about attempts to influence their responses. The primary outcome measure was patient-rated typical level of leg pain during the past week using an 11-point numerical rating scale, a reliable, valid, and important patient-centered outcome (36, 45-47). The primary end points were 12 weeks, which was the end of the intervention phase, and the 52-week follow-up. A complete description of all secondary outcome measures is provided elsewhere (29). The measures reported in this article include LBP, disability measured with the modified Roland-Morris Disability Questionnaire (48-50), physical and |
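The allocation-by-minimization scheme described in the trial above (Taves method, balancing on seven baseline characteristics) can be illustrated with a short sketch. This is a generic minimization routine under assumed data structures and factor names; it is not the trial's actual Web-based program, and the random tie-breaking rule is an assumption.

```python
import random
from collections import defaultdict

ARMS = ["SMT+HEA", "HEA"]
# hypothetical encodings of the seven balancing characteristics
FACTORS = ["age_group", "pain_duration", "neuro_signs", "distress",
           "slr_positive", "driving_time", "cough_sneeze_aggravation"]

# counts[arm][factor][level] = patients already assigned to `arm` with that level
counts = {arm: {f: defaultdict(int) for f in FACTORS} for arm in ARMS}

def assign(patient: dict) -> str:
    """Assign the new patient to the arm that minimizes marginal imbalance."""
    imbalance = {arm: sum(counts[arm][f][patient[f]] for f in FACTORS)
                 for arm in ARMS}
    best = min(imbalance.values())
    arm = random.choice([a for a in ARMS if imbalance[a] == best])  # random tie-break
    for f in FACTORS:
        counts[arm][f][patient[f]] += 1
    return arm

print(assign({"age_group": "50+", "pain_duration": "chronic", "neuro_signs": 1,
              "distress": 0, "slr_positive": 1, "driving_time": "high",
              "cough_sneeze_aggravation": 0}))
```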
Conceptions of intelligence and learning and perceptions of self-efficacy among students at the end of primary school | Several studies have focused on the role of students' conceptions of intellectual ability, the learning process, and self-efficacy. However, these three parameters have not been integrated and analysed within the same model. Against this background, the objective of our research is to identify the links that develop between these three constructs. To this end, we have put forward two hypotheses: 1. The conception that intelligence is constructed directly and positively affects the conception that learning is a constructive process. This has a positive impact on perceived self-efficacy. 2. The conception that intelligence is a fixed trait directly and positively affects the conception that learning is a reproductive process and leads to lower perceptions of self-efficacy. To test these hypotheses, we conducted research using a questionnaire distributed among 1112 students in their last year of primary school. The questionnaires were subject to statistical analysis using structural equation model... |
A Low-voltage CMOS Op Amp with a Rail-to-rail Constant-gm Input Stage and a Class AB Rail-to-rail Output Stage | In this paper a low-voltage two-stage Op Amp is presented. The Op Amp features rail-to-rail operation and has an input stage with a constant transconductance (gm) over the entire common-mode input range. The input stage consists of an n- and a PMOS differential pair connected in parallel. The constant gm is accomplished by regulating the tail currents with the aid of an MOS translinear (MTL) circuit. The resulting gm is constant within 5%. |
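The constant-gm principle in this abstract can be checked numerically: for MOS pairs in strong inversion, gm is proportional to the square root of the tail current, so keeping sqrt(I_n) + sqrt(I_p) constant keeps the summed transconductance of the parallel pairs constant. The sketch below demonstrates that relationship with made-up device constants; it is an idealized model, not the MTL circuit itself.

```python
import math

K = 1e-3                             # assumed device constant (A/V^2), equal pairs
S = 2 * math.sqrt(50e-6)             # MTL constraint: sqrt(I_n) + sqrt(I_p) = S

def total_gm(i_n: float) -> float:
    """gm of the parallel n/p input pairs; each contributes sqrt(2*K*I_tail)."""
    i_p = (S - math.sqrt(i_n)) ** 2  # remaining tail current for the PMOS pair
    return math.sqrt(2 * K * i_n) + math.sqrt(2 * K * i_p)

# sweep the n-pair share of the tail current, as happens when the
# common-mode input voltage moves from rail to rail
for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    i_n = (frac * S) ** 2
    print(f"n-pair share {frac:.2f}: gm = {total_gm(i_n):.3e} S")
```

Every sweep point prints the same gm, since sqrt(I_n) + sqrt(I_p) cancels to the constant S.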
Dynamics of Consumer Demand for New Durable Goods | Most new consumer durable goods experience rapid declines in prices and improvements in quality, suggesting the importance of modeling dynamics. This paper estimates a dynamic model of consumer preferences for new durable goods with persistent heterogeneous consumer tastes, rational expectations and repeat purchases over time. We estimate the model on the digital camcorder industry using panel data on prices, sales and characteristics. We find that standard COLIs overstate welfare gain in later periods due to a changing composition of buyers. The one-year industry elasticity in response to a transitory industry-wide price shock is about 25% less than the one-month elasticity. |
Simulation of various natural phenomena based on computational fluid dynamics | Visual simulation of natural phenomena has become one of the most important research topics in computer graphics. Such phenomena include water, fire, smoke, clouds, and so on. Recent methods for the simulation of these phenomena utilize techniques developed in computational fluid dynamics. In this paper, the basic equations (Navier-Stokes equations) for simulating these phenomena are briefly described. These basic equations are used to simulate various natural phenomena. This paper then explains our applications of the equations for simulations of smoke, clouds, and aerodynamic sound. |
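For reference, the incompressible Navier-Stokes equations that the survey above builds on can be written in their standard form (conventional notation, not copied from the paper):

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0
```

Here u is the velocity field, p the pressure, rho the density, nu the kinematic viscosity, and f the external force term (e.g., buoyancy for smoke and clouds); the second equation enforces incompressibility.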
Deep Architectures for Automated Seizure Detection in Scalp EEGs | Automated seizure detection using clinical electroencephalograms is a challenging machine learning problem because the multichannel signal often has an extremely low signal-to-noise ratio. Events of interest such as seizures are easily confused with signal artifacts (e.g., eye movements) or benign variants (e.g., slowing). Commercially available systems suffer from unacceptably high false alarm rates. Deep learning algorithms that employ high dimensional models have not previously been effective due to the lack of big data resources. In this paper, we use the TUH EEG Seizure Corpus to evaluate a variety of hybrid deep structures including Convolutional Neural Networks and Long Short-Term Memory Networks. We introduce a novel recurrent convolutional architecture that delivers 30% sensitivity at 7 false alarms per 24 hours. We have also evaluated our system on a held-out evaluation set based on the Duke University Seizure Corpus and demonstrate that performance trends are similar to the TUH EEG Seizure Corpus. This is a significant finding because the Duke corpus was collected with different instrumentation and at different hospitals. Our work shows that deep learning architectures that integrate spatial and temporal contexts are critical to achieving state-of-the-art performance and will enable a new generation of clinically-acceptable technology. |
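To make the "recurrent convolutional" idea concrete, here is a minimal PyTorch sketch that applies 1-D convolutions over time on multichannel EEG and feeds the resulting feature sequence to a bidirectional LSTM. Channel counts, kernel sizes, and layer widths are illustrative assumptions, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class ConvLSTMDetector(nn.Module):
    """1-D CNN over time on multichannel EEG, followed by a BiLSTM (sketch)."""

    def __init__(self, n_channels: int = 22, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples); conv extracts local spatio-temporal features
        feats = self.conv(x).transpose(1, 2)      # -> (batch, time', 64)
        out, _ = self.lstm(feats)                 # temporal context across the window
        return self.head(out[:, -1])              # classify the whole window

logits = ConvLSTMDetector()(torch.randn(4, 22, 2500))  # e.g. 10 s at 250 Hz
```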
Does mindfulness meditation improve anxiety and mood symptoms? A review of the controlled research. | OBJECTIVE
To review the impact of mindfulness-based stress reduction (MBSR) on symptoms of anxiety and depression in a range of clinical populations.
METHOD
Our review included any study that was published in a peer-reviewed journal, used a control group, and reported outcomes related to changes in depression and anxiety. We extracted the following key variables from each of the 15 studies identified: anxiety or depression outcomes after the MBSR program, measurement of compliance with MBSR instructions, type of control group included, type of clinical population studied, and length of follow-up. We also summarized modifications to the MBSR program.
RESULTS
Measures of depression and anxiety were included as outcome variables for a broad range of medical and emotional disorders. Evidence for a beneficial effect of MBSR on depression and anxiety was equivocal. When active control groups were used, MBSR did not show an effect on depression and anxiety. Adherence to the MBSR program was infrequently assessed. Where it was assessed, the relation between practising mindfulness and changes in depression and anxiety was equivocal.
CONCLUSIONS
MBSR does not have a reliable effect on depression and anxiety. |
Libraries, truth and A J Ayer | A LIBRARIAN trying to make up his mind about the role of books, information, and culture in the modern world could do a lot worse than read A J Ayer's recent autobiography, Part of my life. Ayer's wide culture is unforcedly apparent. He grew up in a milieu in which learning the classics and composing Latin verses at Eton were the most natural things in the world: to that extent, he belongs to the world of C S Lewis, with whom he was later to clash at Oxford. His favourite authors range from Dickens, through Yeats, to e e cummings (a lifelong friend); and through people like Cyril Connolly he built up the right connections in Bloomsbury. His logical positivism placed him in touch with the current scientific ethos, and his eager appreciation of facts, of information, stood him in good stead during the war, when he was in military intelligence. In a sense, there can be little worth knowing that Professor Ayer does not know (and, of course, few people worth knowing). |
Links between Natural Variation in the Microbiome and Host Fitness in Wild Mammals. | Recent studies in model organisms have shown that compositional variation in the microbiome can affect a variety of host phenotypes including those related to digestion, development, immunity, and behavior. Natural variation in the microbiome within and between natural populations and species may also affect host phenotypes and thus fitness in the wild. Here, I review recent evidence that compositional variation in the microbiome may affect host phenotypes and fitness in wild mammals. Studies over the last decade indicate that natural variation in the mammalian microbiome may be important in the assistance of energy uptake from different diet types, detoxification of plant secondary compounds, protection from pathogens, chemical communication, and behavior. I discuss the importance of combining both field observations and manipulative experiments in a single system to fully characterize the functions and fitness effects of the microbiome. Finally, I discuss the evolutionary consequences of mammal-microbiome associations by proposing a framework to test how natural selection on hosts is mediated by the microbiome. |
Nasal Mucociliary Clearance in Subjects With COPD After Smoking Cessation. | BACKGROUND
Exposure to cigarette smoke causes significant impairment in mucociliary clearance (MCC), which predisposes patients to secretion retention and recurrent airway infections that play a role in exacerbations of COPD. To determine whether smoking cessation may influence MCC and frequency of exacerbations, the following groups were evaluated: ex-smokers with COPD, smokers with COPD, current smokers with normal lung function, and nonsmokers with normal lung function.
METHODS
Ninety-three subjects were divided into 4 groups: ex-smokers with COPD (n = 23, 62.4 ± 8.0 y, 13 males), smokers with COPD (n = 17, 58.2 ± 8.0 y, 6 males), current smokers (n = 27, 61.5 ± 6.4 y, 17 males), and nonsmokers (n = 26, 60.8 ± 11.3 y, 7 males). MCC was evaluated using the saccharin transit time (STT) test, and the frequency of exacerbations in the last year was assessed by questionnaire. The Kruskal-Wallis test followed by Dunn's test was used to compare STT among groups, and the Goodman test was used to compare the frequency of exacerbations (a short code sketch of this comparison follows this entry).
RESULTS
STT of smokers with COPD (16.5 [11-28] min; median [interquartile range 25-75%]) and current smokers (15.9 [10-27] min) was longer compared with ex-smokers with COPD (9.7 [6-12] min) and nonsmokers (8 [6-16] min) (P < .001). There was no difference in STT values between smokers with COPD and current smokers, and these values in ex-smokers with COPD were similar to the control group (P > .05). The frequency of exacerbations was lower in ex-smokers with COPD compared with smokers with COPD.
CONCLUSIONS
One year after smoking cessation, subjects with COPD had improved mucociliary clearance. |
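The group comparison described in the METHODS above (Kruskal-Wallis followed by Dunn's post hoc test) can be sketched in Python as follows. The data are fabricated placeholders shaped only by the group sizes in the abstract, and scikit-posthocs is one common home for Dunn's test, not necessarily the software the authors used.

```python
import numpy as np
from scipy.stats import kruskal
import scikit_posthocs as sp  # assumption: scikit-posthocs provides Dunn's test

rng = np.random.default_rng(0)
stt = {  # fabricated saccharin transit times (min), group sizes from the abstract
    "ex-smokers COPD": rng.gamma(9.0, 1.2, 23),
    "smokers COPD":    rng.gamma(9.0, 2.0, 17),
    "current smokers": rng.gamma(9.0, 1.9, 27),
    "nonsmokers":      rng.gamma(9.0, 1.1, 26),
}

h, p = kruskal(*stt.values())
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

if p < 0.05:  # only then probe pairwise differences
    dunn = sp.posthoc_dunn(list(stt.values()), p_adjust="bonferroni")
    print(dunn)  # rows/cols indexed 1..4 in group order
```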
A 0.3–25-GHz Ultra-Wideband Mixer Using Commercial 0.18-μm CMOS Technology | An ultra-wideband mixer using standard complementary metal oxide semiconductor (CMOS) technology is first proposed in this paper. This broadband mixer achieves a measured conversion gain of 11 ± 1.5 dB over a bandwidth of 0.3 to 25 GHz. The mixer was fabricated in a commercial 0.18-μm CMOS technology and demonstrated the highest frequency and bandwidth of operation. It also presented better gain-bandwidth-product performance compared with that of GaAs-based HBT technologies. The chip area is 0.8 × 1 mm². |
The potential predictive role of nuclear NHERF1 expression in advanced gastric cancer patients treated with epirubicin/oxaliplatin/capecitabine first line chemotherapy. | Cellular resistance in advanced gastric cancer (GC) might be related to the function of multidrug resistance (MDR) proteins. The adaptor protein NHERF1 (Na(+)/H(+) exchanger regulatory factor) is an important player in cancer progression for a number of solid malignancies, even if its role in the development of drug resistance remains uncertain. Herein, we aimed to analyze the potential association between NHERF1 expression and the P-gp, sorcin, and HIF-1α MDR-related proteins in advanced GC patients treated with the epirubicin/oxaliplatin/capecitabine (EOX) chemotherapy regimen, and its relation to response. A total of 28 untreated patients were included in the study. Expression and subcellular localization of all proteins were assessed by immunohistochemistry on formalin-fixed, paraffin-embedded tumor samples. We did not find a significant association between NHERF1 expression and the MDR-related proteins. A trend was observed between positive cytoplasmic NHERF1 (cNHERF1) expression and negative nuclear HIF-1α (nHIF-1α) expression (68.8% versus 31.3%, respectively; P = 0.054). However, cytoplasmic P-gp (cP-gp) expression was positively correlated with both cHIF-1α and sorcin expression (P = 0.011; P = 0.002, respectively). Interestingly, nuclear NHERF1 (nNHERF1) staining was statistically associated with clinical response. In detail, 66.7% of patients with high nNHERF1 expression achieved disease control, while 84.6% of subjects with negative nuclear expression of the protein showed progressive disease (P = 0.009). Multivariate analysis confirmed a significant correlation between nNHERF1 and clinical response (OR 0.06, P = 0.019). These results suggest that nuclear NHERF1 could be related to resistance to the EOX regimen in advanced GC patients, identifying this marker as a possible independent predictive factor. |
Carbon Nanotube-Based Electrochemical Sensors: Principles and Applications in Biomedical Systems | Carbon nanotubes (CNTs) have received considerable attention in the field of electrochemical sensing due to their unique structural, electronic, and chemical properties, for instance, their unique tubular nanostructure, large specific surface area, excellent conductivity, modifiable sidewalls, and good biocompatibility. Here, we give a comprehensive review of some important aspects of the applications of CNT-based electrochemical sensors in biomedical systems, including the electrochemical nature of CNTs, the methods for dispersing CNTs in solution, the approaches to the immobilization of functional CNT sensing films on electrodes, and the extensive biomedical applications of CNT-based electrochemical sensors. In the last section, we focus mainly on the applications of CNT-based electrochemical sensors in the analysis of various biological substances and drugs, the methods for constructing enzyme-based electrochemical biosensors, and the direct electron transfer of redox proteins on CNTs. Because several crucial factors (e.g., the surface properties of carbon nanotubes, the methods for constructing carbon nanotube electrodes, and the manner of electrochemical sensing) largely determine the analytical performance of carbon nanotube electrodes, a systematic understanding of the related knowledge is essential to the mastery and development of carbon nanotube-based electrochemical sensors. |
Biochemical, pharmacological, and phase I clinical evaluation of pseudoisocytidine. | Pseudoisocytidine (psi ICyd) is a C-nucleoside with enhanced stability and resistance to enzymatic deamination when compared to 5-azacytidine and 1-beta-D-arabinofuranosylcytosine. Elimination kinetics in plasma using [14C]psi ICyd showed a beta-phase t1/2 for 14C of 2 hr and a beta-phase t1/2 for unchanged psi ICyd of 1.5 hr. Net recovery of radioactivity in urine over 24 hr varied between 40 and 80% of the administered dose; 50 to 90% was unchanged drug and the rest was pseudouridine. Human leukemic cells in vitro deaminated psi ICyd very slowly, formed appreciable quantities of pseudoisocytidine triphosphate, and incorporated small amounts into RNA and DNA. Clinical trials were done using a daily i.v. injection for 5 consecutive days. Hematological or intestinal toxicities were not seen, nor was depression of the white blood cell count observed in leukemic patients. Hepatic toxicity proved to be dose limiting; this was characterized by an early phase with elevation of prothrombin time and aspartate aminotransferase. A later phase with cirrhosis was observed in two patients. Autopsy showed massive hepatic necrosis in patients dying of acute toxicity and micronodular cirrhosis in one patient dying with the chronic form. |
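The beta-phase half-lives quoted above translate directly into an exponential elimination curve. A small illustrative sketch, using the 1.5-hr half-life reported for the unchanged drug (the calculation is generic first-order kinetics, not taken from the paper):

```python
import math

def fraction_remaining(t_hr: float, t_half_hr: float = 1.5) -> float:
    """First-order (beta-phase) elimination: C(t)/C0 = exp(-ln2 * t / t_half)."""
    return math.exp(-math.log(2.0) * t_hr / t_half_hr)

for t in (1.5, 3.0, 6.0, 24.0):
    print(f"t = {t:4.1f} h: {100 * fraction_remaining(t):6.2f}% of peak plasma level")
```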
Dynamic analysis of Clavel's delta parallel robot | Some iterative matrix relations for the geometric, kinematic, and dynamic analysis of a Delta parallel robot are established in this paper. The prototype of this manipulator is a three-degree-of-freedom spatial mechanism, which consists of a system of parallel chains. Supposing that the position and the translational motion of the platform are known, an inverse dynamic problem is solved using the virtual powers method. Finally, some recursive matrix relations and some graphs for the moments and the powers of the three active couples are determined. |
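The virtual-powers idea behind this entry reduces, in the quasi-static case, to equating the power of the platform wrench with the power of the actuator efforts. The sketch below shows only that generic reduction with an assumed Jacobian; it is not the paper's recursive dynamic relations for the Delta robot.

```python
import numpy as np

# If the platform twist relates to joint rates by v = J @ qdot, equating the
# virtual powers F.v = tau.qdot gives tau = J.T @ F for a quasi-static wrench F.
J = np.array([[0.8, 0.1, 0.2],   # assumed 3x3 translational Jacobian
              [0.0, 0.9, 0.1],
              [0.2, 0.1, 0.7]])
F = np.array([0.0, 0.0, -2.0 * 9.81])  # weight of a hypothetical 2 kg platform (N)

tau = J.T @ F
print("actuator efforts:", tau)
```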
Application of Taguchi Method for Optimizing Turning Process by the effects of Machining Parameters | This paper reports on the optimization of a turning process by the effects of machining parameters, applying Taguchi methods to improve the quality of manufactured goods and to study variation in engineering designs. EN24 steel is used as the workpiece material for carrying out the experimentation to optimize the material removal rate. The bars used are of diameter 44 mm and length 60 mm. There are three machining parameters: spindle speed, feed rate, and depth of cut. Different experiments are done by varying one parameter and keeping the other two fixed, so the maximum value of each parameter is obtained. The operating range is found by experimenting with the top spindle speed and taking the lower levels of the other parameters. A Taguchi orthogonal array is designed with three levels of turning parameters with the help of the software Minitab 15. In the first run, nine experiments are performed and the material removal rate (MRR) is calculated. When the experiments are repeated in a second run, the MRR is calculated again. The Taguchi method stresses the importance of studying the response variation using the signal-to-noise (S/N) ratio, resulting in minimization of quality-characteristic variation due to uncontrollable parameters. The metal removal rate is considered as the quality characteristic with the concept of "the larger the better". The S/N ratio for the larger-the-better case is $S/N = -10 \log_{10}\left(\frac{1}{n}\sum_{i=1}^{n} \frac{1}{y_i^2}\right)$, where $n$ is the number of measurements in a trial/row (in this case, n = 1) and $y_i$ is the measured value in a run/row. The S/N ratio values are calculated with the help of the software Minitab 15. The MRR values are measured from the experiments, and their optimum is identified for maximum material removal rate. Every day scientists are developing new materials, and for each new material we need economical and efficient machining. It is also found that the Taguchi method is a good method for optimization of various machining parameters, as it reduces the number of experiments. From the literature survey, it can be seen that no work has been done on EN24 steel, so in this project the turning of EN24 steel is done in order to optimize the turning process parameters for maximizing the material removal rate. |
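A one-liner makes the larger-the-better S/N ratio above concrete; the sample MRR values are made up for illustration, not taken from the paper's orthogonal array.

```python
import numpy as np

def sn_larger_is_better(y) -> float:
    """Taguchi larger-the-better S/N ratio: -10*log10(mean(1/y_i^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# e.g. MRR values (mm^3/min) from two replicate runs of one array row
print(f"S/N = {sn_larger_is_better([5200.0, 5480.0]):.2f} dB")
```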
Semi-supervised distance metric learning for Collaborative Image Retrieval | Learning a good distance metric plays a vital role in many multimedia retrieval and data mining tasks. For example, a typical content-based image retrieval (CBIR) system often relies on an effective distance metric to measure similarity between any two images. Conventional CBIR systems simply adopting the Euclidean distance metric often fail to return satisfactory results, mainly due to the well-known semantic gap challenge. In this article, we present a novel framework of Semi-Supervised Distance Metric Learning for learning effective distance metrics by exploring the historical relevance feedback log data of a CBIR system and utilizing unlabeled data when log data are limited and noisy. We formally formulate the learning problem into a convex optimization task and then present a new technique, named "Laplacian Regularized Metric Learning" (LRML). Two efficient algorithms are then proposed to solve the LRML task. Further, we apply the proposed technique to two applications. One direct application is for Collaborative Image Retrieval (CIR), which aims to explore the CBIR log data for improving the retrieval performance of CBIR systems. The other application is for Collaborative Image Clustering (CIC), which aims to explore the CBIR log data for enhancing the clustering performance of image pattern clustering tasks. We conduct an extensive evaluation to compare the proposed LRML method with a number of competing methods, including 2 standard metrics, 3 unsupervised metrics, and 4 supervised metrics with side information. Encouraging results validate the effectiveness of the proposed technique. |
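The Laplacian-regularized idea in LRML can be sketched as follows: learn a Mahalanobis matrix M that pulls log-implied similar pairs together, while a graph-Laplacian term built from unlabeled images keeps the metric smooth over the data manifold. This gradient-descent sketch, with an added identity-anchoring regularizer and assumed objective weights, is a simplification for illustration, not the authors' convex formulation or solver.

```python
import numpy as np

def knn_laplacian(X: np.ndarray, k: int = 5) -> np.ndarray:
    """Unnormalized graph Laplacian L = D - W from a symmetric kNN graph."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:
            W[i, j] = W[j, i] = 1.0
    return np.diag(W.sum(1)) - W

def lrml_sketch(X, sim_pairs, lam=0.1, mu=1.0, lr=0.01, steps=200):
    """Descend on: sum of d_M over similar pairs
    + lam * tr(M X^T L X) manifold-smoothness term + mu * ||M - I||_F^2,
    projecting M onto the PSD cone after each step."""
    n, d = X.shape
    smooth = X.T @ knn_laplacian(X) @ X
    pairs = sum(np.outer(X[i] - X[j], X[i] - X[j]) for i, j in sim_pairs)
    M = np.eye(d)
    for _ in range(steps):
        M -= lr * (pairs + lam * smooth + 2.0 * mu * (M - np.eye(d)))
        vals, vecs = np.linalg.eigh(M)             # PSD projection
        M = (vecs * np.clip(vals, 0.0, None)) @ vecs.T
    return M

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))                       # toy "image features"
M = lrml_sketch(X, sim_pairs=[(0, 1), (2, 3)])     # pairs the feedback log deems similar
```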
A survey of intrusion detection systems for cloud computing environment | Cloud computing is a newly emerged technology and a rapidly growing field of IT. It is used extensively to deliver computing, data storage services, and other resources remotely over the internet on a pay-per-usage model. Nowadays, it is the preferred choice of every IT organization because it extends an organization's ability to meet the computing demands of its everyday operations, while providing scalability, mobility, and flexibility at a low cost. However, security and privacy remain a major hurdle to its success and its wide adoption by organizations, and the reason that Chief Information Officers (CIOs) hesitate to move data and applications from the premises of organizations to the cloud. In fact, due to the distributed and open nature of the cloud, resources, applications, and data are vulnerable to intruders. The Intrusion Detection System (IDS) has become the most commonly used component of computer system security and compliance practices, defending network-accessible cloud resources and services from various kinds of threats and attacks. This paper presents an overview of different intrusions in the cloud, various detection techniques used by IDSs, and the types of cloud computing based IDSs. Then, we analyze some pertinent existing cloud-based intrusion detection systems with respect to their type, positioning, detection time, and data source. The analysis also gives the strengths and limitations of each system, in order to evaluate whether they fulfill the security requirements of the cloud computing environment. We highlight the deployment of IDSs that use multiple detection approaches to deal with security challenges in the cloud. |