title | abstract
---|---
Live-streaming changes the (video) game | Video games are inherently an active medium; without interaction, a video game is benign. Yet there is a growing community of video game spectating that exists on the Internet, at events across the world and, in part, as traditional television broadcasts. In this paper we look at the different communities that have grown around video game spectating, the incentives of all stakeholders and the technologies involved. An interesting part of this phenomenon is its relation to the malleability of activity and passivity; video games are traditionally active but spectatorship brings an element of passivity, whereas television is traditionally passive but interactive television brings an element of activity. We explore this phenomenon through selected examples and stimulate a discussion around how such understanding from the video game field could be of interest for interactive television. |
Central retinal artery occlusion in Wegener's granulomatosis: a diagnostic dilemma | PURPOSE
To report a case of central retinal artery occlusion (CRAO) in a patient with biopsy-verified Wegener's granulomatosis (WG) with positive C-ANCA.
METHODS
A 55-year-old woman presented with a 3-day history of acute painless bilateral loss of vision; she also complained of fever and weight loss. Examination showed a CRAO in the left eye and angiographically documented choroidal ischemia in both eyes.
RESULTS
The possibility of systemic vasculitis was not kept in mind until further studies were carried out; methylprednisolone pulse therapy was then started. Renal biopsy disclosed focal and segmental necrotizing vasculitis of the medium-sized arteries, supporting the diagnosis of WG, and cyclophosphamide pulse therapy was administered with gradual improvement, but there was no visual recovery.
CONCLUSION
CRAO as a presenting manifestation of WG, in the context of retinal vasculitis, is very uncommon, but WG should be kept in mind in the etiology of CRAO. This report shows the difficulty of diagnosing Wegener's granulomatosis: it requires a high index of suspicion, an accurate medical history, and repeated serological and histopathological examinations. It also emphasizes that inflammation of the arteries leads to irreversible retinal infarction, and visual loss may occur. |
The Targeting of Advertising | An important question that firms face in advertising is how to develop an effective media strategy. Major improvements in the quality of consumer information and the growth of targeted media vehicles allow firms to precisely target advertising to consumer segments within a market. This paper examines advertising strategy when competing firms can target advertising to different groups of consumers within a market. With targeted advertising, we find that firms advertise more to consumers who have a strong preference for their product than to comparison shoppers who can be attracted to the competition. Advertising less to comparison shoppers can be seen as a way for firms to endogenously increase differentiation in the market. In addition, targeting allows the firm to eliminate “wasted” advertising to consumers whose preferences do not match a product’s attributes. As a result, the targeting of advertising increases equilibrium profits. The model also demonstrates how advertising strategies are affected by firms’ ability to target pricing. Targeted advertising leads to higher profits, regardless of whether or not the firms can set targeted prices, and the targeting of advertising can be more valuable for firms in a competitive environment than the ability to target pricing. |
Spiking Neural Networks: Principles and Challenges | Over the last decade, various spiking neural network models have been proposed, along with a similarly increasing interest in spiking models of computation in computational neuroscience. The aim of this tutorial paper is to outline some of the common ground in state-of-the-art spiking neural networks as well as open challenges. |
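As a concrete illustration of the common ground among the models this tutorial surveys, here is a minimal Python simulation of a leaky integrate-and-fire neuron, the simplest widely used spiking unit; the parameter values are illustrative and not taken from the paper:

```python
import numpy as np

def lif_spike_train(I, dt=1e-4, tau=0.02, v_rest=-0.07,
                    v_thresh=-0.05, v_reset=-0.07, R=1e7):
    """Leaky integrate-and-fire neuron: tau * dv/dt = -(v - v_rest) + R*I.

    I -- array of input currents, one per time step (amperes)
    Returns the membrane-potential trace and the spike-time indices.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_t in enumerate(I):
        # forward-Euler integration of the membrane equation
        v += dt / tau * (-(v - v_rest) + R * i_t)
        if v >= v_thresh:        # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset          # hard reset after the spike
        trace.append(v)
    return np.array(trace), spikes

trace, spikes = lif_spike_train(np.full(1000, 3e-9))
print(len(spikes), "spikes in 100 ms")
```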
Effects of 12-month, 2000IU/day vitamin D supplementation on treatment naïve and vitamin D deficient Saudi type 2 diabetic patients | OBJECTIVES
To determine whether 12-month, 2000IU/day vitamin D supplementation improves cardiometabolic outcomes in treatment-naïve, vitamin D-deficient Saudi patients with type 2 diabetes mellitus (T2DM).
METHODS
This 12-month interventional study was conducted at primary health centers in 5 different residential areas in Riyadh, Saudi Arabia between January 2013 and January 2014. Forty-five Saudi T2DM patients were enrolled. Baseline anthropometrics, glycemic, and lipid profiles were measured and repeated after 6 and 12 months. All subjects were provided with 2000IU vitamin D supplements for one year.
RESULTS
Vitamin D deficiency was present in 46.7% of patients at baseline, 31.8% after 6 months, and 35.6% after 12 months, indicating an overall improvement in vitamin D status across the cohort. Insulin and homeostatic model assessment-insulin resistance (HOMA-IR) values after 12 months were significantly lower than at 6 months (p < 0.05), but comparable to baseline values. Mean triglyceride levels increased over time from baseline (1.9±0.01 mmol/l) to 12 months (2.1±0.2 mmol/l). This modest increase in serum triglycerides paralleled an insignificant decrease in circulating high-density lipoprotein cholesterol levels.
CONCLUSION
Twelve-month vitamin D supplementation of 2000IU per day in a cohort of treatment naïve Saudi patients with T2DM resulted in improvement of several cardiometabolic parameters including systolic blood pressure, insulin, and HOMA-IR. Further studies that include a placebo group are suggested to reinforce findings. |
Scrybe: A Blockchain Ledger for Clinical Trials | The recent popularity of cryptocurrencies has highlighted the versatility and applications of a decentralized, public blockchain. Blockchains provide a data structure that can guarantee both the integrity and non-repudiation of data, as well as provenance pertaining to such data. Our novel Lightweight Mining (LWM) algorithm provides these guarantees with minimal resource requirements. Our approach to blockchain-based data provenance, paired with the LWM algorithm, provides the legal and ethical framework for auditors to validate clinical trials, expediting research and saving lives and money in the process. The contributions of this paper are as follows: we explain how to adapt and apply a novel, blockchain-based provenance system to enhance clinical trial data integrity and non-repudiation; we explain the key features of the Scrybe system that enable this outcome; and we describe the system's resilience to denial-of-service attacks and repudiation. We conclude that Scrybe can provide a system and method for secure data provenance for clinical trials, consistent with the legal and ethical requirements for the lifecycle management of such data. |
HAPPY PEOPLE BECOME HAPPIER THROUGH KINDNESS: A COUNTING KINDNESSES INTERVENTION. | We examined the relationship between the character strength of kindness and subjective happiness (Study 1), and the effects of a counting kindnesses intervention on subjective happiness (Study 2). In Study 1, participants were 175 Japanese undergraduate students and in Study 2, participants were 119 Japanese women (71 in the intervention group and 48 in the control group). Results showed that: (a) Happy people scored higher on their motivation to perform, and their recognition and enactment of kind behaviors. (b) Happy people have more happy memories in daily life in terms of both quantity and quality. (c) Subjective happiness was increased simply by counting one's own acts of kindness for one week. (d) Happy people became more kind and grateful through the counting kindnesses intervention. Discussion centers on the importance of kindness in producing subjective happiness. |
Extrafine Beclomethasone/formoterol compared to Fluticasone/salmeterol Combination Therapy in COPD | BACKGROUND
The study evaluated the efficacy of beclomethasone dipropionate/formoterol fumarate (BDP/FF) extrafine combination versus fluticasone propionate/salmeterol (FP/S) combination in COPD patients.
METHODS
The trial was a 12-week, multicentre, randomised, double-blind, double-dummy study; 419 patients with moderate/severe COPD were randomised to BDP/FF 200/12 μg or FP/S 500/50 μg twice daily. The primary objective was to demonstrate equivalence between treatments in terms of Transition Dyspnoea Index (TDI) score and the superiority of BDP/FF in terms of the change in forced expiratory volume in the first second (FEV1) from pre-dose over the first 30 minutes. Secondary endpoints included lung function, symptom scores, symptom-free days, use of rescue medication, the St. George's Respiratory Questionnaire (SGRQ), the six-minute walking test and COPD exacerbations.
RESULTS
BDP/FF was equivalent to FP/S in terms of TDI score and superior in terms of FEV1 change from pre-dose (p < 0.001). There were no significant differences between treatments in secondary outcome measures, confirming overall comparability in terms of efficacy and tolerability. Moreover, a clinically relevant improvement (>4 units) in SGRQ was detected in the BDP/FF group only.
CONCLUSION
BDP/FF extrafine combination provides COPD patients with an equivalent improvement in dyspnoea and faster bronchodilation in comparison to FP/S.
TRIAL REGISTRATION
ClinicalTrials.gov: NCT01245569. |
Design of an Efficient X-Band Waveguide-Fed Microstrip Patch Array | The design and fabrication of a 10.5 GHz microstrip patch array fed by a waveguide is presented. The gain of this antenna is 29 dB and its efficiency is 65%. Commonly, the efficiency of conventional microstrip arrays at X-band is no more than 50%. This antenna demonstrates the ability to achieve very high efficiency at X-band with a simple structure. This is achieved by using a slotted waveguide to feed the planar array. To allow a symmetrical feed, the waveguide is a center-fed slotted waveguide with both ends shorted. The design procedure, which covers the waveguide-fed subarray structure and the coax-to-waveguide transition structure, is described in detail. The entire antenna was designed with the three-dimensional electromagnetic field simulation software CST Microwave Studio. Good agreement is achieved between measurement and simulation results. |
Barrier impact on organizational learning within complex organizations | Purpose – The purpose of this research is to examine the manner in which employees access, create and share information and knowledge within a complex supply chain with a view to better understanding how to identify and manage barriers which may inhibit such exchanges. Design/methodology/approach – An extensive literature review combined with an in-depth case study analysis identified a range of potential transfer barriers. These in turn were examined in terms of their consistency of impact by an end-to-end process survey conducted within an IBM facility. Findings – Barrier impact cannot be assumed to be uniform across the core processes of the organization. Process performance will be impacted upon in different ways and subject to varying degrees of influence by the transfer barriers. Barrier identification and management must take place at a process rather than at the organizational level. Research limitations/implications – The findings are based, in the main, on an extensive single company study. Although significant in terms of influencing both knowledge and information systems design and management the study/findings have still to be fully replicated across a range of public and private organizations. Originality/value – The deployment of generic information technology and business systems needs to be questioned if they have been designed and implemented to satisfy organizational rather than process needs. |
Using cognitive artifacts to understand distributed cognition | Studies of patient safety have identified gaps in current work including the need for research about communication and information sharing among healthcare providers. They have also encouraged the use of decision support tools to improve human performance. Distributed cognition is the shared awareness of goals, plans, and details that no single individual grasps. Cognitive artifacts are objects such as: schedules, display boards, lists, and worksheets that form part of a distributed cognition. Cognitive artifacts that are related to operating room (OR) scheduling include: the availabilities sheet, master schedule, OR graph, and OR board. All provide a "way in" to understand how teams in the acute care setting dynamically plan and manage the balance between demand for care and the resources available to provide it. This work has import for the way that information technology supports the organization, management, and use of healthcare resources. Better computer-supported cognitive artifacts will benefit patient safety by making teamwork processes, planning, communications, and resource management more resilient. |
A Performance Evaluation of Machine Learning-Based Streaming Spam Tweets Detection | The popularity of Twitter attracts more and more spammers. Spammers send unwanted tweets to Twitter users to promote websites or services, which are harmful to normal users. In order to stop spammers, researchers have proposed a number of mechanisms. The focus of recent works is on the application of machine learning techniques to Twitter spam detection. However, tweets are retrieved in a streaming way, and Twitter provides the Streaming API for developers and researchers to access public tweets in real time. A performance evaluation of existing machine learning-based streaming spam detection methods has been lacking. In this paper, we bridge the gap by carrying out a performance evaluation from three different aspects: data, features, and models. A large ground-truth dataset of over 600 million public tweets was created using a commercial URL-based security tool. For real-time spam detection, we further extracted 12 lightweight features for tweet representation. Spam detection was then transformed into a binary classification problem in the feature space, which can be solved by conventional machine learning algorithms. We evaluated the impact of different factors on spam detection performance, including the spam-to-nonspam ratio, feature discretization, training data size, data sampling, time-related data, and machine learning algorithms. The results show that streaming spam tweet detection is still a big challenge and that a robust detection technique should take into account all three aspects of data, features, and models. |
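A minimal sketch of the kind of pipeline evaluated here: lightweight per-tweet features feeding a conventional binary classifier. The feature matrix below is a random placeholder; the paper's 12 specific features and its dataset are not reproduced:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# X: one row per tweet with 12 lightweight features (e.g. account age,
# follower count, number of URLs...); y: 1 = spam, 0 = non-spam.
rng = np.random.default_rng(0)
X = rng.random((1000, 12))      # placeholder feature matrix
y = rng.integers(0, 2, 1000)    # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```

Varying the spam-to-nonspam ratio, training size, and classifier in such a loop reproduces the kind of factor analysis the paper performs.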
FoodLog: capture, analysis and retrieval of personal food images via web | With the increasing number of food images on the Internet, we have been developing a food-logging Web application with an automated analysis function. It can distinguish food images from other images, analyze the food balance, and visualize the log. In this paper, we demonstrate how performance can be improved by personalized models. Because our Web application has an interface for reviewing and correcting the food analysis results, the personalized models can be generated online. Experimental results using two hundred images showed that the extracted image feature vectors differ from user to user, but that the feature vectors and the food balance of each user are strongly correlated. As a result, the accuracy of the food balance estimation was improved from 37% to 42% on average by the personalized classifier. |
Review of the magnetocaloric effect in manganite materials | A thorough understanding of the magnetocaloric properties of existing magnetic refrigerant materials has been an important issue in magnetic refrigeration technology. This paper reviews a new class of magnetocaloric material, the ferromagnetic perovskite manganites (R1−xMxMnO3, where R = La, Nd, Pr and M = Ca, Sr, Ba, etc.). The nature of these materials with respect to their magnetocaloric properties has been analyzed and discussed systematically. A comparison of the magnetocaloric effect of the manganites with other materials is given. Promising manganites are nominated for a variety of large- and small-scale magnetic refrigeration applications in the temperature range of 100–375 K. It is believed that manganite materials, with their superior magnetocaloric properties and cheap materials-processing cost, will be the option of choice for future magnetic refrigeration technology. |
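For context, reviews of this kind typically quantify the magnetocaloric effect through the isothermal magnetic entropy change derived from magnetization data via the Maxwell relation; this is the standard textbook formulation, not a formula quoted from the paper:

```latex
\Delta S_M(T, H) \;=\; \int_0^{H} \left( \frac{\partial M(T, H')}{\partial T} \right)_{H'} \mathrm{d}H'
```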
Deep Convolution Networks for Compression Artifacts Reduction | Lossy compression introduces complex compression artifacts, particularly blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restore sharpened images that are accompanied by ringing effects. Inspired by the success of deep convolutional networks (DCN) on super-resolution [6], we formulate a compact and efficient network for seamless attenuation of different compression artifacts. To meet the speed requirement of real-world applications, we further accelerate the proposed baseline model by layer decomposition and the joint use of large-stride convolutional and deconvolutional layers. This also leads to a more general CNN framework that has a close relationship with the conventional Multi-Layer Perceptron (MLP). Finally, the modified network achieves a speedup of 7.5× with almost no performance loss compared to the baseline model. We also demonstrate that a deeper model can be effectively trained with features learned in a shallow network. Following a similar “easy to hard” idea, we systematically investigate three practical transfer settings and show the effectiveness of transfer learning in low-level vision problems. Our method shows performance superior to the state-of-the-art methods both on benchmark datasets and in a real-world use case. |
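A minimal PyTorch sketch of the acceleration idea named above: a large-stride convolution shrinks the feature maps, and a mirrored deconvolution (transposed convolution) restores resolution. Layer sizes are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class StrideDeconvSketch(nn.Module):
    """Illustrative large-stride conv + deconv pair for artifact removal."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # stride-2 convolution: feature extraction on half-size maps
            nn.Conv2d(1, 32, kernel_size=9, stride=2, padding=4),
            nn.ReLU(inplace=True),
            # cheap 1x1 "mapping" layer on the shrunken maps
            nn.Conv2d(32, 32, kernel_size=1),
            nn.ReLU(inplace=True),
            # stride-2 deconvolution restores the original resolution
            nn.ConvTranspose2d(32, 1, kernel_size=9, stride=2,
                               padding=4, output_padding=1),
        )

    def forward(self, x):
        return self.net(x)

y = StrideDeconvSketch()(torch.randn(1, 1, 64, 64))
print(y.shape)  # torch.Size([1, 1, 64, 64])
```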
The Semantic Web - ISWC 2003 | The Semantic Network, a component of the Unified Medical Language System® (UMLS), describes core biomedical knowledge consisting of semantic types and relationships. It is a well-established, semi-formal ontology in widespread use for over a decade. We expected to “publish” this ontology on the Semantic Web, using OWL, with relatively little effort. However, we ran into a number of problems concerning alternative interpretations of the SN notation and the inability to express some of the interpretations in OWL. We detail these problems as a cautionary tale to others planning to publish pre-existing ontologies on the Semantic Web, as a list of issues to consider when formally describing concepts in any ontology, and as a collection of criteria for evaluating alternative representations, which could form part of a methodology of ontology development. |
Novel zero-Voltage-transition PWM DC-DC converters | A new family of zero-voltage-switching (ZVS) pulsewidth-modulated (PWM) converters that uses a new ZVS-PWM switch cell is presented in this paper. Except for the auxiliary switch, all active and passive semiconductor devices in the ZVS-PWM converters operate with ZVS at turn-on and turn-off. The auxiliary switch operates with zero-current switching (ZCS) at turn-on and turn-off. Besides operating at constant frequency, these new converters have no overvoltage across the switches and no additional current stress on the main switch in comparison to their hard-switching converter counterpart. Auxiliary components rated at very small current are used. The principle of operation, theoretical analysis, and experimental results of the new ZVS-PWM boost converter, rated 1 kW and operating at 80 kHz, are provided in this paper to verify the performance of this new family of converters. |
A Study of Grayware on Google Play | While there have been various studies identifying and classifying Android malware, there is limited discussion of the broader class of apps that fall into a gray area. Mobile grayware is distinct from PC grayware due to differences in operating system properties. Due to mobile grayware's subjective nature, it is difficult to identify mobile grayware via program analysis alone. Instead, we hypothesize that enhancing analysis with text analytics can effectively reduce human effort when triaging grayware. In this paper, we design and implement heuristics for seven main categories of grayware. We then use these heuristics to simulate grayware triage on a large set of apps from Google Play. We then present the results of our empirical study, demonstrating that grayware is a clear problem. In doing so, we show how even relatively simple heuristics can quickly triage apps that take advantage of users in undesirable ways. |
Euler number of the compactified Jacobian and multiplicity of rational curves | In this paper we show that the Euler number of the compactified Jacobian of a rational curve C with locally planar singularities is equal to the multiplicity of the δ-constant stratum in the base of a semi-universal deformation of C. In particular, the multiplicity assigned by Yau, Zaslow and Beauville to a rational curve on a K3 surface S coincides with the multiplicity of the normalisation map in the moduli space of stable maps to S. Introduction: Let C be a reduced and irreducible projective curve with singular set Σ ⊂ C and let n : C̃ → C be its normalisation. The generalised Jacobian JC of C is an extension of JC̃ by an affine commutative group of dimension δ := dim H⁰(n∗(O_C̃)/O_C) = Σ_{p∈Σ} δ_p, the sum of the local delta invariants of the singular points. |
Zonisamide monotherapy in a multi-group clinic | OBJECTIVE
Reports on zonisamide monotherapy are limited despite favourable preliminary data, and typically restricted to tertiary referral centres. The goal of this study is to report clinical experience with zonisamide monotherapy in a large, multi-group clinic setting.
METHODS
We reviewed the charts of patients treated with zonisamide monotherapy in the Neurology Department of the Kelsey-Seybold Clinic (Houston, Texas) during an 18-month period. We analysed subgroups of patients who were naive to antiepileptic drug (AED) therapy (Group 1) and those who had previous exposure to AEDs (Group 2).
RESULTS
The study included 54 paediatric and adult patients with a variety of seizure types: 15 patients in Group 1 and 39 patients in Group 2. Mean maintenance zonisamide dosages in the two groups were similar (193 mg/day in Group 1 vs. 218 mg/day in Group 2). Thirty-eight patients (70.4%) continued zonisamide monotherapy, with 7 patients (13.0%) adding a second AED and 9 patients (16.7%) switching to a different drug. Of the 24 patients who became seizure free on zonisamide monotherapy, 11 were on the 100-mg initial dosage. Zonisamide monotherapy was well tolerated.
CONCLUSIONS
Zonisamide monotherapy is safe and effective for a variety of seizure types and may be appropriate as first-line therapy in some cases. |
Efficient and Effective Ultrasound Image Analysis Scheme for Thyroid Nodule Detection | Ultrasound imaging of thyroid gland provides the ability to acquire valuable information for medical diagnosis. This study presents a novel scheme for the analysis of longitudinal ultrasound images aiming at efficient and effective computer-aided detection of thyroid nodules. The proposed scheme involves two phases: a) application of a novel algorithm for the detection of the boundaries of the thyroid gland and b) detection of thyroid nodules via classification of Local Binary Pattern feature vectors extracted only from the area between the thyroid boundaries. Extensive experiments were performed on a set of B-mode thyroid ultrasound images. The results show that the proposed scheme is a faster and more accurate alternative for thyroid ultrasound image analysis than the conventional, exhaustive feature extraction and classification scheme. |
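A minimal sketch of the feature side of phase (b): Local Binary Pattern histograms computed over image blocks, which would then feed a standard classifier. The block size and LBP parameters are illustrative, not the paper's settings:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(block, P=8, R=1.0):
    """Uniform-LBP histogram of one grayscale image block."""
    codes = local_binary_pattern(block, P, R, method="uniform")
    n_bins = P + 2                    # P+1 uniform patterns + "non-uniform"
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins),
                           density=True)
    return hist

# Histograms from blocks lying between the detected thyroid boundaries
# would be classified as nodule vs. non-nodule, e.g. by an SVM.
print(lbp_histogram(np.random.rand(32, 32)).shape)  # (10,)
```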
Automated Diagnosis of Plus Disease in Retinopathy of Prematurity Using Deep Convolutional Neural Networks. | Importance
Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide. The decision to treat is primarily based on the presence of plus disease, defined as dilation and tortuosity of retinal vessels. However, clinical diagnosis of plus disease is highly subjective and variable.
Objective
To implement and validate an algorithm based on deep learning to automatically diagnose plus disease from retinal photographs.
Design, Setting, and Participants
A deep convolutional neural network was trained using a data set of 5511 retinal photographs. Each image was previously assigned a reference standard diagnosis (RSD) based on consensus of image grading by 3 experts and clinical diagnosis by 1 expert (ie, normal, pre-plus disease, or plus disease). The algorithm was evaluated by 5-fold cross-validation and tested on an independent set of 100 images. Images were collected from 8 academic institutions participating in the Imaging and Informatics in ROP (i-ROP) cohort study. The deep learning algorithm was tested against 8 ROP experts, each of whom had more than 10 years of clinical experience and more than 5 peer-reviewed publications about ROP. Data were collected from July 2011 to December 2016. Data were analyzed from December 2016 to September 2017.
Exposures
A deep learning algorithm trained on retinal photographs.
Main Outcomes and Measures
Receiver operating characteristic analysis was performed to evaluate performance of the algorithm against the RSD. Quadratic-weighted κ coefficients were calculated for ternary classification (ie, normal, pre-plus disease, and plus disease) to measure agreement with the RSD and 8 independent experts.
Results
Of the 5511 included retinal photographs, 4535 (82.3%) were graded as normal, 805 (14.6%) as pre-plus disease, and 172 (3.1%) as plus disease, based on the RSD. Mean (SD) area under the receiver operating characteristic curve statistics were 0.94 (0.01) for the diagnosis of normal (vs pre-plus disease or plus disease) and 0.98 (0.01) for the diagnosis of plus disease (vs normal or pre-plus disease). For diagnosis of plus disease in an independent test set of 100 retinal images, the algorithm achieved a sensitivity of 93% with 94% specificity. For detection of pre-plus disease or worse, the sensitivity and specificity were 100% and 94%, respectively. On the same test set, the algorithm achieved a quadratic-weighted κ coefficient of 0.92 compared with the RSD, outperforming 6 of 8 ROP experts.
Conclusions and Relevance
This fully automated algorithm diagnosed plus disease in ROP with comparable or better accuracy than human experts. This has potential applications in disease detection, monitoring, and prognosis in infants at risk of ROP. |
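For reference, the quadratic-weighted κ reported above penalizes disagreements by the squared distance between the ordinal grades (normal < pre-plus < plus). With scikit-learn it is one call; the label vectors below are illustrative:

```python
from sklearn.metrics import cohen_kappa_score

# 0 = normal, 1 = pre-plus disease, 2 = plus disease (illustrative data)
rsd       = [0, 0, 1, 2, 1, 0, 2, 2]
algorithm = [0, 0, 1, 2, 2, 0, 2, 1]

kappa = cohen_kappa_score(rsd, algorithm, weights="quadratic")
print(f"quadratic-weighted kappa = {kappa:.2f}")
```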
The International Trade as the Sole Engine of Growth for an Economy | Can international trade act as the sole engine of growth for an economy? If yes, what are the mechanisms through which trade operates in transmitting permanent growth? This paper answers these questions with two simple two-country models, in which only one country enjoys sustained growth in autarky. The models differ in the assumptions on technical change, which is either labour- or capital-augmenting. In both cases, the stagnant economy imports growth by trading. In the first model, growth is transmitted because of permanent increases in the trade volume. In the alternative framework, the stagnant economy imports sustained growth because its terms of trade permanently improve. |
On reverse-engineering the KUKA Robot Language | Most commercial manufacturers of industrial robots require their robots to be programmed in a proprietary language tailored to the domain – a typical domain-specific language (DSL). However, these languages oftentimes suffer from shortcomings such as controller-specific design, limited expressiveness and a lack of extensibility. For that reason, we developed the extensible Robotics API for programming industrial robots on top of a general-purpose language. Although being a very flexible approach to programming industrial robots, a fully-fledged language can be too complex for simple tasks. Additionally, legacy support for code written in the original DSL has to be maintained. For these reasons, we present a lightweight implementation of a typical robotic DSL, the KUKA Robot Language (KRL), on top of our Robotics API. This work deals with the challenges in reverse-engineering the language and mapping its specifics to the Robotics API. We introduce two different approaches of interpreting and executing KRL programs: tree-based and bytecode-based interpretation. |
Public Review for Knowledge-Defined Networking | The research community has considered in the past the application of Artificial Intelligence (AI) techniques to control and operate networks. A notable example is the Knowledge Plane proposed by D. Clark et al. However, such techniques have not been extensively prototyped or deployed in the field yet. In this paper, we explore the reasons for the lack of adoption and posit that the rise of two recent paradigms, Software-Defined Networking (SDN) and Network Analytics (NA), will facilitate the adoption of AI techniques in the context of network operation and control. We describe a new paradigm that accommodates and exploits SDN, NA and AI, and provide use-cases that illustrate its applicability and benefits. We also present simple experimental results that support, for some relevant use-cases, its feasibility. We refer to this new paradigm as Knowledge-Defined Networking (KDN). |
QFN challenges: Second bond improvement to eliminate the weak stitch (fish tail) failure mechanism on pre-plated lead frame | Device technology has been moving towards complex nano-scale ICs, with demands for lower cost, smaller size and better thermal and electrical performance. One of the most marketable packages is the Quad Flat No-Lead (QFN) package. Due to the high demand for miniaturization of electronic products, QFN development has become more promising, with lead frame designs featuring the half edge, cheaper tape, and shrinking package sizes to achieve more units per lead frame (cost saving) [1]. Improvements in lead frame design, such as the metal tie bar and half edge features, remain the main challenges for the QFN package. Reducing the size of the metal tie bar speeds up the package singulation process, whereas the half edge is designed to lock the mold compound and reduce delamination. This paper discusses how critical wire bonding parameters, capillary design and environmental conditions interact to produce unstable leads (second bond failures). During the initial evaluation of the new package SOT1261 with a rough PPF lead frame, several short tails and fish tails were observed on the wedge bond when the current parameter settings, qualified in other packages with the same wire size (18 μm Au wire), were applied. These problems did not surface in earlier qualified devices, mainly due to differences in second bond parameter robustness, capillary designs, lead frame design changes, die packages, lead frame batches and contamination levels. One main root cause identified was a second bond parameter setting that is not robust enough for the flimsy lead frame. A new bonding methodology, combining a low base ultrasonic and high force setting with a scrubbing mechanism, was applied to eliminate the fish tail bond and reduce short tail occurrence on the wedge. Wire bond parameters were optimized to achieve zero fish tails and wedge pull readings >4.0 gf. Destructive tests such as the wedge pull test were used to assess bonding quality. Failure modes were analyzed using a high-power optical microscope and a Scanning Electron Microscope (SEM). By examining all possible root causes and identifying how the factors interact, Design of Experiments (DOE) work was carried out and good solutions were implemented. |
The Need for Accurate Geometric and Radiometric Corrections of Drone-Borne Hyperspectral Data for Mineral Exploration: MEPHySTo - A Toolbox for Pre-Processing Drone-Borne Hyperspectral Data | Drone-borne hyperspectral imaging is a new and promising technique for fast and precise acquisition, as well as delivery, of high-resolution hyperspectral data to a large variety of end-users. Drones can overcome the scale gap between field and air-borne remote sensing, thus providing high-resolution and multi-temporal data. They are easy to use, flexible and deliver data at cm-scale resolution. So far, however, drone-borne imagery has been used prominently and successfully almost solely in precision agriculture and photogrammetry. Drone technology currently relies mainly on structure-from-motion photogrammetry, aerial photography and agricultural monitoring. Recently, a few hyperspectral sensors became available for drones, but complex geometric and radiometric effects complicate their use for geology-related studies. Using two examples, we first show that precise corrections are required for any geological mapping. We then present a processing toolbox for frame-based hyperspectral imaging systems adapted for the complex correction of drone-borne hyperspectral imagery. The toolbox performs sensor- and platform-specific geometric distortion corrections. Furthermore, a topographic correction step is implemented to correct for rough terrain surfaces. We recommend the c-factor algorithm for geological applications. To our knowledge, we demonstrate for the first time the applicability of the corrected dataset for lithological mapping and mineral exploration. |
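The c-factor algorithm recommended above is a standard topographic correction; in its usual form (not quoted from the paper) the corrected radiance is

```latex
L_{\mathrm{corr}} \;=\; L \cdot \frac{\cos\theta_z + c}{\cos i + c},
\qquad c = \frac{b}{m},
```

where θ_z is the solar zenith angle, i the local illumination angle, and b and m are the intercept and slope of a linear regression of L against cos i.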
Joint sizing and adaptive independent gate control for FinFET circuits operating in multiple voltage regimes using the logical effort method | FinFET has been proposed as an alternative for bulk CMOS in current and future technology nodes due to more effective channel control, reduced random dopant fluctuation, high ON/OFF current ratio, lower energy consumption, etc. Key characteristics of FinFET operating in the sub/near-threshold region are very different from those in the strong-inversion region. This paper first introduces an analytical transregional FinFET model with high accuracy in both sub- and near-threshold regimes. Next, the paper extends the well-known and widely-adopted logical effort delay calculation and optimization method to FinFET circuits operating in multiple voltage (sub/near/super-threshold) regimes. More specifically, a joint optimization of gate sizing and adaptive independent gate control is presented and solved in order to minimize the delay of FinFET circuits operating in multiple voltage regimes. Experimental results on a 32nm Predictive Technology Model for FinFET demonstrate the effectiveness of the proposed logical effort-based delay optimization framework. |
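For readers unfamiliar with the method, logical effort expresses normalized gate delay as d = g·h + p (logical effort times electrical effort, plus parasitic delay); a minimal Python sketch with illustrative values follows:

```python
def stage_delay(g, h, p):
    """Logical-effort delay of one stage: d = g*h + p.

    g -- logical effort of the gate (1.0 for a reference inverter)
    h -- electrical effort, C_out / C_in
    p -- parasitic delay
    All delays are in units of tau, the basic inverter delay.
    """
    return g * h + p

def path_delay(stages):
    """Total delay of a path given (g, h, p) per stage."""
    return sum(stage_delay(g, h, p) for g, h, p in stages)

# Example: a 3-stage inverter chain driving a 64x load is fastest when
# each stage carries equal effort h = 64 ** (1/3) = 4.
h = 64 ** (1 / 3)
print(path_delay([(1.0, h, 1.0)] * 3))  # 15.0 delay units
```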
Evolutionary Design of Gate-Level Polymorphic Digital Circuits | A method for the evolutionary design of polymorphic digital combinational circuits is proposed. These circuits are able to perform different functions (e.g. to switch between the adder and multiplier) only as a consequence of the change of a sensitive variable, which can be a power supply voltage, temperature etc. However, multiplexing of standard solutions is not utilized. The evolved circuits exhibit a unique structure composed of multifunctional polymorphic gates considered as building blocks instead. In many cases the area-efficient solutions were discovered for typical tasks of the digital design. We demonstrated that it is useful to combine polymorphic gates and conventional gates in order to obtain the required functionality. |
Modelling Interaction of Sentence Pair with Coupled-LSTMs | Recently, there is rising interest in modelling the interactions of two sentences with deep neural networks. However, most of the existing methods encode two sequences with separate encoders, in which a sentence is encoded with little or no information from the other sentence. In this paper, we propose a deep architecture to model the strong interaction of sentence pair with two coupled-LSTMs. Specifically, we introduce two coupled ways to model the interdependences of two LSTMs, coupling the local contextualized interactions of two sentences. We then aggregate these interactions and use a dynamic pooling to select the most informative features. Experiments on two very large datasets demonstrate the efficacy of our proposed architecture and its superiority to state-of-the-art methods. |
Validation of Expression Patterns for Nine miRNAs in 204 Lymph-Node Negative Breast Cancers | INTRODUCTION
Although lymph node negative (LN-) breast cancer patients have a good 10-year survival rate (∼85%), most of them still receive adjuvant therapy, while only some benefit from it. More accurate prognostication of LN- breast cancer patients may reduce over- and under-treatment. Until now, proliferation has been the strongest prognostic factor for LN- breast cancer patients. Small molecules called microRNAs (miRNAs) have opened a new window for prognostic markers, therapeutic targets and/or therapeutic components. Previously it has been shown that miR-18a/b, miR-25, miR-29c and miR-106b correlate with high proliferation.
METHODS
The current study validates nine miRNAs (miR-18a/b, miR-25, miR-29c, miR-106b, miR-375, miR-424, miR-505 and let-7b) significantly correlated with established prognostic breast cancer biomarkers. Total RNA was isolated from 204 formaldehyde-fixed, paraffin-embedded (FFPE) LN- breast cancers and analyzed with quantitative real-time polymerase chain reaction (qPCR). An independent t-test was used to detect significant correlations between miRNA expression levels and the different clinicopathological features of breast cancer.
RESULTS
Strong and significant associations were observed for high expression of miR-18a/b, miR-106b, miR-25 and miR-505 to high proliferation, oestrogen receptor negativity and cytokeratin 5/6 positivity. High expression of let-7b, miR-29c and miR-375 was detected in more differentiated tumours. Kaplan-Meier survival analysis showed that patients with high miR-106b expression had an 81% survival rate vs. 95% (P = 0.004) for patients with low expression.
CONCLUSION
High expression of miR-18a/b is strongly associated with basal-like breast cancer features, while miR-106b can identify a group at higher risk of developing distant metastases within the subgroup of Her2 negatives. Furthermore, miR-106b can identify a group of patients with 100% survival within the otherwise considered high-risk group of patients with high proliferation. Using miR-106b as a biomarker in conjunction with the mitotic activity index could thereby possibly save 18% of the patients with high proliferation from overtreatment. |
A heuristic improvement of the Bellman-Ford algorithm | We describe a new shortest paths algorithm. Our algorithm achieves the same O(nm) worst-case time bound as the Bellman-Ford algorithm but is superior in practice. 1. Introduction: The Bellman-Ford algorithm [1, 4, 7] is a classical algorithm for the single-source shortest paths problem. The algorithm runs in O(nm) time on a graph with n nodes and m arcs. This is the best currently known strongly polynomial bound for the problem (see [6] for the best weakly polynomial bound). In practice, however, the Bellman-Ford algorithm is usually outperformed by the deque algorithm of D'Esopo-Pape [9] and by the two-queue algorithm of Pallottino [8]. The worst-case time bounds for these algorithms, however, are worse than those for the Bellman-Ford algorithm. The deque algorithm may take exponential time in the worst case [10] and the two-queue algorithm may take Ω(n²m) time. We propose a new topological-scan algorithm for the shortest paths problem. The algorithm combines, in a natural way, the ideas of the Bellman-Ford algorithm with those for finding
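For reference, a minimal Python implementation of the baseline Bellman-Ford relaxation loop discussed above (the early-exit check is a common practical touch, not the paper's heuristic):

```python
def bellman_ford(n, arcs, source):
    """Single-source shortest paths in O(nm) time.

    n      -- number of nodes, labelled 0..n-1
    arcs   -- list of (u, v, weight) tuples
    source -- start node
    Returns the distance list, or None if a negative cycle is reachable.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):              # at most n-1 relaxation passes
        changed = False
        for u, v, w in arcs:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                 # early exit once distances settle
            return dist
    for u, v, w in arcs:                # one extra pass: negative cycle?
        if dist[u] + w < dist[v]:
            return None
    return dist

print(bellman_ford(3, [(0, 1, 4), (0, 2, 7), (1, 2, 2)], 0))  # [0, 4, 6]
```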
Neuropsychological and Quality of Life Changes Following Unilateral Thalamic Deep Brain Stimulation in Parkinson's Disease: A One-Year Follow-up | The long-term neuropsychological and quality of life (QOL) outcomes of unilateral thalamic deep brain stimulation (DBS) in patients with intractable Parkinson's disease (PD) have not heretofore been described. Six patients diagnosed with PD underwent unilateral DBS implantation into a verified thalamic VIM nucleus target. Participants completed presurgical neuropsychological evaluation and follow-up assessment at approximately one year postsurgery. Compared to their presurgical scores, PD patients exhibited significant improvement on measures of conceptualization, verbal memory, emotional adjustment, and QOL at one-year follow-up. A few nominal declines were observed across the battery of tests. These data provide preliminary support for the long-term neurocognitive safety and QOL improvements following thalamic stimulation in patients with PD. |
Influence of water on the circulation of the West Nile Virus in horses in Southern France | Background
West Nile Virus (WNV) affects humans and horses, potentially causing severe neurological manifestations. Recent outbreaks of West Nile fever in horses were reported in Camargue (2000, 2004), Var (2003) and Pyrénées Orientales (2006). The circulation of this virus is strongly influenced by environmental conditions. This study aimed at explaining the circulation of WNV in horses by quantifying water bodies around equine stables using Landsat images. |
Full 3D touchless fingerprint recognition: Sensor, database and baseline performance | One of the fields that still today remains largely unexplored in biometrics is 3D fingerprint recognition. This gap is mainly explained by the lack of scanners capable of acquiring, in a touchless, fast, reliable and repeatable way, accurate 3D spatial models of fingerprints. As such, full 3D fingerprint data with which to conduct research and advance this field is almost nonexistent. If such an acquisition process were possible, it could represent the beginning of a real paradigm shift in the way fingerprint recognition is performed. The present paper is a first promising step towards the fascinating challenge of 3D fingerprint acquisition and recognition. It presents a new full 3D touchless fingerprint scanner, a new database with 1,000 3D fingerprint models, a new segmentation method based on the additional spatial information provided by the models, and initial baseline verification results. |
Modelling of Boil-Off Gas in LNG Tanks: A Case Study | This paper focuses on the effect of pressure and heat leakages on Boil-off Gas (BOG) in Liquefied Natural Gas (LNG) tanks. The Lee-Kesler-Plocker (LKP) and the Starling modified Benedict-Webb-Rubin (BWRS) empirical models were used to simulate the compressibility factor, enthalpy and hence heat leakage at various pressures to determine the factors that affect the BOG in typical LNG tanks of different capacities. Using case study data, the heat leakage of 140,000kl, 160,000kl, 180,000kl and 200,000kl LNG tanks was analyzed using the LKP and BWRS models. The heat leakage of LNG tanks depends on the structure of the tanks, and small tanks lose more heat to the environment due to their large surface-area-to-volume ratio. As the operating pressure was dropped to 200mbar, all four LNG tanks' BOG levels reached 0.05vol%/day. In order to satisfy the BOG design requirement, the operating pressure of the four large LNG tanks in the case study was maintained above 200mbar. Thus, the operating pressure affects BOG in LNG tanks, but this effect is limited at extremely high operating pressures. An attempt was made to determine the relationship between the compositions of LNG and BOG, one being combustible and the other non-combustible gases. The main component of the combustible gas was methane, and nitrogen was the main non-combustible gas. The relationship between BOG and methane composition was that, as the methane fraction in the LNG increases, the BOG volume also increases. In general, results showed a direct correlation between BOG and operating pressure. The study also found that larger LNG tanks have less BOG; however, as the operating pressure is increased, the differences in the quantity of BOG among the four tanks decrease. |
Information and Influence Propagation in Social Networks | Research on social networks has exploded over the last decade. To a large extent, this has been fueled by the spectacular growth of social media and online social networking sites, which continue growing at a very fast pace, as well as by the increasing availability of very large social network datasets for purposes of research. A rich body of this research has been devoted to the analysis of the propagation of information, influence, innovations, infections, practices and customs through networks. Can we build models to explain the way these propagations occur? How can we validate our models against any available real datasets consisting of a social network and propagation traces that occurred in the past? These are just some questions studied by researchers in this area. Information propagation models find applications in viral marketing, outbreak detection, finding key blog posts to read in order to catch important stories, finding leaders or trendsetters, information feed ranking, etc. A number of algorithmic problems arising in these applications have been abstracted and studied extensively by researchers under the garb of influence maximization. This book starts with a detailed description of well-established diffusion models, including the independent cascade model and the linear threshold model, that have been successful at explaining propagation phenomena. We describe their properties as well as numerous extensions to them, introducing aspects such as competition, budget, and time-criticality, among many others. We delve deep into the key problem of influence maximization, which selects key individuals to activate in order to influence a large fraction of a network. Influence maximization in classic diffusion models including both the independent cascade and the linear threshold models is computationally intractable, more precisely #P-hard, and we describe several approximation algorithms and scalable heuristics that have been proposed in the literature. Finally, we also deal with key issues that need to be tackled in order to turn this research into practice, such as learning the strength with which individuals in a network influence each other, as well as the practical aspects of this research including the availability of datasets and software tools for facilitating research. We conclude with a discussion of various research problems that remain open, both from a technical perspective and from the viewpoint of transferring the results of research into industry strength applications. |
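As a concrete anchor for the diffusion models described above, a minimal Monte-Carlo run of the independent cascade model in Python (the graph encoding and the uniform edge probability are illustrative simplifications):

```python
import random

def independent_cascade(graph, seeds, p=0.1):
    """One stochastic spread under the independent cascade model.

    graph -- dict mapping each node to a list of out-neighbours
    seeds -- iterable of initially active nodes
    p     -- activation probability per edge (assumed uniform here;
             in general each edge (u, v) carries its own p_uv)
    Returns the set of nodes active when the cascade dies out.
    """
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                # each newly active node gets one activation attempt
                # per inactive neighbour
                if v not in active and random.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

# Influence spread of a seed set is estimated by averaging |active|
# over many such runs -- the core subroutine of influence maximization.
g = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
print(len(independent_cascade(g, {0}, p=0.5)))
```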
Patients Online Registration System: Feasibility and Perceptions | Aim: This study attempts to measure patient and Registration and Admission (R&A) staff satisfaction levels towards the Traditional Queuing Method (TQM) in comparison with a proposed Online Registration System (ORS). This study also investigates patients’ perceptions of the ORS and the feasibility and acceptance of the R&A staff in a healthcare organization. Materials and Methods: A stratified random sampling technique was used to distribute 385 questionnaires in the outpatient registration area to gather indicative information and perspectives. Additionally, eleven face-to-face semi-structured interviews with front-line hospital workers in the R&A department were conducted, using a thematic content analysis approach to analyze the contents and produce results. In order for the researcher to gain a direct understanding of the registration processes and activities and a better understanding of the patients’ behaviors and attitudes toward them, a non-participant observation approach was conducted, in which observational notes were taken and then analyzed. Results: It was found that most of the outpatient population (patients and registration staff) prefer the ORS for a range of reasons including time consumption, cost benefit, patient comfort, data sensitivity, ease of use, accuracy, and fewer errors. On the other hand, around 10% of them chose to continue with the TQM. Their reasons ranged from the unavailability of computer devices or internet connections to their educational backgrounds or physical disabilities. Computing device and internet availability proved not to be an issue for the successful implementation of the ORS, as most participants (91%) confirmed having an internet connection or a device to access the ORS. Conclusion: Since more than half of the participating patients were unhappy with the TQM at registration desks (59.7%), this dissatisfaction should be addressed by an ORS implementation that would reduce waiting time, enhance the level of attention, and improve services from frontline staff toward patients’ care. |
Simple Algorithmic Theory of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes | In this summary of previous work, I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively more “beautiful.” Curiosity is the desire to create or discover more non-random, non-arbitrary, “truly novel,” regular data that allows for compression progress because its regularity was not yet known. This drive maximizes “interestingness,” the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and recent artificial systems. |
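The abstract's central definition can be restated compactly: if B(t) denotes the subjective beauty (compressibility) of the data under the observer's current model at time t, then interestingness is its first derivative, the slope of the learning curve (a paraphrase in standard notation, not the author's own formula):

```latex
I(t) \;=\; \frac{\mathrm{d}}{\mathrm{d}t}\, B(t)
```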
Tree Edit Distance Learning via Adaptive Symbol Embeddings | Metric learning has the aim to improve classification accuracy by learning a distance measure which brings data points from the same class closer together and pushes data points from different classes further apart. Recent research has demonstrated that metric learning approaches can also be applied to trees, such as molecular structures, abstract syntax trees of computer programs, or syntax trees of natural language, by learning the cost function of an edit distance, i.e. the costs of replacing, deleting, or inserting nodes in a tree. However, learning such costs directly may yield an edit distance which violates metric axioms, is challenging to interpret, and may not generalize well. In this contribution, we propose a novel metric learning approach for trees which we call embedding edit distance learning (BEDL) and which learns an edit distance indirectly by embedding the tree nodes as vectors, such that the Euclidean distance between those vectors supports class discrimination. We learn such embeddings by reducing the distance to prototypical trees from the same class and increasing the distance to prototypical trees from different classes. In our experiments, we show that BEDL improves upon the state-of-the-art in metric learning for trees on six benchmark data sets, ranging from computer science over biomedical data to a natural-language processing data set containing over 300,000 nodes. |
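A minimal sketch of the core idea: edit costs induced by symbol embeddings, so that replacement cost is the Euclidean distance between embedded symbols and deletion/insertion compare against an embedded gap symbol. The symbols, vectors, and the gap marker '-' are illustrative, not the paper's:

```python
import numpy as np

def make_edit_cost(embedding):
    """Edit-cost function induced by node embeddings: because costs are
    Euclidean distances between vectors, symmetry and the triangle
    inequality (metric axioms) hold by construction."""
    def cost(x, y):
        return float(np.linalg.norm(embedding[x] - embedding[y]))
    return cost

emb = {"a": np.array([0.0, 1.0]),
       "b": np.array([1.0, 0.0]),
       "-": np.zeros(2)}          # '-' plays the role of the gap symbol
cost = make_edit_cost(emb)
print(cost("a", "b"))   # replacement cost
print(cost("a", "-"))   # deletion cost of 'a'
```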
Institutions as the Fundamental Cause of Long-Run Growth | This paper develops the empirical and theoretical case that differences in economic institutions are the fundamental cause of differences in economic development. We first document the empirical importance of institutions by focusing on two 'quasi-natural experiments' in history, the division of Korea into two parts with very different economic institutions and the colonization of much of the world by European powers starting in the fifteenth century. We then develop the basic outline of a framework for thinking about why economic institutions differ across countries. Economic institutions determine the incentives of and the constraints on economic actors, and shape economic outcomes. As such, they are social decisions, chosen for their consequences. Because different groups and individuals typically benefit from different economic institutions, there is generally a conflict over these social choices, ultimately resolved in favor of groups with greater political power. The distribution of political power in society is in turn determined by political institutions and the distribution of resources. Political institutions allocate de jure political power, while groups with greater economic might typically possess greater de facto political power. We therefore view the appropriate theoretical framework as a dynamic one with political institutions and the distribution of resources as the state variables. These variables themselves change over time because prevailing economic institutions affect the distribution of resources, and because groups with de facto political power today strive to change political institutions in order to increase their de jure political power in the future. Economic institutions encouraging economic growth emerge when political institutions allocate power to groups with interests in broad-based property rights enforcement, when they create effective constraints on power-holders, and when there are relatively few rents to be captured by power-holders. We illustrate the assumptions, the workings and the implications of this framework using a number of historical examples. |
Temporally localized contributions to measures of large-scale heart rate variability. | The purpose of this work was to determine the temporal origins of the standard deviation of successive 5-min mean heart period sequences (SDANN) and the power of the ultralow-frequency (ULF) spectral band (<0.0033 Hz). We hypothesized that SDANN and ULF might have their origins in changes in human activity rather than slow oscillatory rhythms. Heart period sequences were obtained from 24-h Holter electrocardiograms of 10 healthy ambulatory subjects. There was no evidence of any persistent oscillation within the ULF band. Using moving 4-h windows in short-time Fourier transforms, we showed that the amplitude of ULF fluctuated markedly, particularly during times bordering sleep. The local ULF amplitude correlated (r = 0.59 ± 0.09) with large-scale changes in heart period quantified with 2- and 4-h wavelet transforms. Local SDANN also fluctuated, mainly around times of sleep. Although the 24-h SDANN and ULF values correlated highly, there was little correlation between their temporal distributions (r = 0.10 ± 0.25). The temporal distributions of measures of long-range heart period variability suggest that they reflect changes in human activity levels. |
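For reference, SDANN is just the standard deviation of the means of successive 5-minute heart-period segments; a minimal numpy sketch (the RR-interval input format is an assumption):

```python
import numpy as np

def sdann(rr_ms, window_s=300):
    """SDANN: std. dev. of successive 5-min mean heart periods.

    rr_ms -- 1-D array of RR intervals in milliseconds, in temporal
             order (assumed input format)
    """
    t = np.cumsum(rr_ms) / 1000.0           # elapsed time in seconds
    segment = (t // window_s).astype(int)   # 5-min segment index per beat
    means = [rr_ms[segment == k].mean() for k in np.unique(segment)]
    return float(np.std(means))

print(sdann(np.random.normal(800, 50, 20000)))  # synthetic RR series
```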
Ontological User Profile Modeling for Context-Aware Application Personalization | Existing context-aware adaptation techniques are limited in their support for user personalization. There is relatively little research on adaptive user modeling for user applications in the emerging areas of mobile and pervasive computing. This paper describes the creation of a User Profile Ontology for context-aware application personalization within mobile environments. We analyze users’ behavior and characterize users’ needs for context-aware applications. Special emphasis is placed on the ontological modeling of dynamic components for use in adaptable applications. We illustrate the use of the model in the context of a case study, focusing on providing personalized services to older people via smart-device technologies. |
On the Use of Soft-Decision Error-Correction Codes in NAND Flash Memory | As technology continues to scale down, NAND Flash memory has been increasingly relying on error-correction codes (ECCs) to ensure overall data storage integrity. Although advanced ECCs such as low-density parity-check (LDPC) codes can provide significantly stronger error-correction capability than the BCH codes used in current practice, their decoding requires soft-decision log-likelihood ratio (LLR) information. This results in two critical issues. First, accurate calculation of LLRs demands fine-grained memory-cell sensing, which tends to incur implementation overhead and access latency penalties; hence, it is critical to minimize the fine-grained memory sensing precision. Second, accurate calculation of LLRs also demands the availability of a memory-cell threshold-voltage distribution model. As the major source of memory-cell threshold-voltage distribution distortion, cell-to-cell interference must be carefully incorporated into the model. However, these two critical issues have not yet been addressed in the open literature. This paper attempts to address them. We derive mathematical formulations to approximately model the threshold-voltage distribution of memory cells in the presence of cell-to-cell interference, based on which the calculation of LLRs is mathematically formulated. This paper also proposes a nonuniform memory sensing strategy to reduce the memory sensing precision and, thus, sensing latency while still maintaining good error-correction performance. In addition, we investigate these design issues under the scenario where we can also sense interfering cells and hence explicitly estimate cell-to-cell interference strength. We carry out extensive computer simulations to demonstrate the effectiveness and involved tradeoffs, assuming the use of LDPC codes in 2-bits/cell NAND Flash memory. |
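For intuition, under a purely Gaussian threshold-voltage model (ignoring the cell-to-cell interference the paper incorporates), the LLR of a bit whose cell was sensed into a voltage region is the log-ratio of that region's probability under each bit hypothesis; a minimal sketch:

```python
import numpy as np
from scipy.stats import norm

def llr_from_sensing(v_low, v_high, mu0, sigma0, mu1, sigma1):
    """LLR of one bit given V_th was sensed to lie in [v_low, v_high).

    (mu0, sigma0) -- Gaussian V_th model under hypothesis bit = 0
    (mu1, sigma1) -- Gaussian V_th model under hypothesis bit = 1
    Equal priors assumed: LLR = log P(region | 0) / P(region | 1).
    """
    p0 = norm.cdf(v_high, mu0, sigma0) - norm.cdf(v_low, mu0, sigma0)
    p1 = norm.cdf(v_high, mu1, sigma1) - norm.cdf(v_low, mu1, sigma1)
    eps = 1e-30                        # guard against log(0)
    return float(np.log((p0 + eps) / (p1 + eps)))

# A finer (nonuniform) grid of sensing boundaries yields more accurate
# LLRs at the cost of more sensing operations -- the trade-off studied.
print(llr_from_sensing(1.0, 1.2, mu0=2.5, sigma0=0.35, mu1=1.0, sigma1=0.3))
```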
Effect of Patiromer on Serum Potassium Level in Patients With Hyperkalemia and Diabetic Kidney Disease: The AMETHYST-DN Randomized Clinical Trial | DESIGN, SETTING, AND PARTICIPANTS Phase 2, multicenter, open-label, dose-ranging, randomized clinical trial (AMETHYST-DN), conducted at 48 sites in Europe from June 2011 to June 2013, evaluating patiromer in 306 outpatients with type 2 diabetes (estimated glomerular filtration rate, 15 to <60 mL/min/1.73 m2 and serum potassium level >5.0 mEq/L). All patients received RAAS inhibitors prior to and during study treatment.
Fast Nonnegative Matrix/Tensor Factorization Based on Low-Rank Approximation | Nonnegative matrix factorization (NMF) algorithms often suffer from slow convergence speed due to the nonnegativity constraints, especially for large-scale problems. Low-rank approximation methods such as principal component analysis (PCA) are widely used in matrix factorizations to suppress noise and to reduce computational complexity and memory requirements. However, they cannot be applied to NMF directly so far, as they result in factors with mixed signs. In this paper, low-rank approximation is introduced to NMF (named lraNMF), which is not only able to reduce the computational complexity of NMF algorithms significantly, but also to suppress bipolar noise. In fact, the new update rules are typically about J/R times faster than the traditional NMF rules, where J is the number of observations and R is the low rank of the latent factors. Therefore lraNMF is particularly efficient in the case where R ≪ J, which is the general case in NMF. The proposed update rules can also be incorporated into most existing NMF algorithms straightforwardly, as long as they are based on the Euclidean distance. Then the concept of lraNMF is generalized to the tensor field to perform a fast sequential nonnegative Tucker decomposition (NTD). By applying the proposed methods, the practicability of NMF/NTD is significantly improved. Simulations on synthetic and real data show the validity and efficiency of the proposed approaches.
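The core trick is easy to sketch: factor X once with a truncated SVD, then run multiplicative updates against the skinny factors instead of X itself. The version below is a simplified illustration, assuming standard Euclidean-distance updates and clipping of negative entries introduced by the approximation; the paper's treatment of bipolar noise is more careful.

```python
import numpy as np

def lra_nmf(X, rank, lra_rank=None, n_iter=200, eps=1e-9, seed=0):
    """Sketch of low-rank-assisted NMF: replace X by a truncated-SVD
    factorization X ~= A @ B, so each multiplicative update touches
    the skinny factors A (m x r) and B (r x n) instead of X itself.
    Negative entries created by the approximation are clipped, which
    is a simplification of how the paper treats bipolar noise."""
    m, n = X.shape
    r = lra_rank or rank
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    A = U[:, :r] * s[:r]                     # m x r
    B = Vt[:r, :]                            # r x n

    rng = np.random.default_rng(seed)
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # X @ H.T computed as A @ (B @ H.T): O((m+n) r k), not O(m n k)
        XHt = np.maximum(A @ (B @ H.T), eps)
        W *= XHt / (W @ (H @ H.T) + eps)
        WtX = np.maximum((W.T @ A) @ B, eps)
        H *= WtX / ((W.T @ W) @ H + eps)
    return W, H

# Toy check: recover a rank-5 nonnegative structure from noisy data.
rng = np.random.default_rng(1)
X = rng.random((400, 5)) @ rng.random((5, 300))
X += 0.01 * rng.standard_normal(X.shape)
X = np.maximum(X, 0)
W, H = lra_nmf(X, rank=5)
print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```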
Trunk-Branch Ensemble Convolutional Neural Networks for Video-Based Face Recognition | Human faces in surveillance videos often suffer from severe image blur, dramatic pose variations, and occlusion. In this paper, we propose a comprehensive framework based on Convolutional Neural Networks (CNN) to overcome challenges in video-based face recognition (VFR). First, to learn blur-robust face representations, we artificially blur training data composed of clear still images to account for a shortfall in real-world video training data. Using training data composed of both still images and artificially blurred data, CNN is encouraged to learn blur-insensitive features automatically. Second, to enhance robustness of CNN features to pose variations and occlusion, we propose a Trunk-Branch Ensemble CNN model (TBE-CNN), which extracts complementary information from holistic face images and patches cropped around facial components. TBE-CNN is an end-to-end model that extracts features efficiently by sharing the low- and middle-level convolutional layers between the trunk and branch networks. Third, to further promote the discriminative power of the representations learnt by TBE-CNN, we propose an improved triplet loss function. Systematic experiments justify the effectiveness of the proposed techniques. Most impressively, TBE-CNN achieves state-of-the-art performance on three popular video face databases: PaSC, COX Face, and YouTube Faces. With the proposed techniques, we also obtain the first place in the BTAS 2016 Video Person Recognition Evaluation. |
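The abstract does not spell out the improved triplet loss, so the sketch below shows only the standard triplet loss on L2-normalized embeddings that such a variant builds on; the margin value and embedding size are illustrative.

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull anchor-positive together, push
    anchor-negative apart by at least `margin` (squared L2 metric).
    The paper proposes an improved variant of this formulation."""
    a, p, n = map(l2_normalize, (anchor, positive, negative))
    d_ap = np.sum((a - p) ** 2, axis=-1)
    d_an = np.sum((a - n) ** 2, axis=-1)
    return np.maximum(0.0, d_ap - d_an + margin).mean()

rng = np.random.default_rng(0)
emb = rng.standard_normal((3, 8, 128))   # anchor/positive/negative batches
print("loss:", triplet_loss(*emb))
```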
THE AFFECTIVE ESTABLISHMENT AND MAINTENANCE OF VYGOTSKY’S ZONE OF PROXIMAL DEVELOPMENT | Many recent articles, research papers, and conference presentations about Lev Vygotsky's zone of proximal development (ZPD) emphasize the "extended" version of the ZPD that reflects human emotions and desires. In this essay, Michael G. Levykh expands on the extant literature on the ZPD by developing several new ideas. First, he maintains that there is no need to expand the ZPD to include emotions, as its more "conservative" dimensions (cognitive, social, cultural, and historical) already encompass affective features. Second, Levykh emphasizes that an emotionally positive collaboration between teachers and students in a caring and nurturing environment must be created from the outset. Finally, he asserts that culturally developed emotions must mediate the successful establishment and maintenance of the ZPD in order to be effective. According to Levykh, Vygotsky's notion that learning can lead development represents a crucial contribution to our understanding of teaching and learning by clearly showing that emotions are vital to human learning and development. Some have argued that Lev Vygotsky's concept of the zone of proximal development (ZPD) can be extended to integrate the affective dimension. There has been an enormous proliferation of articles, research papers, and conference presentations about the ZPD in fields such as mathematics, science, computer science and technology, language acquisition and teaching, and so on; many of these emphasize the "extended" version of the ZPD that reflects human emotions and desires. The popularity of an extended version of the ZPD, especially among teachers of the physical sciences, supports Vygotsky's notion that learning can lead development and reflects an overdue acknowledgment and recognition of the fact that emotions are a vital part of human learning and development. This essay expands on the extant literature on the ZPD by exploring several new ideas: 1. The more "conservative" features of the ZPD (social, cultural, and historical) already encompass both affective and cognitive dimensions; hence, there is no need for an "extended" version of the ZPD. 2. An emotionally positive collaboration and cooperation between teachers and students in a caring and nurturing environment must be created from the outset. 3. Culturally developed emotions must mediate the successful establishment and maintenance of the ZPD. 1. See, for example, Holbrook Mahn and Vera John-Steiner, "The Gift of Confidence: A Vygotskian View of Emotions," in Learning for Life in the Twenty-First Century: Sociocultural Perspectives on the Future of Education, eds. Gordon Wells and Guy Claxton (Cambridge, Massachusetts: Blackwell, 2002); and Peter Nelmes, "Developing a Conceptual Framework for the Role of the Emotions in the Language of Teaching and Learning" (paper presented at the third conference of the European Society for Research in Mathematics Education, February-March 2003, Bellaria, Italy), http://www.dm.unipi.it/~didattica/CERME3/proceedings/tableofcontents_cerme3.html.
Socioeconomic predictors and consequences of depression among primary care attenders with non-communicable diseases in the Western Cape, South Africa: cohort study within a randomised trial | BACKGROUND
Socioeconomic predictors and consequences of depression and its treatment were investigated in 4393 adults with specified non-communicable diseases attending 38 public sector primary care clinics in the Eden and Overberg districts of the Western Cape, South Africa.
METHODS
Participants were interviewed at baseline in 2011 and 14 months later, as part of a randomised controlled trial of a guideline-based intervention to improve diagnosis and management of chronic diseases. The 10-item Center for Epidemiologic Studies Depression Scale (CESD-10) was used to assess depression symptoms, with higher scores representing more depressed mood.
RESULTS
Higher CESD-10 scores at baseline were independently associated with being less educated (p = 0.004) and having lower income (p = 0.003). CESD-10 scores at follow-up were higher in participants with less education (p = 0.010) or receiving welfare grants (p = 0.007), independent of their baseline scores. Participants with CESD-10 scores of ten or more at baseline (56 % of all participants) had 25 % higher odds of being unemployed at follow-up (p = 0.016), independently of baseline CESD-10 score and treatment status. Among participants with baseline CESD-10 scores of ten or more, antidepressant medication at baseline was independently more likely in participants who had more education (p = 0.002), higher income (p < 0.001), or were unemployed (p = 0.001). Antidepressant medication at follow-up was independently more likely in participants with higher income (p = 0.023), and in clinics with better access to pharmacists (p = 0.053) and off-site drug delivery (p = 0.013).
CONCLUSIONS
Socioeconomic disadvantage appears to be both a cause and consequence of depression, and may also be a barrier to treatment. There are opportunities for improving the prevention, diagnosis and treatment of depression in primary care in inequitable middle income countries like South Africa.
TRIAL REGISTRATION
The trial is registered with Current Controlled Trials ( ISRCTN20283604 ). |
Cooperative learning in neural networks using particle swarm optimizers | This paper presents a method to employ particle swarm optimizers in a cooperative configuration. This is achieved by splitting the input vector into several sub-vectors, each of which is optimized cooperatively in its own swarm. The application of this technique to neural network training is investigated, with promising results.
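A minimal sketch of the cooperative configuration follows, assuming a generic objective function standing in for a network training loss; the swarm sizes, coefficients, and split are illustrative.

```python
import numpy as np

def cooperative_pso(f, dim, n_swarms=4, n_particles=10, iters=100,
                    w=0.72, c1=1.49, c2=1.49, seed=0):
    """Cooperative PSO sketch: each swarm optimizes one slice of the
    input vector; a particle is scored by splicing its slice into a
    context vector assembled from the other swarms' best slices."""
    rng = np.random.default_rng(seed)
    slices = np.array_split(np.arange(dim), n_swarms)
    pos = [rng.uniform(-1, 1, (n_particles, len(s))) for s in slices]
    vel = [np.zeros_like(p) for p in pos]
    pbest = [p.copy() for p in pos]
    pbest_val = [np.full(n_particles, np.inf) for _ in slices]
    context = rng.uniform(-1, 1, dim)        # current best full vector

    for _ in range(iters):
        for k, idx in enumerate(slices):
            for i in range(n_particles):
                trial = context.copy()
                trial[idx] = pos[k][i]
                val = f(trial)
                if val < pbest_val[k][i]:
                    pbest_val[k][i] = val
                    pbest[k][i] = pos[k][i].copy()
            gbest = pbest[k][np.argmin(pbest_val[k])]
            context[idx] = gbest             # publish this swarm's best
            r1, r2 = rng.random(pos[k].shape), rng.random(pos[k].shape)
            vel[k] = (w * vel[k] + c1 * r1 * (pbest[k] - pos[k])
                      + c2 * r2 * (gbest - pos[k]))
            pos[k] += vel[k]
    return context, f(context)

# Example: minimize the sphere function; in the paper's setting, f
# would be a neural-network training loss over the weight vector.
x, val = cooperative_pso(lambda v: float(np.sum(v ** 2)), dim=20)
print("best value:", val)
```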
Optical Characteristics of Solution Processed MoO2/ZnO Quantum Dot Based Thin Film Transistor | The present work reports the ultraviolet (UV) and electrical characteristics of a solution-processed MoO2/ZnO quantum dot (QD) based thin-film transistor. ZnO QD thin films are deposited using low-cost solution processing over Al2O3/p-Si substrates. A thick layer of Al2O3 (250 nm) is deposited using e-beam evaporation (10^-6 mbar). MoO2 is deposited by thermal evaporation over the ZnO thin film, to an achieved thickness of 25 nm. Top and bottom Al contacts are deposited by thermal evaporation, with thicknesses of 50 nm and 80 nm, respectively. The device is characterized as a back-gate ZnO thin-film transistor (TFT), illuminated from the front side. The UV detection property of the device is measured under illumination from a UV lamp (Puv = 650 µW) at a wavelength of 365 nm. The Vds versus Ids plots show a significant improvement in the drain current under UV exposure.
The Broadband Spiral Antenna Design Based on Hybrid Backed-Cavity | Here, a hybrid backed cavity combining an electromagnetic band-gap (EBG) structure and a perfect electric conductor (PEC) is proposed for an Archimedean spiral antenna. This cavity makes the spiral antenna operate over a 10:1 bandwidth without the loss introduced by the absorbing materials conventionally used in broadband spirals. Based on its artificial magnetic conductor (AMC) characteristic, an EBG is placed in the outer region of the backed cavity to improve the blind-spot gain at low frequency. A PEC at the center of the structure is used to obtain high gain at high frequency. The performance of the low-profile spiral antenna is improved significantly. A typical spiral antenna with a hybrid backed cavity is fabricated and studied experimentally. The experimental data are consistent with the simulation results.
Improving telemonitoring of heart failure patients with NFC technology | Patients suffering from chronic diseases like congestive heart failure (CHF) can be supported in their self-management process by utilizing telemedicine services based on standard IT and mobile phone infrastructure. Continuous transmission of self-measurements of blood pressure, body weight and other health-related parameters to a monitoring centre allows the attending physician to monitor those data and to guide the patient to the best possible health status. In such concepts, the most challenging part is still the human-computer interface, i.e. supporting the user with an adequate system to transmit the self-measurements. The objective of this paper is to present a new kind of patient terminal based on mobile phones in combination with the now-available near field communication (NFC) technology. This concept provides an intuitive and easy-to-use way to acquire and transmit health-related data just by touching medical measurement devices with NFC-enabled mobile phones.
Teaching Virtualization by Building a Hypervisor | Virtual machines (VMs) are an increasingly ubiquitous feature of modern computing, yet the interested student or professional has limited resources to learn how VMs work. In particular, there is a lack of "hands-on" exercises in constructing a virtual machine monitor (VMM, or hypervisor), which are both simple enough to understand completely but realistic enough to capture the practical challenges in using this technology. This paper describes a set of assignments to extend a small, pedagogical operating system (OS) to form a hypervisor and host itself. This pedagogical hypervisor, called HOSS, adds roughly 1,000 lines of code to the MIT JOS source, and includes a set of guided exercises. Initial results with HOSS in an upper-level virtualization course indicate that students enjoyed the assignments and were able to apply what they learned to solve different virtualization-related problems. HOSS is publicly available. |
Near-Duplicates Detection and Elimination Based on Web Provenance for Effective Web Search | Users of the World Wide Web utilize search engines for information retrieval, as search engines play a vital role in finding information on the web. However, the performance of a web search is greatly affected by the flooding of search results with information that is redundant in nature, i.e., the existence of near-duplicates. Such near-duplicates hold back the other promising results to the users. Many of these near-duplicates are from distrusted websites and/or authors who host information on the web. Such near-duplicates may be eliminated by means of provenance. Thus, this paper proposes a novel approach to identify such near-duplicates based on provenance. In this approach, a provenance model has been built using web pages that are the search results returned by an existing search engine. The proposed model combines both content-based and trust-based factors for classifying the results as original or near-duplicates.
Social Media Mining for Toxicovigilance: Automatic Monitoring of Prescription Medication Abuse from Twitter | INTRODUCTION
Prescription medication overdose is the fastest growing drug-related problem in the USA. The growing nature of this problem necessitates the implementation of improved monitoring strategies for investigating the prevalence and patterns of abuse of specific medications.
OBJECTIVES
Our primary aims were to assess the possibility of utilizing social media as a resource for automatic monitoring of prescription medication abuse and to devise an automatic classification technique that can identify potentially abuse-indicating user posts.
METHODS
We collected Twitter user posts (tweets) associated with three commonly abused medications (Adderall(®), oxycodone, and quetiapine). We manually annotated 6400 tweets mentioning these three medications and a control medication (metformin) that is not the subject of abuse due to its mechanism of action. We performed quantitative and qualitative analyses of the annotated data to determine whether posts on Twitter contain signals of prescription medication abuse. Finally, we designed an automatic supervised classification technique to distinguish posts containing signals of medication abuse from those that do not and assessed the utility of Twitter in investigating patterns of abuse over time.
RESULTS
Our analyses show that clear signals of medication abuse can be drawn from Twitter posts and that the percentage of tweets containing abuse signals is significantly higher for the three case medications (Adderall(®): 23 %, quetiapine: 5.0 %, oxycodone: 12 %) than for the control medication (metformin: 0.3 %). Our automatic classification approach achieves 82 % accuracy overall (medication abuse class recall: 0.51, precision: 0.41, F-measure: 0.46). To illustrate the utility of automatic classification, we show how the classification data can be used to analyze abuse patterns over time.
CONCLUSION
Our study indicates that social media can be a crucial resource for obtaining abuse-related information for medications, and that automatic approaches involving supervised classification and natural language processing hold promises for essential future monitoring and intervention tasks. |
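As an illustration of the kind of supervised pipeline described, here is a minimal sketch using TF-IDF n-grams and logistic regression in scikit-learn; the tweets are invented stand-ins, and the study's actual features and classifier may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for annotated tweets (1 = abuse-indicating, 0 = not).
tweets = [
    "took two adderall to cram all night feeling wired",
    "picked up my metformin refill at the pharmacy",
    "anyone selling oxy? need it for the weekend",
    "doctor adjusted my quetiapine dose for sleep",
]
labels = [1, 0, 1, 0]

# Word uni/bi-grams + TF-IDF + logistic regression: a common text
# classification baseline, not necessarily the study's exact setup.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
clf.fit(tweets, labels)
print(clf.predict_proba(["crushed an adderall before the exam"])[0])
```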
Low RCS Metamaterial Absorber and Extending Bandwidth Based on Electromagnetic Resonances | A low radar cross section (RCS) metamaterial absorber (MMA) with an enhanced bandwidth is presented both numerically and experimentally. The MMA is realized by assembling three simple square loops in a three-layer structure according to the idea of separating electric and magnetic resonances. Different from a one-layer MMA, the proposed MMA can effectively couple with the electric and magnetic components of the incident wave at different positions for a fixed frequency, while, for different frequencies, it can trap the input power into different dielectric layers and absorb it in the lossy substrate. Experimental results indicate that the MMA exhibits a bandwidth of absorbance above 90% that is 4.25 times that of a one-layer MMA, and 10 dB RCS reduction is achieved over the range of 4.77–5.06 GHz. Moreover, the cell dimensions and total thickness of the MMA are only 0.17λ and 0.015λ, respectively. The low RCS properties of the MMA are insensitive to both polarization and incident angle.
Modelling and design of a synergy-based actuator for a tendon-driven soft robotic glove | The need for a means of assistance in human grasping, to compensate for weakness or to augment performance, is well documented. An appealing new way of providing it is through soft, wearable robots that work in parallel with the human muscles. In this paper we present the design and modelling of a tendon-driving unit that powers a wearable, soft glove. Portability being one of our main objectives, we use only one motor to move 8 degrees of freedom of the hand. To achieve this we use an underactuation strategy based on the human hand's first postural synergy, which alone explains ≈60% of activities of daily living. The constraints imposed by the underactuation strategy are softened, to allow adaptability during grasping, by placing elastic elements in series with the tendons. A simulation of the dynamic behaviour of the glove on a human hand allows us to quantify the magnitude and distribution of the forces involved during usage. These results are used to guide design choices such as the power of the motor and the stiffness of the springs. The designed tendon-driving unit comprises a DC motor which drives an array of spools dimensioned according to the first postural synergy, an electromechanical clutch to hold the hand in position during static postures, and a feeder mechanism to prevent slack in the tendons around the spools. Finally, the tendon-driving unit is tested to verify that it satisfies the motion and force characteristics required to assist its wearer in activities of daily living.
Tracking deformable objects with unscented Kalman filtering and geometric active contours | Geometric active contours represented as the zero level sets of the graph of a surface have been used very successfully to segment static images. However, tracking involves estimating the global motion of the object and its local deformations as functions of time. Some attempts have been made to use geometric active contours for tracking, but most of these minimize the energy at each frame and do not utilize the temporal coherency of the motion or the deformation. Recently, particle filters for geometric active contours were used for tracking deforming objects. However, that method is computationally very expensive, since it requires a large number of particles to approximate the state density. In the present work, we propose to use the unscented Kalman filter together with geometric active contours to track deformable objects in a computationally efficient manner.
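The heart of the unscented Kalman filter is the unscented transform; the sketch below propagates sigma points through a nonlinear function to estimate the transformed mean and covariance. The state, dynamics, and parameters are illustrative; the paper's contour-tracking state and observation models are richer.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear f using
    the standard 2n+1 sigma points (scaled formulation)."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)   # columns are sqrt directions
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # (2n+1, n)

    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)

    Y = np.array([f(s) for s in sigma])       # pushed-through points
    y_mean = wm @ Y
    diff = Y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Example: push a Gaussian through a mildly nonlinear motion model.
m, P = np.array([1.0, 0.5]), np.diag([0.1, 0.05])
f = lambda x: np.array([x[0] + 0.1 * np.sin(x[1]), 0.9 * x[1]])
print(unscented_transform(m, P, f))
```

In a full UKF, this transform is applied once for the prediction step and once for the measurement update; its deterministic 2n+1 samples are what replace the large particle sets mentioned above.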
New Mapping Technology for Atrial Tachycardias | A prerequisite for successful radiofrequency catheter ablation of tachycardias is exact mapping during the electrophysiological study. The new mapping system CARTO provides a three-dimensional, color-coded electroanatomic map of impulse propagation using electromagnetic technology. The aim of this study was to determine the feasibility and safety of the new electromagnetic mapping technology CARTO for atrial tachycardias. Results: Electrophysiologic study and CARTO mapping were performed in 38 atrial tachycardias. The mapping procedure took 26 ± 23 min. We created 33 maps within the right atrium and 5 maps within the left atrium, with a mean of 74 ± 38 different catheter positions. The mechanism was determined as reentrant in 9, junctional in 1, and focal in 28 tachycardias. In focal tachycardias the tachycardia cycle length (CL) and the total atrial activation time (AT) were clearly different (352 ± 98 ms vs 99 ± 25 ms). Reentrant tachycardias had a comparable CL and AT (236 ± 44 ms vs 240 ± 56 ms). In 83% of the focal tachycardias and in 67% of the reentrant tachycardias, ablation was performed successfully. No complications occurred. Conclusion: The electroanatomic mapping system allows high-resolution visualization of electrical activity and may therefore improve precision and simplify the determination of the arrhythmogenic substrate during tachycardias for successful catheter ablation.
Autonomous Battery Exchange of UAVs with a Mobile Ground Base | This paper presents the autonomous battery exchange operation for small scale UAVs, using a mobile ground base that carries a robotic arm and a service station containing the battery exchange mechanism. The goal of this work is to demonstrate the means to increase the autonomy and persistence of robotic systems without requiring human intervention. The design and control of the system and its components are presented in detail, as well as the collaborative software framework used to plan and execute complex missions. Finally, the results of autonomous outdoor experiments are presented, in which the ground rover successfully localizes, retrieves, services, and deploys the landed UAV, proving its capacity to extend and enhance autonomous operations. |
Characteristics and variability of structural networks derived from diffusion tensor imaging | Structural brain networks were constructed based on diffusion tensor imaging (DTI) data of 59 young healthy male adults. The networks had 68 nodes, derived from FreeSurfer parcellation of the cortical surface. By means of streamline tractography, the edge weight was defined as the number of streamlines between two nodes normalized by their mean volume. Specifically, two weighting schemes were adopted by considering various biases from fiber tracking. The weighting schemes were tested for possible bias toward the physical size of the nodes. A novel thresholding method was proposed using the variance of number of streamlines in fiber tracking. The backbone networks were extracted and various network analyses were applied to investigate the features of the binary and weighted backbone networks. For weighted networks, a high correlation was observed between nodal strength and betweenness centrality. Despite similar small-worldness features, binary networks and weighted networks are distinctive in many aspects, such as modularity and nodal betweenness centrality. Inter-subject variability was examined for the weighted networks, along with the test-retest reliability from two repeated scans on 44 of the 59 subjects. The inter-/intra-subject variability of weighted networks was discussed in three levels - edge weights, local metrics, and global metrics. The variance of edge weights can be very large. Although local metrics show less variability than the edge weights, they still have considerable amounts of variability. Weighting scheme one, which scales the number of streamlines by their lengths, demonstrates stable intra-class correlation coefficients against thresholding for global efficiency, clustering coefficient and diversity. The intra-class correlation analysis suggests the current approach of constructing weighted network has a reasonably high reproducibility for most global metrics. |
Mechanisms of intracellular killing of Rickettsia conorii in infected human endothelial cells, hepatocytes, and macrophages. | The mechanism of killing of obligately intracellular Rickettsia conorii within human target cells, mainly endothelium and, to a lesser extent, macrophages and hepatocytes, has not been determined. It has been a controversial issue as to whether or not human cells produce nitric oxide. AKN-1 cells (human hepatocytes) stimulated by gamma interferon, tumor necrosis factor alpha, interleukin 1beta, and RANTES (regulated by activation, normal T-cell-expressed and -secreted chemokine) killed intracellular rickettsiae by a nitric oxide-dependent mechanism. Human umbilical vein endothelial cells (HUVECs), when stimulated with the same concentrations of cytokines and RANTES, differed in their capacity to kill rickettsiae by a nitric oxide-dependent mechanism and in the quantity of nitric oxide synthesized. Hydrogen peroxide-dependent intracellular killing of R. conorii was demonstrated in HUVECs, THP-1 cells (human macrophages), and human peripheral blood monocytes activated with the cytokines. Rickettsial killing in the human macrophage cell line was also mediated by a limitation of the availability of tryptophan in association with the expression of the tryptophan-degrading enzyme indoleamine-2,3-dioxygenase. The rates of survival of all of the cell types investigated under the conditions of activation and infection in these experiments indicated that death of the host cells was not the explanation for the control of rickettsial infection. This finding represents the first demonstration that activated human hepatocytes and, in some cases, endothelium can kill intracellular pathogens via nitric oxide and that RANTES plays a role in immunity to rickettsiae. Human cells are capable of controlling rickettsial infections intracellularly, the most relevant location in these infections, by one or a combination of three mechanisms involving nitric oxide synthesis, hydrogen peroxide production, and tryptophan degradation. |
Emotion Distribution Recognition from Facial Expressions | Most existing facial expression recognition methods assume the availability of a single emotion for each expression in the training set. However, in practical applications, an expression rarely expresses a pure emotion, but more often a mixture of different emotions. To address this problem, this paper deals with a more common case where multiple emotions are associated with each expression. The key idea is to learn the specific description degrees of all basic emotions for each expression and the mapping from the expression images to the emotion distributions by the proposed emotion distribution learning (EDL) method. The databases used in the experiments are the s-JAFFE database and the s-BU-3DFE database, as they are the databases with explicit scores for each emotion on each expression image. Experimental results show that EDL can effectively deal with the emotion distribution recognition problem and performs remarkably better than the state-of-the-art multi-label learning methods.
New Direct Torque Control Scheme for BLDC Motor Drives Suitable for EV Applications | This paper proposes a simple and effective scheme for Direct Torque Control (DTC) of a Brushless DC Motor. Unlike traditional DTC, where both the torque and the flux must be employed in order to derive the switching table for the inverter, the proposed DTC scheme utilizes only the torque information. By considering the particular operating principle of the motor, the instantaneous torque can be estimated using only the back-EMF ratio in a unified differential equation. The simulation results show the good performance of the proposed scheme. It is thus a very suitable technique for EV drives, which need fast and precise torque response.
Design of 64-bit low power parallel prefix VLSI adder for high speed arithmetic circuits | The addition of two binary numbers is the basic and most often used arithmetic operation in microprocessors, digital signal processors and data-processing application-specific integrated circuits. Parallel prefix addition is a general technique for speeding up binary addition. This method implements logic functions which determine whether groups of bits will generate or propagate a carry. The proposed 64-bit adder is designed using four different types of prefix cell operators: even-dot cells, odd-dot cells, even-semi-dot cells and odd-semi-dot cells; it offers robust adder solutions typically used for low-power and high-performance design needs. Parallel prefix adders with various input ranges are compared in terms of power, number of transistors, and number of nodes. The Tanner EDA tool was used for simulating the parallel prefix adder designs in 250 nm technology.
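The generate/propagate logic that dot and semi-dot cells implement can be sketched compactly; the following Python model of a Kogge-Stone-style prefix schedule illustrates the carry computation only, not the transistor-level cell design of the paper.

```python
def prefix_add(a_bits, b_bits):
    """Parallel-prefix addition over LSB-first bit lists.

    Each position i forms generate g_i = a_i & b_i and propagate
    p_i = a_i ^ b_i; the prefix operator combines adjacent spans:
    (g, p) o (g', p') = (g | (p & g'), p & p').
    A Kogge-Stone schedule merges spans at distances 1, 2, 4, ...
    """
    n = len(a_bits)
    g = [a & b for a, b in zip(a_bits, b_bits)]
    p = [a ^ b for a, b in zip(a_bits, b_bits)]
    G, P = g[:], p[:]
    d = 1
    while d < n:                       # log2(n) combination levels
        G = [G[i] | (P[i] & G[i - d]) if i >= d else G[i] for i in range(n)]
        P = [P[i] & P[i - d] if i >= d else P[i] for i in range(n)]
        d *= 2
    carries = [0] + G[:-1]             # carry into each position
    s = [p[i] ^ carries[i] for i in range(n)]
    return s, G[-1]                    # sum bits and carry-out

# 13 + 11 = 24 with 5-bit operands (LSB first).
s, cout = prefix_add([1, 0, 1, 1, 0], [1, 1, 0, 1, 0])
print(s, cout)  # -> [0, 0, 0, 1, 1] 0
```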
The Relationship between Older Adults’ Risk for a Future Fall and Difficulty Performing Activities of Daily Living | Functional status is often defined by cumulative scores across indices of independence in performing basic and instrumental activities of daily living (ADL/IADL), but little is known about the unique relationship of each daily activity item with the fall outcome. The purpose of this retrospective study was to examine the level of relative risk for a future fall associated with difficulty with performing various tasks of normal daily functioning among older adults who had fallen at least once in the past 12 months. The sample was comprised of community-dwelling individuals 70 years and older from the 1984-1990 Longitudinal Study of Aging by Kovar, Fitti, and Chyba (1992). Risk analysis was performed on individual items quantifying 6 ADLs and 7 IADLs, as well as 10 items related to mobility limitations. Within a subsample of 1,675 older adults with a history of at least one fall within the past year, the responses of individuals who reported multiple falls were compared to the responses of participants who had a single fall and reported 1) difficulty with walking and/or balance (FRAIL group, n = 413) vs. 2) no difficulty with walking or dizziness (NDW+ND group, n = 415). The items that had the strongest relationships and highest risk ratios for the FRAIL group (which had the highest probabilities for a future fall) included difficulty with: eating (73%); managing money (70%); biting or chewing food (66%); walking a quarter of a mile (65%); using fingers to grasp (65%); and dressing without help (65%). For the NDW+ND group, the most noteworthy items included difficulty with: bathing or showering (79%); managing money (77%); shopping for personal items (75%); walking up 10 steps without rest (72%); difficulty with walking a quarter of a mile (72%); and stooping/crouching/kneeling (70%). These findings suggest that individual items quantifying specific ADLs and IADLs have substantive relationships with the fall outcome among older adults who have difficulty with walking and balance, as well as among older individuals without dizziness or difficulty with walking. Furthermore, the examination of the relationships between items that are related to more challenging activities and the fall outcome revealed that higher functioning older adults who reported difficulty with the 6 items that yielded the highest risk ratios may also be at elevated risk for a fall. |
Agricultural aid to seed cultivation: An Agribot | Machine intelligence is a developing technology which has made its way into various fields of engineering and technology. Robots are slowly being introduced into the field of agriculture; very soon AgriBots will take over agricultural fields and be used for various difficult and tiresome tasks involving agriculture. They have become the inevitable future of agriculture. This paper proposes an idea that will help in the effective cultivation of vast areas of land left uncultivated or barren. Numerous farmers die during hill farming, mainly due to falls from heights, which can be reduced by this technological effort. The proposed work will help cultivation in remote areas and increase green cover, as well as help farmers in harsh environments. The Agricultural Aid to Seed Cultivation (AASC) robot will be an unmanned aerial vehicle equipped with a camera, a digital image processing unit and a seed cultivation unit. A quadcopter is chosen as the aerial vehicle since it is independent of the form and shape of the ground and is not deterred by these factors, while providing high mobility and reliability. The research aims at a new technology that can be suitable for any kind of remote farming.
DeepContext: Context-Encoding Neural Pathways for 3D Holistic Scene Understanding | 3D context has been shown to be extremely important for scene understanding, yet very little research has been done on integrating context information with deep neural network architectures. This paper presents an approach to embed 3D context into the topology of a neural network trained to perform holistic scene understanding. Given a depth image depicting a 3D scene, our network aligns the observed scene with a predefined 3D scene template, and then reasons about the existence and location of each object within the scene template. In doing so, our model recognizes multiple objects in a single forward pass of a 3D convolutional neural network, capturing both global scene and local object information simultaneously. To create training data for this 3D network, we generate partially synthetic depth images which are rendered by replacing real objects with a repository of CAD models of the same object category. Extensive experiments demonstrate the effectiveness of our algorithm compared to the state of the art.
Pharmacokinetics, metabolism, and routes of excretion of intravenous irofulven in patients with advanced solid tumors. | Irofulven is currently in Phase 2 clinical trials against a wide variety of solid tumors and has demonstrated activity in ovarian, prostate, gastrointestinal, and non-small cell lung cancer. The objectives of this study were to determine its pharmacokinetics and route of excretion and to characterize its metabolites in human plasma and urine samples after a 30-min i.v. infusion at a dose of 0.55 mg/kg in patients with advanced solid tumors. Three patients were administered i.v. 100 microCi of [14C]irofulven over a 30-min infusion on day 1 of cycle 1. Serial blood and plasma samples were drawn at 0 (before irofulven infusion) and up to 144 h after the start of infusion. Urine and fecal samples were collected for up to 144 h after the start of infusion. The mean urinary and fecal excretion of radioactivity up to 144 h were 71.2 and 2.9%, respectively, indicating renal excretion was the major route of elimination of [14C]irofulven. The C(max), AUC(0-infinity), and terminal half-life values for total radioactivity were 1130 ng-Eq/ml, 24,400 ng-Eq . h/ml, and 116.5 h, respectively, and the corresponding values for irofulven were 82.7 ng/ml, 65.5 ng . h/ml, and 0.3 h, respectively, suggesting that the total radioactivity in human plasma was a result of the metabolites. Twelve metabolites of irofulven were detected in human urine and plasma by electrospray ionization/tandem mass spectrometry. Among these metabolites, the cyclopropane ring-opened metabolite (M2) of irofulven was found, and seven others were proposed as glucuronide and glutathione conjugates. |
From Word Embeddings to Document Similarities for Improved Information Retrieval in Software Engineering | The application of information retrieval techniques to search tasks in software engineering is made difficult by the lexical gap between search queries, usually expressed in natural language (e.g. English), and retrieved documents, usually expressed in code (e.g. programming languages). This is often the case in bug and feature location, community question answering, or more generally the communication between technical personnel and non-technical stakeholders in a software project. In this paper, we propose bridging the lexical gap by projecting natural language statements and code snippets as meaning vectors in a shared representation space. In the proposed architecture, word embeddings are first trained on API documents, tutorials, and reference documents, and then aggregated in order to estimate semantic similarities between documents. Empirical evaluations show that the learned vector space embeddings lead to improvements in a previously explored bug localization task and a newly defined task of linking API documents to computer programming questions.
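A minimal sketch of the aggregation step, assuming toy word vectors: document vectors are averaged token embeddings, ranked by cosine similarity. In the paper, the embeddings are first trained on API documents and tutorials; here they are random apart from a fabricated lexical bridge.

```python
import numpy as np

def doc_vector(tokens, emb, dim):
    """Average the embeddings of known tokens (a simple aggregation;
    the paper trains the embeddings on software-engineering text)."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(u, v, eps=1e-12):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

# Toy embeddings; in practice these come from word2vec/GloVe trained
# on SE corpora so that e.g. "read file" lands near "FileReader".
rng = np.random.default_rng(0)
vocab = ["read", "file", "filereader", "open", "socket", "connect"]
emb = {w: rng.standard_normal(50) for w in vocab}
emb["filereader"] = emb["read"] + emb["file"]   # fake lexical bridge

dim = 50
query = "read file".split()
docs = {"api_FileReader": ["filereader", "open"],
        "api_Socket": ["socket", "connect"]}
q = doc_vector(query, emb, dim)
for name, toks in docs.items():
    print(name, round(cosine(q, doc_vector(toks, emb, dim)), 3))
```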
Paper on Design and Implementation of Smart Health Care System using IoT | Diagnosis and monitoring of health is a very important task in the health care industry. Due to time constraints, people do not visit hospitals, which can lead to many health issues at one instant of time. Most prior health care systems have been developed to predict and diagnose the health of patients, so that people who are busy with their schedules can also monitor their health at regular intervals. Many studies have shown that early prediction is the best way to safeguard health, because early diagnosis will help and alert patients to their health status. In this paper, we review various Internet of Things (IoT) enabled devices and their actual implementation in areas of health care such as monitoring of children and of patients. Further, this paper addresses how different innovations such as servers, ambient intelligence and sensors can be leveraged in the health care context, and determines how they can facilitate economies and societies in terms of sustainable development.
Fast Channel Tracking for Terahertz Beamspace Massive MIMO Systems | The recent concept of beamspace multiple-input multiple-output (MIMO) with a discrete lens array can utilize beam selection to reduce the number of radio-frequency (RF) chains required in terahertz (THz) massive MIMO systems. However, to achieve capacity-approaching performance, beam selection requires information on a beamspace channel of large size. This is difficult to obtain, since user mobility usually leads to fast variation of THz beamspace channels, and conventional real-time channel estimation schemes involve unaffordable pilot overhead. To solve this problem, in this paper we propose the a priori aided (PA) channel tracking scheme. Specifically, by considering a practical user motion model, we first derive a temporal variation law of the physical direction between the base station and each mobile user. Then, based on this law and the special sparse structure of THz beamspace channels, we propose to utilize the beamspace channels obtained in previous time slots to predict the prior information of the beamspace channel in the following time slot without channel estimation. Finally, aided by the obtained prior information, the time-varying beamspace channels can be tracked with low pilot overhead. Simulation results verify that, to achieve the same accuracy, the proposed PA channel tracking scheme requires much lower pilot overhead and signal-to-noise ratio (SNR) than conventional schemes.
Recursive Coarse-to-Fine Localization for Fast Object Detection | The sliding window (SW) technique is one of the common paradigms employed for object detection. However, the computational cost of this approach is expensive, because the detection window is scanned at all possible positions and scales. To overcome this problem, we propose a compact feature together with a fast recursive coarse-to-fine object localization strategy. To build the compact feature, we project the Histograms of Oriented Gradients (HOG) features onto a linear subspace by Principal Component Analysis (PCA). We call this the PCA-HOG feature. The exploitation of the PCA-HOG feature not only helps the classifiers run faster but also maintains the accuracy. In order to further speed up the localization, we propose a recursive coarse-to-fine refinement to scan the image. We scan the image in both scale space and multi-resolution space from the coarsest to the finest resolution. Only the best hypothesis obtained from the coarser resolution is passed to the finer resolution. Each resolution has its own linear Support Vector Machine (SVM) classifier and PCA-HOG features. Evaluation with the INRIA dataset shows that our method achieves a significant speedup compared to the standard sliding window and the original HOG feature, while achieving even higher detection accuracy.
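Projecting HOG descriptors onto a PCA subspace is simple to sketch; the code below uses random vectors standing in for real HOG descriptors, and the 3780/64 dimensions are illustrative of a typical 64x128 pedestrian-window setup rather than the paper's exact configuration.

```python
import numpy as np

class PCAProjector:
    """Fit a PCA basis on training descriptors (e.g., HOG vectors)
    and project new descriptors onto the top-k components."""
    def fit(self, X, k):
        self.mean = X.mean(axis=0)
        # Right singular vectors = principal directions.
        _, _, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.components = Vt[:k]
        return self

    def transform(self, X):
        return (X - self.mean) @ self.components.T

# Stand-ins for 3780-D HOG descriptors of training windows; projecting
# to 64-D makes the per-window linear SVM dot product ~60x cheaper.
rng = np.random.default_rng(0)
hog_train = rng.random((500, 3780))
proj = PCAProjector().fit(hog_train, k=64)
pca_hog = proj.transform(rng.random((10, 3780)))
print(pca_hog.shape)   # (10, 64)
```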
A 12-month, randomized, controlled study to evaluate exposure and cardiovascular risk factors in adult smokers switching from conventional cigarettes to a second-generation electrically heated cigarette smoking system. | This randomized, controlled, forced-switching, open-label, parallel-group study in 97 adult male and female smokers of conventional cigarettes evaluated biomarkers of tobacco smoke exposure and cardiovascular risk factors. After baseline measurements, smokers were either switched to a second-generation electrically heated cigarette smoking system (EHCSS) or continued smoking conventional cigarettes for 12 months. Biomarkers of exposure and cardiovascular risk factors were measured at 0.5, 1, 2, 3, 4, 5, 6, 9, and 12 months. There was a rapid and sustained reduction in all biomarkers of exposure after switching to the EHCSS, with statistically significant reductions from baseline in nicotine equivalents (-18%), plasma cotinine (-16%), total 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol (-73%), total 1-hydroxypyrene (-53%), urine mutagenicity (-52%), 4-aminobiphenyl hemoglobin adducts (-43%), carboxyhemoglobin AUC7-23 h (-80%), and 3-hydroxypropylmercapturic acid (-35%). These reductions in exposure in the EHCSS group were associated with statistically significant and pathophysiologically favorable changes in several cardiovascular risk factors, including white blood cell count (-0.78 x 10(3)/microL), hemoglobin (-0.16 g/dL), hematocrit (-0.44%), urine 11-dehydrothromboxane B2 (-374 ng/24 h), and high-density lipoprotein cholesterol (+5 mg/dL). |
High speed trot-running: Implementation of a hierarchical controller using proprioceptive impedance control on the MIT Cheetah | This paper presents the implementation of a highly dynamic running gait with a hierarchical controller on the MIT Cheetah.
Towards Detecting Anomalous User Behavior in Online Social Networks | Users increasingly rely on crowdsourced information, such as reviews on Yelp and Amazon, and liked posts and ads on Facebook. This has led to a market for blackhat promotion techniques via fake (e.g., Sybil) and compromised accounts, and collusion networks. Existing approaches to detect such behavior rely mostly on supervised (or semi-supervised) learning over known (or hypothesized) attacks. They are unable to detect attacks missed by the operator while labeling, or when the attacker changes strategy. We propose using unsupervised anomaly detection techniques over user behavior to distinguish potentially bad behavior from normal behavior. We present a technique based on Principal Component Analysis (PCA) that models the behavior of normal users accurately and identifies significant deviations from it as anomalous. We experimentally validate that normal user behavior (e.g., categories of Facebook pages liked by a user, rate of like activity, etc.) is contained within a low-dimensional subspace amenable to the PCA technique. We demonstrate the practicality and effectiveness of our approach using extensive ground-truth data from Facebook: we successfully detect diverse attacker strategies—fake, compromised, and colluding Facebook identities—with no a priori labeling while maintaining low false-positive rates. Finally, we apply our approach to detect click-spam in Facebook ads and find that a surprisingly large fraction of clicks are from anomalous users.
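A minimal sketch of the residual-subspace idea: fit the top principal components on (mostly normal) behavior vectors and score each user by the squared norm of what falls outside that subspace. The behavior features and dimensions below are toy stand-ins for the Facebook features described.

```python
import numpy as np

def fit_normal_subspace(X, k):
    """Top-k principal components of (assumed mostly normal) behavior."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def residual_scores(X, mean, V):
    """Squared norm of each row's component outside the normal
    subspace -- large values indicate anomalous behavior."""
    Xc = X - mean
    proj = Xc @ V.T @ V          # reconstruction within the normal subspace
    return np.sum((Xc - proj) ** 2, axis=1)

# Toy behavior matrix: rows = users, columns = e.g. like counts per
# page category. Normal users vary along a few latent patterns; one
# injected user deviates from all of them.
rng = np.random.default_rng(0)
latent = rng.random((3, 20))
X = rng.random((500, 3)) @ latent + 0.01 * rng.standard_normal((500, 20))
X[42] += 5.0 * rng.random(20)                  # anomalous user
mean, V = fit_normal_subspace(X, k=3)
scores = residual_scores(X, mean, V)
print("most anomalous user:", int(np.argmax(scores)))  # -> 42
```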
Using Program Analysis to Improve Database Applications | Applications that interact with database management systems (DBMSs) are ubiquitous. Such database applications are usually hosted on an application server and perform many small accesses over the network to a DBMS hosted on the database server to retrieve data for processing. For decades, the database and programming systems research communities have worked on optimizing such applications from different perspectives: database researchers have built highly efficient DBMSs, and programming systems researchers have developed specialized compilers and runtime systems for hosting applications. However, there has been relatively little work that optimizes database applications by considering these specialized systems in combination and looking for optimization opportunities that span across them. In this article, we highlight three projects that optimize database applications by looking at both the programming system and the DBMS in a holistic manner. By carefully revisiting the interface between the DBMS and the application, and by applying a mix of declarative database optimization and modern program analysis techniques, we show that a speedup of multiple orders of magnitude is possible in real-world applications. |
AskNow: A Framework for Natural Language Query Formalization in SPARQL | Natural Language Query Formalization involves semantically parsing queries in natural language and translating them into their corresponding formal representations. It is a key component for developing question-answering (QA) systems on RDF data. The chosen formal representation language in this case is often SPARQL. In this paper, we propose a framework, called AskNow, where users can pose queries in English to a target RDF knowledge base (e.g. DBpedia), which are first normalized into an intermediary canonical syntactic form, called Normalized Query Structure (NQS), and then translated into SPARQL queries. NQS facilitates the identification of the desire (or expected output information) and the user-provided input information, and establishing their mutual semantic relationship. At the same time, it is sufficiently adaptive to query paraphrasing. We have empirically evaluated the framework with respect to the syntactic robustness of NQS and semantic accuracy of the SPARQL translator on standard benchmark datasets. |
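Purely as an illustration of the final translation step, here is a toy, hand-instantiated NQS-to-SPARQL template for one simple query shape; AskNow's actual NQS grammar, entity linking, and relation mapping are far richer.

```python
# English: "What is the capital of Germany?"
# NQS-style slots: desire = ?capital, input = dbr:Germany,
#                  relation = dbo:capital
def nqs_to_sparql(desire, relation, input_entity):
    # One template for the simplest "desire of input" query shape.
    return ("SELECT DISTINCT ?%s WHERE { <%s> <%s> ?%s . }"
            % (desire, input_entity, relation, desire))

print(nqs_to_sparql(
    "capital",
    "http://dbpedia.org/ontology/capital",
    "http://dbpedia.org/resource/Germany",
))
# -> SELECT DISTINCT ?capital WHERE {
#      <http://dbpedia.org/resource/Germany>
#      <http://dbpedia.org/ontology/capital> ?capital . }
```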
A Survey on Sign Language Recognition | Sign Language (SL) recognition is getting more and more attention of the researchers due to its widespread applicability in many fields. This paper is based on the survey of the current research trends in the field of SL recognition to highlight the current status of different research aspects of the area. Paper also critically analyzed the current research to identify the problem areas and challenges faced by the researchers. This identification is aimed at providing guideline for the future advances in the field. |
On the Approximation Capability of Recurrent Neural Networks | The capability of recurrent neural networks to approximate functions from lists of real vectors to a real vector space is considered. We show the following results: From approximation results in the feedforward case it follows that any measurable function can be approximated with a recurrent network arbitrarily well in probability. For lists with entries from a finite alphabet, only one neuron is needed in the recurrent part of the network. On the other hand, there exist computable functions that cannot be approximated in the maximum norm. This is valid even for functions on unary inputs if they have unlimited range. For limited range, this is valid for inputs on an alphabet with at least two elements. But if only the length of a sequence is relevant, i.e., sequences with entries from a unary alphabet are considered, any function with limited range can be approximated in the maximum norm. As a special case, a sigmoidal recurrent network with one irrational weight can compute any function on unary inputs in linear time.
Towards Sub-Word Level Compositions for Sentiment Analysis of Hindi-English Code Mixed Text | Sentiment analysis (SA) using code-mixed data from social media has several applications in opinion mining, ranging from customer satisfaction to social campaign analysis in multilingual societies. Advances in this area are impeded by the lack of a suitable annotated dataset. We introduce a Hindi-English (Hi-En) code-mixed dataset for sentiment analysis and perform empirical analysis comparing the suitability and performance of various state-of-the-art SA methods in social media. In this paper, we introduce learning sub-word level representations in an LSTM (Subword-LSTM) architecture instead of character-level or word-level representations. This linguistic prior in our architecture enables us to learn information about the sentiment value of important morphemes. This also seems to work well in highly noisy text containing misspellings, as shown in our experiments and demonstrated in morpheme-level feature maps learned by our model. We hypothesize that encoding this linguistic prior in the Subword-LSTM architecture leads to the superior performance. Our system attains an accuracy 4-5% greater than traditional approaches on our dataset, and also outperforms the available system for sentiment analysis in Hi-En code-mixed text by 18%.
Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with Tacotron | We present an extension to the Tacotron speech synthesis architecture that learns a latent embedding space of prosody, derived from a reference acoustic representation containing the desired prosody. We show that conditioning Tacotron on this learned embedding space results in synthesized audio that matches the prosody of the reference signal with fine time detail even when the reference and synthesis speakers are different. Additionally, we show that a reference prosody embedding can be used to synthesize text that is different from that of the reference utterance. We define several quantitative and subjective metrics for evaluating prosody transfer, and report results with accompanying audio samples from single-speaker and 44-speaker Tacotron models on a prosody transfer task. |
XRel: a path-based approach to storage and retrieval of XML documents using relational databases | This article describes XRel, a novel approach for storage and retrieval of XML documents using relational databases. In this approach, an XML document is decomposed into nodes on the basis of its tree structure and stored in relational tables according to the node type, with path information from the root to each node. XRel enables us to store XML documents using a fixed relational schema without any information about DTDs and also to utilize indices such as the B+-tree and the R-tree supported by database management systems. Thus, XRel does not need any extension of relational databases for storing XML documents. For processing XML queries, we present an algorithm for translating a core subset of XPath expressions into SQL queries. Finally, we demonstrate the effectiveness of this approach through several experiments using actual XML documents. |
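A simplified two-table rendition of the path-based scheme can be sketched with sqlite3; the '#/'-delimited path expressions and region columns follow the flavor described above, but this is an illustrative reduction — the real XRel schema separates elements, attributes, and text into distinct tables.

```python
import sqlite3

# Every node stores its path id plus a (start, end) region, so
# ancestor/descendant tests become interval containment in plain SQL.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE path(pathid INTEGER PRIMARY KEY, pathexp TEXT);
CREATE TABLE node(docid INT, pathid INT, start INT, end INT, value TEXT);
""")

# <book><title>XRel</title><year>2001</year></book>
db.executemany("INSERT INTO path VALUES (?, ?)", [
    (1, "#/book"), (2, "#/book#/title"), (3, "#/book#/year"),
])
db.executemany("INSERT INTO node VALUES (?, ?, ?, ?, ?)", [
    (1, 1, 0, 100, None), (1, 2, 10, 30, "XRel"), (1, 3, 40, 60, "2001"),
])

# XPath /book/title translated into SQL over path expressions:
rows = db.execute("""
    SELECT n.value FROM node n JOIN path p ON n.pathid = p.pathid
    WHERE p.pathexp LIKE '#/book#/title'
""").fetchall()
print(rows)   # [('XRel',)]
```

Descendant-axis steps such as //title would map to a LIKE pattern with wildcards over pathexp, which is what makes the fixed schema work without DTD information.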
A layout-similarity-based approach for detecting phishing pages | Phishing is a current social engineering attack that results in online identity theft. In a phishing attack, the attacker persuades the victim to reveal confidential information by using web site spoofing techniques. Typically, the captured information is then used to make an illegal economic profit by purchasing goods or undertaking online banking transactions. Although simple in nature, because of their effectiveness, phishing attacks still remain a great source of concern for organizations with online customer services. In previous work, we have developed AntiPhish, a phishing protection system that prevents sensitive user information from being entered on phishing sites. The drawback is that this system requires cooperation from the user and occasionally raises false alarms. In this paper, we present an extension of our system (called DOMAntiPhish) that mitigates the shortcomings of our previous system. In particular, our novel approach leverages layout similarity information to distinguish between malicious and benign web pages. This makes it possible to reduce the involvement of the user and significantly reduces the false alarm rate. Our experimental evaluation demonstrates that our solution is feasible in practice. |
Helicobacter pylori infection inhibits reflux esophagitis by inducing atrophic gastritis | OBJECTIVE:Although it is widely accepted that Helicobacter pylori (H. pylori) infection is an important cause of atrophic gastritis, few studies have examined the relationship between H. pylori-induced atrophic gastritis and the occurrence of reflux esophagitis. The present study was aimed to examine the relationship between H. pylori infection, atrophic gastritis, and reflux esophagitis in Japan.METHODS:A total of 175 patients with reflux esophagitis were compared with sex- and age-matched 175 control subjects. Diagnosis of H. pylori infection was made by gastric mucosal biopsy, rapid urease test, and serum IgG antibodies. Severity of atrophic gastritis was assessed by histology and serum pepsinogen I/II ratio.RESULTS:H. pylori infection was found in 59 (33.7%) patients with reflux esophagitis, whereas it was found in 126 (72.0%) control subjects. The grade of atrophic gastritis was significantly lower in the former than in the latter. Among the H. pylori-positive patients, atrophic gastritis was milder in the patients with reflux esophagitis than in the patients without it.CONCLUSIONS:These findings suggest that most cases of reflux esophagitis in Japan occur in the absence of H. pylori infection and atrophic gastritis, and it may also tend to occur in patients with milder gastritis even in the presence of H. pylori infection. Therefore, H. pylori infection may be an inhibitory factor of reflux esophagitis through inducing atrophic gastritis and concomitant hypoacidity. |
Silicon RFICs for phased arrays | Phased arrays allow electronic scanning of the antenna beam. However, these phased arrays are not widely used due to a high implementation cost. This article discusses the advantages of the RF architecture and the implementation of silicon RFICs for phased-array transmitters/receivers. In addition, this work also demonstrates how silicon RFICs can play a vital role in lowering the cost of phased arrays. |
An Empirical and Analytic Study of Stack vs. Heap Cost for Languages with Closures | It has been proposed that allocating procedure activation records on a garbage-collected heap is more efficient than stack allocation. However, previous comparisons of heap vs. stack allocation have been over-simplistic, neglecting (for example) frame pointers, or the better locality of reference of stacks. We present a comprehensive analysis of all the components of creation, access, and disposal of heap-allocated and stack-allocated activation records. Among our results are: Although stack frames are known to have a better cache read-miss rate than heap frames, our simple analytical model (backed up by simulation results) shows that the difference is too trivial to matter. The cache write-miss rate of heap frames is very high; we show that a variety of miss-handling strategies (exemplified by specific modern machines) can give good performance, but not all can. The write-miss policy of the primary cache is much more important than the write-miss policy of the secondary cache. Stacks restrict the flexibility of closure representations (for higher-order functions) in important (and costly) ways. The extra load placed on the garbage collector by heap-allocated frames is very small. The demands of modern programming languages make stacks quite complicated to implement efficiently and correctly. Overall, the execution cost of stack-allocated and heap-allocated frames is very similar; but heap frames are simpler to implement and allow very efficient first-class continuations (call/cc). 1 Garbage-collected frames In a programming language implementation that uses garbage collection, all procedure activation records (frames) can be allocated on the heap. This is quite convenient for higher-order languages (Scheme, ML, etc.) whose "closures" can have indefinite extent, and it is even more convenient for languages with first-class continuations. One might think that it would be expensive to allocate, at every procedure call, heap storage that becomes garbage on return. But not necessarily [2]: modern generational garbage-collection algorithms [31] can reclaim dead frames extremely efficiently, even cheaper than the one-instruction cost to pop the stack. But there are many other costs involved in creating, accessing, and destroying activation records, whether on a heap or a stack. These costs are summarized in Figure 1, and explained and analyzed in the remainder of the paper. These numbers depend on many assumptions. The most critical assumptions are these: The runtime system in question has static scope, higher-order functions, and garbage collection. The only question being investigated is whether there is an activation-record stack in addition to the garbage collection of other objects. The compiler and garbage collector are required to be "safe for space complexity"; that is, statically dead pointers (in the data-flow sense) do not keep objects live. There are few side effects in compiled programs, so that generational garbage collection will be efficient. These assumptions, and many others, will be explained in the rest of the paper.
Procedural voronoi foams for additive manufacturing | Microstructures at the scale of tens of microns change the physical properties of objects, making them lighter or more flexible. While such microstructures have traditionally been difficult to produce, additive manufacturing now lets us physically realize them at low cost.
In this paper we propose to study procedural, aperiodic microstructures inspired by Voronoi open-cell foams. The absence of regularity affords a simple approach to grade the foam geometry --- and thus its mechanical properties --- within a target object and its surface. Rather than requiring a global optimization process, the microstructures are directly generated to exhibit a specified elastic behavior. The implicit evaluation is akin to procedural textures in computer graphics, and locally adapts to follow the elasticity field. This allows very detailed structures to be generated in large objects without having to explicitly produce a full representation --- mesh or voxels --- of the complete object: the structures are added on the fly, just before each object slice is manufactured.
We study the elastic behavior of the microstructures and provide a complete description of the procedure generating them. We explain how to determine the geometric parameters of the microstructures from a target elasticity, and evaluate the result on printed samples. Finally, we apply our approach to the fabrication of objects with spatially varying elasticity, including the implicit modeling of a frame following the object surface and seamlessly connecting to the microstructures. |
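As a rough illustration of the implicit, procedural evaluation idea (a simplified sketch, not the authors' generator): seed points are derived deterministically from grid coordinates, and a query point is classified as solid when it lies near a Voronoi edge, i.e., when its distances to the three nearest seeds are nearly equal. The beam thickness is the local parameter one could grade spatially:

```python
import numpy as np

def cell_seed(i, j, k):
    """Deterministic pseudo-random seed point inside grid cell (i, j, k)."""
    rng = np.random.default_rng(abs(hash((int(i), int(j), int(k)))) % (2**32))
    return np.array([i, j, k], dtype=float) + rng.random(3)

def is_strut(p, thickness=0.08):
    """Classify p as solid if it lies near a Voronoi edge of the seeds.

    Near an edge three Voronoi cells meet, so the distances to the three
    nearest seeds are almost equal; `thickness` sets the beam radius and
    is the knob one would vary locally to grade the elastic behavior."""
    base = np.floor(p).astype(int)
    seeds = [cell_seed(*(base + np.array(off) - 1))
             for off in np.ndindex(3, 3, 3)]        # 27 neighbouring cells
    d = np.sort([np.linalg.norm(p - s) for s in seeds])
    return (d[2] - d[0]) < thickness                # d3 - d1 small => edge

print(is_strut(np.array([0.5, 0.5, 0.5])))
```

A real generator would be considerably more careful about seed density, anisotropy, and grading, but the on-the-fly, representation-free flavour is the same: each slice queries `is_strut` only where it is about to be printed.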
Abstract rendering: out-of-core rendering for information visualization | Joseph A. Cottam and Andrew Lumsdaine (CREST/Indiana University, Bloomington, IN, USA); Peter Wang (Continuum Analytics, Austin, TX, USA). |
Third International Workshop on Model-Driven Product Line Engineering (MDPLE 2011) | The MDPLE workshop series focuses on exploring the present and future of Model-Driven Software Product Line Engineering techniques. The main goal of MDPLE is to bring together researchers and industrial participants in order to discuss current research in Model-Driven Product Line Engineering and to identify emerging research topics. The workshop aims to foster discussion between experts with a background in model-driven engineering and experts from the software product line domain.
The third edition of MDPLE is held in conjunction with the Seventh European Conference on Modeling Foundations and Applications (6-9 June 2011, Birmingham, UK). |
Biologically-plausible learning algorithms can scale to large datasets | The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem” (Grossberg, 1987), two more biologically plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP’s weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets. However, a recent study by Bartunov et al. (2018) evaluates variants of target-propagation (TP) and feedback alignment (FA) on the MNIST, CIFAR, and ImageNet datasets, and finds that although many of the proposed algorithms perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights share signs but not magnitudes. We examine the performance of sign-symmetry and feedback alignment on the ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet, RetinaNet for MS COCO). Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These results complement the study by Bartunov et al. (2018) and establish a new benchmark for future biologically plausible learning algorithms on more difficult datasets and more complex architectures. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. |
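To make the "shared signs, not magnitudes" rule concrete, here is a minimal NumPy sketch (an illustration of the rule as described, not the paper's code) of a single linear layer whose feedback path uses sign(W) in place of the transposed forward weights:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))            # forward weights (outputs x inputs)

def forward(x):
    return W @ x

def backward_sign_symmetry(grad_out):
    # Standard backprop would propagate the error through W.T; the
    # sign-symmetry rule instead uses the elementwise sign of W, so the
    # feedback path shares signs but not magnitudes with the forward path.
    return np.sign(W).T @ grad_out

x = rng.normal(size=32)
err = forward(x) - np.zeros(64)          # dummy error signal at the output
print(backward_sign_symmetry(err).shape) # (32,) gradient w.r.t. the input
```

This avoids the weight transport problem because the feedback pathway never needs the exact forward magnitudes, only a fixed sign pattern.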
Classification of CT brain images based on deep learning networks | Although computerised tomography (CT) was the first imaging tool used to study the human brain, it has not yet been incorporated into the clinical decision-making process for the diagnosis of Alzheimer's disease (AD). Being prevalent, inexpensive and non-invasive, however, CT does present diagnostic features of AD to a considerable extent. This study explores the significance and impact of applying burgeoning deep learning techniques, in particular convolutional neural networks (CNNs), to the classification of CT brain images, aiming at providing supplementary information for the early diagnosis of Alzheimer's disease. Towards this end, CT images (N = 285) are clustered into three groups: AD, lesion (e.g. tumour) and normal ageing. In addition, considering the large slice thickness of this collection along the depth (z) direction (~3-5 mm), an advanced CNN architecture is established that integrates both 2D and 3D CNN networks. The two networks are fused by averaging the Softmax scores they produce, with the 2D network consolidating slices along the axial direction and the 3D network operating on segmented blocks. As a result, the classification accuracy rates rendered by this CNN architecture are 85.2%, 80% and 95.3% for the AD, lesion and normal classes respectively, with an average of 87.6%. This combined network outperforms both the 2D-only CNN and a number of state-of-the-art hand-crafted approaches, which deliver accuracy rates of 86.3%, 85.6 ± 1.10%, 86.3 ± 1.04%, 85.2 ± 1.60% and 83.1 ± 0.35% for 2D CNN, 2D SIFT, 2D KAZE, 3D SIFT and 3D KAZE respectively. The two major contributions of the paper are a new 3D approach that applies deep learning to extract signature information rooted in both 2D slices and 3D blocks of CT images, and an elaborated hand-crafted approach based on 3D KAZE. |
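The fusion step described above, averaging the softmax scores of the 2D and 3D networks, reduces to a few lines; a minimal sketch with hypothetical logits (all values are made up, standing in for the two networks' outputs):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

# Hypothetical per-class logits for one scan: [AD, lesion, normal]
logits_2d = np.array([2.1, 0.3, -0.5])   # from the 2D slice network
logits_3d = np.array([1.4, 0.9, -0.2])   # from the 3D block network

fused = (softmax(logits_2d) + softmax(logits_3d)) / 2
classes = ["AD", "lesion", "normal"]
print(classes[int(np.argmax(fused))], np.round(fused, 3))
```

Averaging probabilities rather than logits keeps each network's contribution on a common, normalized scale.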
Effectiveness of massage therapy as co-adjuvant treatment to exercise in osteoarthritis of the knee: a randomized control trial. | BACKGROUND
The effectiveness of exercise therapy in the treatment of osteoarthritis of the knee (KOA) is widely evidenced. The current study aims to assess the effectiveness of adding massage therapy as a co-adjuvant to exercise treatment for KOA.
METHODS
A blind, randomized controlled trial design was used. Eighteen women were randomly allocated to two groups: Group A was treated with massage therapy plus an exercise program, and Group B with the exercise program alone. The intervention lasted 6 weeks. Outcomes were assessed using a verbal analogue scale (VAS), the WOMAC index, and the Get-Up and Go test; data were collected at baseline, post-treatment, and at 1- and 3-month follow-ups. Values were considered statistically significant at p < 0.05. The Mann-Whitney U test was applied to assess differences between groups, and the Friedman test for repeated measures, complemented with multiple-comparison tests, was carried out to assess changes over time within groups.
RESULTS
In both groups, significant differences were found in the three variables between the baseline measurement and three months after treatment, with the exception of the WOMAC variable in Group B (p = 0.064). No significant differences were found between the two groups in the WOMAC index (p = 0.508), the VAS (p = 0.964), or the Get-Up and Go test (p = 0.691).
CONCLUSION
Combining massage therapy with exercise-based therapy may lead to clinical improvement in patients with KOA; however, compared with exercise alone, adding massage therapy to the treatment of gonarthrosis does not seem to confer any additional benefit. |
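For readers unfamiliar with the between-group test used in the trial's analysis, here is a minimal scipy sketch with synthetic change scores (purely illustrative; the nine-per-arm group sizes match the trial's allocation, but the numbers are invented):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Hypothetical WOMAC change scores for the two arms (n = 9 each)
group_a = rng.normal(loc=-12, scale=5, size=9)   # massage + exercise
group_b = rng.normal(loc=-10, scale=5, size=9)   # exercise alone

stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")  # p >= 0.05 -> no detected difference
```

The Mann-Whitney U test is a rank-based alternative to the t-test, appropriate here given the small samples and no normality assumption.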
Real-time self-adaptive deep stereo | Deep convolutional neural networks trained end-to-end are the undisputed state-of-the-art methods to regress dense disparity maps directly from stereo pairs. However, such methods suffer from notable accuracy drops when exposed to scenarios significantly different from those seen in the training phase (e.g., real vs. synthetic images, indoor vs. outdoor, etc.). As it is unlikely that enough samples can be gathered for effective training/tuning in any target domain, we propose to perform unsupervised and continuous online adaptation of a deep stereo network in order to preserve its accuracy independently of the sensed environment. However, such a strategy can be extremely demanding in terms of computational resources, precluding real-time performance. Therefore, we address this side effect by introducing a new lightweight, yet effective, deep stereo architecture, the Modularly ADaptive Network (MADNet), and by developing Modular ADaptation (MAD), an algorithm to train only sub-portions of our model independently. By deploying MADNet together with MAD we propose the first-ever real-time self-adaptive deep stereo system. |
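A hedged sketch of the modular online-adaptation idea (the tiny network, loss, and module split below are placeholders, not MADNet's actual architecture or API): the full network always runs inference, but each incoming frame updates only one sub-portion, keeping the per-frame adaptation cost low:

```python
import itertools
import torch
import torch.nn as nn

# Tiny stand-in network with two independently adaptable "sub-portions".
class TinyStereoNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Conv2d(6, 8, 3, padding=1)   # sub-portion 0
        self.head = nn.Conv2d(8, 1, 3, padding=1)       # sub-portion 1
    def forward(self, left, right):
        return self.head(self.features(torch.cat([left, right], dim=1)))

def unsupervised_loss(left, right, disparity):
    # Placeholder: a real system would warp `right` by `disparity` and
    # use the photometric reprojection error; a smoothness proxy keeps
    # this sketch self-contained and runnable.
    return disparity.abs().mean()

model = TinyStereoNet()
groups = [model.features.parameters(), model.head.parameters()]
optims = [torch.optim.SGD(g, lr=1e-4) for g in groups]
pick = itertools.cycle(range(len(optims)))

for _ in range(4):                       # stand-in for the live video stream
    left = torch.rand(1, 3, 32, 64)
    right = torch.rand(1, 3, 32, 64)
    disp = model(left, right)            # inference always uses the full net
    i = next(pick)                       # MAD idea: adapt one sub-portion
    loss = unsupervised_loss(left, right, disp)
    for o in optims:
        o.zero_grad()
    loss.backward()
    optims[i].step()                     # only module i's weights change
```

Updating a single sub-portion per frame amortizes the cost of backpropagation across the stream, which is what makes continuous adaptation compatible with real-time constraints.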
Low-Cost Wideband Microstrip Antenna Array for 60-GHz Applications | A low-cost single-layer wideband microstrip antenna array is presented for 60-GHz band applications. Each microstrip radiation element is fed by a modified L-shaped probe to enhance the impedance bandwidth. The proposed microstrip antenna array is excited by a novel coplanar waveguide (CPW) feed network, which not only has a simple structure by abandoning the traditional air bridges (wire bonds) above the CPW T-junctions but also provides pairs of broadband differential outputs. Experimentally, two 4 × 4 antenna arrays with different polarizations were designed and fabricated on a double-sided single-layer printed circuit board. The linearly polarized array exhibits an impedance bandwidth ( SWR ≤ 2) of 25.5% and a gain of around 15.2 dBi. The circularly polarized array, employing the same CPW feed network to excite sequentially rotated circularly polarized elements, achieves an impedance bandwidth ( SWR ≤ 2) of 17.8%, a 3-dB axial ratio bandwidth of 15.6%, and a gain of around 14.5 dBi. |
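For context on the SWR ≤ 2 figure of merit, here is a generic calculation (standard antenna formulas, not taken from the paper) relating SWR to the reflection coefficient and computing fractional bandwidth; the example frequency span is hypothetical:

```python
def vswr(gamma_mag):
    """Voltage standing-wave ratio from reflection-coefficient magnitude."""
    return (1 + gamma_mag) / (1 - gamma_mag)

def fractional_bandwidth(f_low_ghz, f_high_ghz):
    """Impedance bandwidth as a percentage of the centre frequency."""
    centre = (f_low_ghz + f_high_ghz) / 2
    return 100 * (f_high_ghz - f_low_ghz) / centre

# SWR <= 2 corresponds to |Gamma| <= 1/3 (return loss of about 9.5 dB)
print(vswr(1 / 3))                         # 2.0
# A hypothetical 53-68.5 GHz matched span is ~25.5% fractional bandwidth
print(round(fractional_bandwidth(53.0, 68.5), 1))
```

Quoting bandwidth as a percentage of the centre frequency is what allows arrays at different bands to be compared directly.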
StressAware: An app for real-time stress monitoring on the amulet wearable platform | Stress is the root cause of many diseases and unhealthy behaviors. Being able to monitor when and why a person is stressed could inform personal stress management as well as interventions when necessary. In this work, we present StressAware, an application on the Amulet wearable platform that classifies the stress level (low, medium, high) of individuals continuously and in real time using heart rate (HR) and heart-rate variability (HRV) data from a commercial heart-rate monitor. We developed our stress-detection model using a Support Vector Machine (SVM). We trained and tested our model using data from three sources, with the following preliminary results: PhysioNet, a public physiological database (94.5% accurate with 10-fold cross-validation), a field study (100% accurate with 10-fold cross-validation) and a lab study (64.3% accurate with leave-one-out cross-validation). Testing the StressAware app revealed a projected battery life of up to 12 days. Also, usability feedback from subjects showed that the Amulet has the potential to be used for monitoring stress levels. The results are promising, indicating that the app may be used for stress detection, and eventually for the development of stress-related interventions that could improve the health of individuals. |
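A minimal sklearn sketch of the modelling pipeline described (synthetic HR/HRV-style features; the feature set, labels, and kernel are assumptions, not the authors' configuration):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Synthetic feature matrix: columns could be mean HR, SDNN, RMSSD, ...
X = rng.normal(size=(300, 3))
y = rng.integers(0, 3, size=300)          # 0=low, 1=medium, 2=high stress

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold CV, as in the paper
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Standardizing the features before the SVM matters in practice, since HR and HRV features live on very different numeric scales.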
5 Adaptive Robust Extended Kalman Filter | The extended Kalman filter (EKF) is one of the most widely used methods for state estimation in communication and aerospace applications, owing to its apparent simplicity and tractability (Shi et al., 2002; Bolognani et al., 2003; Wu et al., 2004). However, for an EKF to guarantee satisfactory performance, the system model must be known exactly. Unknown external disturbances may make the state estimate inaccurate, or even cause divergence. This difficulty has been recognized in the literature (Reif & Unbehauen, 1999; Reif et al., 2000), and several schemes have been developed to overcome it. A traditional approach to improving the performance of the filter is the 'covariance setting' technique, where a positive definite estimation error covariance matrix is chosen by the filter designer (Einicke et al., 2003; Bolognani et al., 2003). As it is difficult to manually tune the covariance matrix for a dynamic system, adaptive extended Kalman filter (AEKF) approaches for online estimation of the covariance matrix have been adopted (Kim & Iltis, 2004; Yu et al., 2005; Ahn & Won, 2006). However, the optimal estimate of the covariance matrix can be obtained only in some special cases, and an inaccurate approximation of the covariance matrix may blur the state estimate. Recently, the robust H∞ filter has received considerable attention (Theodor et al., 1994; Shen & Deng, 1999; Zhang et al., 2005; Tseng & Chen, 2001). Robust filters take different forms depending on what kinds of disturbances are accounted for, while the general performance criterion is to guarantee a bounded energy gain from the worst possible disturbance to the estimation error. Although the robust extended Kalman filter (REKF) has been deeply investigated (Einicke & White, 1999; Reif et al., 1999; Seo et al., 2006), how to prescribe the level of disturbance attenuation is still an open problem. In general, the selection of the attenuation level can be seen as a tradeoff between optimality and robustness; in other words, the robustness of the REKF is obtained at the expense of optimality. This chapter reviews the adaptive robust extended Kalman filter (AREKF), an effective algorithm which remains stable in the presence of unknown disturbances and yields accurate estimates in their absence (Xiong et al., 2008). The key idea of the AREKF is to design the estimator based on a stability analysis, and to determine whether the error covariance matrix should be reset according to the magnitude of the innovation. |
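To make the innovation-based reset concrete, here is a simplified linear-measurement sketch (illustrative only, not the AREKF equations of Xiong et al., 2008): when the normalized innovation is implausibly large, the error covariance is reset to a conservative value before the gain is computed:

```python
import numpy as np

def kf_update(x, P, z, H, R, P_reset, threshold):
    """One measurement update with an innovation-based covariance reset.

    If the normalized innovation exceeds `threshold` (suggesting an
    unmodeled disturbance), P is reset to the conservative P_reset,
    which increases the Kalman gain and re-weights the measurement."""
    innovation = z - H @ x
    S = H @ P @ H.T + R                        # innovation covariance
    nis = innovation @ np.linalg.solve(S, innovation)
    if nis > threshold:
        P = P_reset.copy()                     # reset: trust measurements more
        S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ innovation
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy usage: 2-state system, scalar position measurement
x = np.zeros(2); P = np.eye(2)
H = np.array([[1.0, 0.0]]); R = np.array([[0.1]])
x, P = kf_update(x, P, np.array([5.0]), H, R, 10 * np.eye(2), threshold=9.0)
print(np.round(x, 2))
```

Without the reset, a filter whose covariance has converged to small values largely ignores measurements, which is exactly when an unknown disturbance can drive it to diverge.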
Big Data fraud detection using multiple medicare data sources | Healthcare in the United States (U.S.) is important in the lives of many citizens, but unfortunately the high costs of health-related services leave many patients with limited medical care. In response, the U.S. government has established and funded programs, such as Medicare [1], that provide financial assistance for qualifying people to receive needed medical services [2]. There are a number of issues facing healthcare. |
A Trench-Gate High-Conductivity IGBT (HiGT) With Short-Circuit Capability | This paper describes a new 600-V trench-gate high-conductivity insulated gate bipolar transistor (trench HiGT) that has both a low collector-emitter saturation voltage of 1.55 V at 200 and a tough short-circuit capability of more than 10. The trench HiGT also has a better tradeoff between turn-off switching loss and collector-emitter saturation voltage than either an insulated gate bipolar transistor (IGBT) with a planar gate or a conventional trench-gate IGBT. A reverse transfer capacitance that is 50% lower than that of the planar-gate IGBT and an input capacitance that is 40% lower than that of a conventional trench-gate IGBT have been obtained for the trench HiGT. |