UNIFYing Cloud and Carrier Network Resources: An Architectural View
Cloud networks provide various services on top of virtualized compute and storage resources. The flexible operation and optimal usage of the underlying infrastructure are realized by resource orchestration methods and virtualization techniques developed in recent years. In contrast, service deployment and service provisioning in carrier networks have several limitations in terms of flexibility, scalability, and optimal resource usage, as the built-in mechanisms are strongly coupled to the physical topology and special-purpose hardware elements. Network Function Virtualization (NFV) opens the door between cloud and carrier networks by providing software-based telecommunication services which can run in virtualized environments on general-purpose hardware. Our main goal is to unify software and network resources in a common framework. In this paper, we propose a novel architecture supporting automated, dynamic service creation based on a fine-granular service chaining model, SDN, and cloud virtualization techniques. First, we introduce the architecture with its main components. Second, the most important benefits are highlighted and compared to other state-of-the-art approaches. Finally, preliminary experiences with our proof-of-concept prototypes are presented.
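To make the service chaining idea concrete, here is a minimal illustrative sketch (not the paper's actual model) of a fine-granular service chain as an ordered list of network functions with resource demands that an orchestrator could map onto unified compute and network resources; the function names and demand fields are hypothetical.

```python
# Hypothetical service-chain description; field names are illustrative only.
service_chain = [
    {'nf': 'firewall', 'cpu': 2, 'mem_gb': 4},
    {'nf': 'nat',      'cpu': 1, 'mem_gb': 2},
    {'nf': 'dpi',      'cpu': 4, 'mem_gb': 8},
]

def total_demand(chain):
    # Aggregate resource demand an orchestrator must place on the substrate.
    return {'cpu': sum(nf['cpu'] for nf in chain),
            'mem_gb': sum(nf['mem_gb'] for nf in chain)}

print(total_demand(service_chain))  # {'cpu': 7, 'mem_gb': 14}
```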
Treatment of duodenal ulcer with pirenzepine and cimetidine.
The purpose of this single-blind controlled multicentre trial was to compare the relative effectiveness of pirenzepine and cimetidine in healing endoscopically proven duodenal ulcers. One hundred and twenty-six patients with duodenal ulcer were treated with a daily dose of 100 mg pirenzepine (50 mg each before breakfast and before the evening meal), and 128 patients were treated with 1000 mg cimetidine (200 mg with breakfast, lunch, and the evening meal and 400 mg at bedtime). Endoscopy was repeated after four weeks by an endoscopist who had not been informed about the treatment. Pirenzepine showed a healing rate of 64.3%, cimetidine one of 73.4%. This difference is not statistically significant (one-sided test: χ²(1) = 2.48). After four weeks a higher proportion of first ulcers than of recurrent lesions was healed. Pain relief was rapidly achieved with both drugs. A significant trend in favour of cimetidine may, however, not be clinically relevant considering the small difference in the absolute numbers of pain-free days and nights. Adverse effects were rare and reversible. We conclude that the efficacy of pirenzepine is similar to that of cimetidine in healing duodenal ulcers.
Extra- and Intracranial Cerebral Vasculitis in Giant Cell Arteritis
Recognizing giant cell arteritis (GCA) in patients with stroke may be challenging. We aimed to highlight the clinical spectrum and long-term follow-up of GCA-specific cerebrovascular accidents. Medical charts of all patients followed in a French Department of Internal Medicine for GCA between January 2008 and January 2014 were retrospectively reviewed. Patients with cerebrovascular accidents at GCA diagnosis were included. Diagnosis of GCA was based on American College of Rheumatology criteria. Transient ischemic attacks and stroke resulting from an atherosclerotic or cardioembolic mechanism were excluded. Clinical features, GCA-diagnosis workup, brain imaging, cerebrospinal fluid (CSF) study, treatment, and follow-up data were analyzed. From January 2008 to January 2014, 97 patients were followed for GCA. Among them, 8 biopsy-proven GCA patients (mean age 70±7.8 years, M/F sex ratio 3/1) had stroke at GCA diagnosis. Six patients reported headache and visual impairment. Brain MR angiography showed involvement of vertebral and/or basilar arteries in all cases, with multiple or unique ischemic lesions in the infratentorial region of the brain in all but one case. Involvement of intracranial cerebral arteries was observed in 4 cases, including 2 cases with cerebral angiitis. Long-lasting lesions on diffusion-weighted brain MRI sequences were observed in 1 case. All patients received steroids for a mean of 28.1±12.8 months. Side effects associated with long-term steroid therapy occurred in 6 patients. Relapses occurred in 4 patients and required immunosuppressive drugs in 3 cases. After a mean follow-up duration of 36.4±16.4 months, all but 1 patient achieved complete remission without major sequelae. The conjunction of headache with vertebral and basilar arteries involvement in the elderly is highly suggestive of stroke associated with GCA. Involvement of intracranial cerebral arteries, with cerebral angiitis and long-lasting brain lesions on diffusion-weighted MRI sequences, may occur in GCA. Both frequent relapses and steroid-induced side effects argue for the use of immunosuppressive agents combined with steroids as first-line therapy.
Robust and sparse estimation of tensor decompositions
We propose novel tensor decomposition methods that advocate both sparsity and robustness to outliers. The sparsity enables us to extract essential features from big data that are easily interpretable. The robustness ensures resistance to outliers that appear commonly in high-dimensional data. We first propose a method that generalizes ridge regression in an M-estimation framework for tensor decompositions. The other approach we propose combines least absolute deviation (LAD) regression and the least absolute shrinkage and selection operator (LASSO) for CANDECOMP/PARAFAC (CP) tensor decompositions. We also formulate various robust tensor decomposition methods using different loss functions. The simulation study shows that our robust-sparse methods outperform other general tensor decomposition methods in the presence of outliers.
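As a rough illustration of the combined LAD-LASSO idea, the following sketch (our own, under the assumption of a 3-way tensor and given factor matrices A, B, C) evaluates the robust L1 data-fit term plus an L1 sparsity penalty for a rank-R CP model; lam is a hypothetical penalty weight, not a value from the paper.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    # X_hat[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def lad_lasso_objective(X, A, B, C, lam=0.1):
    resid = X - cp_reconstruct(A, B, C)
    lad = np.abs(resid).sum()                              # robust L1 data fit (LAD)
    lasso = lam * sum(np.abs(M).sum() for M in (A, B, C))  # L1 sparsity penalty
    return lad + lasso
```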
Occupancy-driven energy management for smart building automation
Buildings are among the largest consumers of electricity in the US. A significant portion of this energy use in buildings can be attributed to HVAC systems used to maintain comfort for occupants. In most cases these building HVAC systems run on fixed schedules and do not employ any fine-grained control based on detailed occupancy information. In this paper we present the design and implementation of a presence sensor platform that can be used for accurate occupancy detection at the level of individual offices. Our presence sensor is low-cost, wireless, and incrementally deployable within existing buildings. Using a pilot deployment of our system across ten offices over a two-week period we identify significant opportunities for energy savings due to periods of vacancy. Our energy measurements show that our presence node has an estimated battery lifetime of over five years, while detecting occupancy accurately. Furthermore, using a building simulation framework and the occupancy information from our testbed, we show potential energy savings from 10% to 15% using our system.
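As a minimal sketch of the occupancy-driven control idea (not the authors' implementation), the snippet below sets back a per-office HVAC setpoint whenever the presence sensor reports vacancy; the setpoint values and the sensor/HVAC interfaces are assumptions for illustration.

```python
OCCUPIED_C, VACANT_C = 22.0, 26.0   # cooling setpoints in Celsius (assumed)

def setpoint_for(occupied):
    # Relax the setpoint when the office is vacant to save HVAC energy.
    return OCCUPIED_C if occupied else VACANT_C

def update_all(sensors, hvac):
    # sensors: dict office -> bool occupancy; hvac: callable(office, setpoint)
    for office, occupied in sensors.items():
        hvac(office, setpoint_for(occupied))
```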
A 4 DOF exoskeleton robot with a novel shoulder joint mechanism
At present, medical experts and researchers are turning their attention towards using robotic devices to facilitate human limb rehabilitation. An exoskeleton is one such robotic device, used to perform rehabilitation, motion assistance, and power augmentation tasks. For effective operation, it is supposed to follow the structure and the motion of the natural human limb. This paper proposes a robotic rehabilitation exoskeleton with a novel shoulder joint actuation mechanism with a moving center of glenohumeral (CGH) joint. The proposed exoskeleton has four active degrees of freedom (DOFs), namely shoulder flexion/extension, abduction/adduction, pronation/supination (external/internal rotation), and elbow flexion/extension. In addition to the motions mentioned above, three passive DOFs have been introduced to the shoulder joint mechanism in order to accommodate the scapular motion of the shoulder. The novel mechanism allows movement of the CGH joint in two planes: the frontal plane during shoulder abduction/adduction and the transverse plane during flexion/extension. The displacement of the CGH joint axis was measured experimentally. These results are then incorporated into the novel mechanism, which takes into account the natural movement characteristics of the human shoulder joint. It is intended to reduce excessive stress on the patient's upper limb while carrying out rehabilitation exercises.
Dc-bus voltage regulation and power compensation with bi-directional inverter in dc-microgrid applications
This paper presents a single-phase bi-directional inverter with dc-bus voltage regulation and power compensation for dc-microgrid applications. In dc-microgrid applications, a power distribution system requires a bi-directional inverter to control the power flow between the dc bus and the ac grid, and to regulate the dc bus within a certain range of voltages, in which the dc load may change abruptly, resulting in high dc-bus voltage variation. In this paper, we take this variation into account and propose an on-line regulation mechanism according to the inductor current levels to balance power flow and enhance dynamic performance. Additionally, for power compensation and islanding protection, the bi-directional inverter can shift its current commands according to the specified power factor at the ac grid side. Simulation and experimental results obtained from a 5 kW single-phase bi-directional inverter have verified the analysis and discussion.
Algorithmic motion planning
Motion planning is a fundamental problem in robotics. It comes in a variety of forms, but the simplest version is as follows. We are given a robot system B, which may consist of several rigid objects attached to each other through various joints, hinges, and links, or moving independently, and a 2D or 3D environment V cluttered with obstacles. We assume that the shape and location of the obstacles and the shape of B are known to the planning system. Given an initial placement Z1 and a final placement Z2 of B, we wish to determine whether there exists a collision-avoiding motion of B from Z1 to Z2, and, if so, to plan such a motion. In this simplified and purely geometric setup, we ignore issues such as incomplete information, nonholonomic constraints, control issues related to inaccuracies in sensing and motion, nonstationary obstacles, optimality of the planned motion, and so on. Since the early 1980s, motion planning has been an intensive area of study in robotics and computational geometry. In this chapter we will focus on algorithmic motion planning, emphasizing theoretical algorithmic analysis of the problem and seeking worst-case asymptotic bounds, and only mention briefly practical heuristic approaches to the problem. The majority of this chapter is devoted to the simplified version of motion planning, as stated above. Section 51.1 presents general techniques and lower bounds. Section 51.2 considers efficient solutions to a variety of specific moving systems with a small number of degrees of freedom. These efficient solutions exploit various sophisticated methods in computational and combinatorial geometry related to arrangements of curves and surfaces (Chapter 30). Section 51.3 then briefly discusses various extensions of the motion planning problem such as computing optimal paths with respect to various quality measures, computing the path of a tethered robot, incorporating uncertainty, moving obstacles, and more.
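For the simplest instance of the decision problem above, a point robot on a grid, free cells approximate the free configuration space and a breadth-first search answers whether a collision-avoiding motion from Z1 to Z2 exists. This toy sketch is ours, not an algorithm from the chapter; free, z1, and z2 are illustrative inputs.

```python
from collections import deque

def motion_exists(free, z1, z2):
    # free: set of (x, y) collision-free cells; z1, z2: start/goal cells.
    queue, seen = deque([z1]), {z1}
    while queue:
        x, y = queue.popleft()
        if (x, y) == z2:
            return True
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False  # z2 unreachable in the free space
```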
Text information extraction in images and video: a survey
Text data present in images and video contain useful information for automatic annotation, indexing, and structuring of images. Extraction of this information involves detection, localization, tracking, extraction, enhancement, and recognition of the text from a given image. However, variations of text due to differences in size, style, orientation, and alignment, as well as low image contrast and complex background make the problem of automatic text extraction extremely challenging. While comprehensive surveys of related problems such as face detection, document analysis, and image & video indexing can be found, the problem of text information extraction is not well surveyed. A large number of techniques have been proposed to address this problem, and the purpose of this paper is to classify and review these algorithms, discuss benchmark data and performance evaluation, and to point out promising directions for future research.
Pleural fluid accumulation due to intra-abdominal endometriosis: a case report and review of the literature.
A case is presented of massive ascites and right-sided pleural effusion caused by endometriosis. The final diagnosis was not made for a considerable time. Massive ascites and a right-sided pleural effusion caused by endometriosis are rare, with fewer than 10 reports in the literature worldwide. Physicians should be aware of this potentially treatable cause, having excluded other possibilities such as malignancy and tuberculosis.
Smart Irrigation Using Internet of Things
Automation of agricultural systems is increasingly important. This paper proposes an automated system for irrigating fields. An ESP-8266 Wi-Fi module connects the system to the internet. Various sensors check the moisture content of the soil, and water is supplied to the soil through a motor pump. IoT connectivity informs the farmer, through an Android application, every time water is supplied to the soil.
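A hedged pseudocode-style sketch of the control loop described above: read a soil-moisture sensor, switch the pump, and notify the farmer. read_moisture(), set_pump(), and notify() stand in for hardware and IoT calls and are hypothetical, as is the moisture threshold.

```python
import time

THRESHOLD = 30  # percent moisture below which irrigation starts (assumed)

def irrigation_loop(read_moisture, set_pump, notify):
    while True:
        moisture = read_moisture()
        if moisture < THRESHOLD:
            set_pump(True)
            notify(f"Irrigation started (moisture {moisture}%)")
        else:
            set_pump(False)
        time.sleep(60)  # sample once a minute
```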
Effect of Ramadan fasting on metabolic markers, body composition, and dietary intake in Emiratis of Ajman (UAE) with metabolic syndrome
BACKGROUND/AIM The aim of the study was to evaluate the effect of Ramadan fasting on metabolic markers, body composition, and dietary intake in native Emiratis of Ajman, UAE, with the metabolic syndrome (MS). DESIGN 19 patients (14 female, 5 male) aged 37.1 ± 12.5 years were encouraged to make healthy lifestyle changes during fasting, and data were collected 1 week before and in the fourth week of Ramadan. RESULTS No patients experienced complications or increased symptoms of hypoglycemia during Ramadan. Total energy consumption remained similar. Meal frequency decreased (3.2 ± 0.5 vs 2.1 ± 0.4 meals/day). Protein intake decreased 12% (P = 0.04) but fat intake increased 23% (P = 0.03). Body weight (103.9 ± 29.8 vs 102.1 ± 29.0 kg, P = 0.001) and waist circumference (123 ± 14 vs 119 ± 17 cm, P = 0.001) decreased. Forty percent of patients increased their physical activity due to increased praying hours. Fasting P-glucose (6.3 ± 1.7 vs 6.8 ± 2.0 mmol/L, P = 0.024) and B-HbA(1c) concentrations (6.3 ± 0.9% vs 6.5 ± 0.9%, P = 0.003) increased, but P-insulin concentration, HOMA-IR index, and lipid concentrations remained unchanged. CONCLUSION The present study investigated the effect of Ramadan fasting on dietary intake, metabolic parameters, and body composition, showing that energy consumption per day did not decrease, although fat intake increased. However, the patients lost weight and reduced their waist circumference. Ramadan fasting also elicited small but significant increases in glucose and HbA(1c) after 4 weeks.
Transfer learning for time series classification
Transfer learning for deep neural networks is the process of first training a base network on a source dataset, and then transferring the learned features (the network's weights) to a second network to be trained on a target dataset. This idea has been shown to improve deep neural networks' generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been investigated thoroughly for the TSC task. This is surprising, as the accuracy of deep learning models for TSC could potentially be improved if the model is fine-tuned from a pre-trained neural network instead of trained from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive, which is the largest publicly available TSC benchmark containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets, resulting in 7140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade the model's predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-dataset similarities. We describe how our method can guide the transfer to choose the best source dataset, leading to an improvement in accuracy on 71 out of 85 datasets.
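To illustrate the source-selection idea, the sketch below ranks candidate source datasets by DTW distance between per-dataset prototype series. The plain O(nm) dtw() implementation is standard, but representing each dataset by a single prototype and ranking by a single distance is our simplification of the paper's method.

```python
import numpy as np

def dtw(a, b):
    # Classic dynamic-programming DTW distance between two 1-D series.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def best_source(target_proto, source_protos):
    # source_protos: dict mapping dataset name -> prototype series.
    return min(source_protos, key=lambda name: dtw(target_proto, source_protos[name]))
```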
The role of relative entropy in quantum information theory
Quantum mechanics and information theory are among the most important scientific discoveries of the last century. Although these two areas initially developed separately, it has emerged that they are in fact intimately related. In this review the author shows how quantum information theory extends traditional information theory by exploring the limits imposed by quantum, rather than classical, mechanics on information storage and transmission. The derivation of many key results differentiates this review from the usual presentation in that they are shown to follow logically from one crucial property of relative entropy. Within the review, optimal bounds on the enhanced speed that quantum computers can achieve over their classical counterparts are outlined using information-theoretic arguments. In addition, important implications of quantum information theory for thermodynamics and quantum measurement are intermittently discussed. A number of simple examples and derivations, including quantum superdense coding, quantum teleportation, and Deutsch’s and Grover’s algorithms, are also included.
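As context for the abstract's claim, here is a minimal statement of the quantum relative entropy and its monotonicity under completely positive trace-preserving (CPTP) maps; that monotonicity is the kind of key property the review refers to, though identifying it as the exact one is our reading.

```latex
% Quantum relative entropy of density matrices \rho, \sigma, and its
% monotonicity under any CPTP map \mathcal{E} (data-processing inequality).
S(\rho \,\|\, \sigma) = \operatorname{Tr}\!\left[\rho\,(\log\rho - \log\sigma)\right] \ge 0,
\qquad
S\!\left(\mathcal{E}(\rho) \,\big\|\, \mathcal{E}(\sigma)\right) \le S(\rho \,\|\, \sigma).
```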
A pilot study of the efficacy of heart rate variability (HRV) biofeedback in patients with fibromyalgia.
UNLABELLED Fibromyalgia (FM) is a non-inflammatory rheumatologic disorder characterized by musculoskeletal pain, fatigue, depression, cognitive dysfunction and sleep disturbance. Research suggests that autonomic dysfunction may account for some of the symptomatology of FM. An open-label trial of biofeedback training was conducted to manipulate suboptimal heart rate variability (HRV), a key marker of autonomic dysfunction. METHODS Twelve women ages 18-60 with FM completed 10 weekly sessions of HRV biofeedback. They were taught to breathe at their resonant frequency (RF) and asked to practice twice daily. At sessions 1, 10 and 3-month follow-up, physiological and questionnaire data were collected. RESULTS There were clinically significant decreases in depression and pain and improvement in functioning from Session 1 to the 3-month follow-up. For depression, the improvement occurred by Session 10. HRV and blood pressure variability (BPV) increased during biofeedback tasks. HRV increased from Sessions 1-10, while BPV decreased from Session 1 to the 3-month follow-up. CONCLUSIONS These data suggest that HRV biofeedback may be a useful treatment for FM, perhaps mediated by autonomic changes. While HRV effects were immediate, blood pressure, baroreflex, and therapeutic effects were delayed. This is consistent with data on the relationship among stress, HPA axis activity, and brain function.
Who Owns the Media
We examine the patterns of media ownership in 97 countries around the world. We find that almost universally the largest media firms are owned by the government or by private families. Government ownership is more pervasive in broadcasting than in the printed media. Government ownership of the media is generally associated with less press freedom, fewer political and economic rights, and, most conspicuously, inferior social outcomes in the areas of education and health. It does not appear that adverse consequences of government ownership of the media are restricted solely to the instances of government monopoly.
Signal processing of sensor node data for vehicle detection
We describe an algorithm and experimental work for vehicle detection using sensor node data. Both acoustic and magnetic signals are processed for vehicle detection. We propose a real-time vehicle detection algorithm called the adaptive threshold algorithm (ATA). The algorithm first computes the time-domain energy distribution curve and then slices the energy curve using a threshold updated adaptively by some decision states. Finally, the hard decision results from threshold slicing are passed to a finite-state machine, which makes the final vehicle detection decision. Real-time tests and offline simulations both demonstrate that the proposed algorithm is effective.
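An illustrative sketch of the adaptive-threshold idea described above: compute a short-time energy curve, slice it with a slowly adapting threshold, and confirm detections with a small state machine. The window size, adaptation rate, threshold factor, and confirmation count are hypothetical parameters, not the paper's values; the sketch assumes len(signal) >= win.

```python
import numpy as np

def detect_vehicle(signal, win=256, alpha=0.99, k=3.0, confirm=5):
    frames = signal[:len(signal) // win * win].reshape(-1, win)
    energy = (frames ** 2).mean(axis=1)                # time-domain energy curve
    baseline = energy[0]
    hits, detections = 0, []
    for e in energy:
        if e > k * baseline:                           # threshold slicing
            hits += 1
        else:
            hits = 0
            baseline = alpha * baseline + (1 - alpha) * e  # adapt in quiet state
        detections.append(hits >= confirm)             # FSM: confirm before declaring
    return detections
```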
Planar fiber packaging method for silicon photonic integrated circuits
A novel method for fiber packaging silicon waveguides is presented. The process uses angled fibers and capillary action of UV-cure epoxy. The technique is suited to passive alignment and can be scaled for fiber arrays.
Low-cost TSH (through-silicon hole) interposers for 3D IC integration
In this investigation, a SiP (system-in-package) which consists of a very low-cost interposer with through-silicon holes (TSHs) and chips on its top and bottom sides (a true 3D IC integration) is studied. Emphasis is placed on the fabrication of a test vehicle to demonstrate the feasibility of this SiP technology. The design, materials, and process of the top chip, bottom chip, TSH interposer, and final assembly will be presented. Shock and thermal cycling tests will be performed to demonstrate the integrity of the SiP structure.
Impulse Noise Protection Initiatives in VDSL 2 Systems
In recent years, the VDSL2 standard has been gaining popularity as a high-speed network access technology to deliver triple-play services of video, voice, and data. These services impose strict quality-of-experience (QoE) and quality-of-service (QoS) requirements on DSL systems operating in an impulse noise environment. DSL systems, in turn, are severely affected by the presence of impulse noise on the telephone line. Therefore, to meet the requirements of IPTV under impulse noise conditions, the standards body has been evaluating various proposals to mitigate impulse noise and reduce error rates. This paper lists and qualitatively compares various initiatives that have been suggested in the VDSL2 standards body to improve the protection of VDSL2 services against impulse noise.
A Convolutional Neural Network Approach for Post-Processing in HEVC Intra Coding
Lossy image and video compression algorithms yield visually annoying artifacts including blocking, blurring, and ringing, especially at low bit-rates. To reduce these artifacts, post-processing techniques have been extensively studied. Recently, inspired by the great success of convolutional neural networks (CNNs) in computer vision, some research has been performed on adopting CNNs in post-processing, mostly for JPEG compressed images. In this paper, we present a CNN-based post-processing algorithm for High Efficiency Video Coding (HEVC), the state-of-the-art video coding standard. We design a Variable-filter-size Residue-learning CNN (VRCNN) to improve the performance and to accelerate network training. Experimental results show that using our VRCNN as post-processing leads to on average 4.6% bit-rate reduction compared to the HEVC baseline. The VRCNN outperforms previously studied networks in achieving higher bit-rate reduction, lower memory cost, and a multiplied computational speedup.
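A minimal PyTorch sketch in the spirit of the variable-filter-size, residue-learning design named above: parallel convolution branches with different kernel sizes are concatenated, and a global residual connection makes the network predict the compression artifact rather than the clean frame. Layer widths and kernel sizes are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class VRCNNSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 64, 5, padding=2)
        self.branch_a = nn.Conv2d(64, 16, 5, padding=2)  # variable filter sizes
        self.branch_b = nn.Conv2d(64, 32, 3, padding=1)
        self.conv3 = nn.Conv2d(48, 16, 3, padding=1)
        self.conv4 = nn.Conv2d(16, 1, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.relu(self.conv1(x))
        y = self.relu(torch.cat([self.branch_a(y), self.branch_b(y)], dim=1))
        y = self.relu(self.conv3(y))
        return x + self.conv4(y)  # residue learning: add predicted correction to input
```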
HIV-AIDS, is not a Viral Disease; It is a Metabolic Syndrome
In the past 3 decades, we have been living with the hypothesis that "HIV causes AIDS." HIV, the Human Immunodeficiency Virus, is considered the causative agent of AIDS, Acquired Immunodeficiency Syndrome, wherein the body's immune system gets damaged, opening doors for major infections. However, three prestigious Nobel laureates, including Luc Montagnier (who discovered HIV), Kary Mullis (who invented the PCR test used for HIV detection), and Wangari Maathai (renowned African environmentalist), along with thousands of other scientists and intellectuals, worked on the other side of the coin to argue time and again that the above theory is a misconception. The last 35 years have provided immense literature and investigative evidence to firmly conclude, they contend, that HIV is not the real cause of AIDS. On the other hand, malnutrition and metabolic syndrome due to drug abuse have been pointed to as the real culprits. This review article revolves around the same theory, aiming for a better understanding of why the HIV-AIDS hypothesis is held to be fallacious and searching for valid causes.
Influence of processing-induced phase transformations on the dissolution of theophylline tablets
The object of this investigation was to evaluate the influence of (1) processing-induced decrease in drug crystallinity and (2) phase transformations during dissolution, on the performance of theophylline tablet formulations. Anhydrous theophylline underwent multiple transformations (anhydrate → hydrate → anhydrate) during processing. Although the crystallinity of the anhydrate obtained finally was lower than that of the unprocessed drug, it dissolved at a slower rate. This decrease in dissolution rate was attributed to the accelerated anhydrate-to-hydrate transformation during the dissolution run. Water vapor sorption studies proved to be a good predictor of powder dissolution behavior. While a decrease in crystallinity was brought about either by milling or by granulation, the effect on tablet dissolution was pronounced only in the latter. Tablet formulations prepared from the granules exhibited higher hardness, longer disintegration time, and slower dissolution than those containing the milled drug. The granules underwent plastic deformation during compression, resulting in harder tablets with delayed disintegration. The high hardness coupled with rapid anhydrate-to-hydrate transformation during dissolution resulted in the formation of a hydrate layer on the tablet surface, which further delayed tablet disintegration and, consequently, dissolution. Phase transformations during processing and, more importantly, during dissolution influenced the observed dissolution rates. Product performance was a complex function of the physical state of the active ingredient and the processing conditions.
Covalent Organic Frameworks: From Materials Design to Biomedical Application
Covalent organic frameworks (COFs) are newly emerged crystalline porous polymers with well-defined skeletons and nanopores, mainly consisting of lightweight elements (H, B, C, N, and O) linked by dynamic covalent bonds. Compared with conventional materials, COFs possess some unique and attractive features, such as large surface area, pre-designable pore geometry, excellent crystallinity, inherent adaptability, and high flexibility in structural and functional design, thus exhibiting great potential for various applications. In particular, their large surface area, tunable porosity, and π conjugation with unique photoelectric properties enable COFs to serve as a promising platform for drug delivery, bioimaging, biosensing, and theranostic applications. In this review, we trace the evolution of COFs in terms of linkages and highlight the important issues of synthetic method, structural design, morphological control, and functionalization. We then summarize the recent advances of COFs in the biomedical and pharmaceutical sectors and conclude with a discussion of the challenges and opportunities of COFs for biomedical purposes. Although currently still in its infancy, this innovative class of materials has paved a new way to meet future challenges in human healthcare and disease theranostics.
Investigating the Agility Bias in DNS Graph Mining
The concept of agile domain name system (DNS) refers to dynamic and rapidly changing mappings between domain names and their Internet protocol (IP) addresses. This empirical paper evaluates the bias from this kind of agility for DNS-based graph theoretical data mining applications. By building on two conventional metrics for observing malicious DNS agility, the agility bias is observed by comparing bipartite DNS graphs to different subgraphs from which vertices and edges are removed according to two criteria. According to an empirical experiment with two longitudinal DNS datasets, irrespective of the criterion, the agility bias is observed to be severe particularly regarding the effect of outlying domains hosted and delivered via content delivery networks and cloud computing services. With these observations, the paper contributes to the research domains of cyber security and DNS mining. In a larger context of applied graph mining, the paper further elaborates the practical concerns related to the learning of large and dynamic bipartite graphs.
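A hedged sketch of the bipartite domain-to-IP graph construction implied above, together with a filter that removes highly agile domains (those observed with many distinct IPs). The degree threshold is a hypothetical criterion standing in for the paper's two removal criteria.

```python
import networkx as nx

# (domain, IP) observations from passive DNS; values are illustrative.
mappings = [('example.com', '1.2.3.4'), ('example.com', '5.6.7.8'),
            ('static.org', '9.9.9.9')]

G = nx.Graph()
for domain, ip in mappings:
    G.add_node(domain, part='domain')
    G.add_node(ip, part='ip')
    G.add_edge(domain, ip)

# Remove agile domains to form the comparison subgraph.
agile = [n for n, d in G.nodes(data=True)
         if d['part'] == 'domain' and G.degree(n) > 1]
G.remove_nodes_from(agile)
```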
Systems of Systems Engineering: Basic Concepts, Model-Based Techniques, and Research Directions
The term “System of Systems” (SoS) has been used since the 1950s to describe systems that are composed of independent constituent systems, which act jointly towards a common goal through the synergism between them. Examples of SoS arise in areas such as power grid technology, transport, production, and military enterprises. SoS engineering is challenged by the independence, heterogeneity, evolution, and emergence properties found in SoS. This article focuses on the role of model-based techniques within the SoS engineering field. A review of existing attempts to define and classify SoS is used to identify several dimensions that characterise SoS applications. The SoS field is exemplified by a series of representative systems selected from the literature on SoS applications. Within the area of model-based techniques the survey specifically reviews the state of the art for SoS modelling, architectural description, simulation, verification, and testing. Finally, the identified dimensions of SoS characteristics are used to identify research challenges and future research areas of model-based SoS engineering.
A Voltage Scalable 0.26 V, 64 kb 8T SRAM With V$_{\min}$ Lowering Techniques and Deep Sleep Mode
A voltage scalable 0.26 V, 64 kb 8T SRAM with 512 cells per bitline is implemented in a 130 nm CMOS process. Utilization of the reverse short channel effect in a SRAM cell design improves cell write margin and read performance without the aid of peripheral circuits. A marginal bitline leakage compensation (MBLC) scheme compensates for the bitline leakage current which becomes comparable to a read current at subthreshold supply voltages. The MBLC allows us to lower Vmin to 0.26 V and also eliminates the need for precharged read bitlines. A floating read bitline and write bitline scheme reduces the leakage power consumption. A deep sleep mode minimizes the standby leakage power consumption without compromising the hold mode cell stability. Finally, an automatic wordline pulse width control circuit tracks PVT variations and shuts off the bitline leakage current upon completion of a read operation.
Sequential Deep Learning for Human Action Recognition
We propose in this paper a fully automated deep model, which learns to classify human actions without using any prior knowledge. The first step of our scheme, based on the extension of Convolutional Neural Networks to 3D, automatically learns spatio-temporal features. A Recurrent Neural Network is then trained to classify each sequence considering the temporal evolution of the learned features for each timestep. Experimental results on the KTH dataset show that the proposed approach outperforms existing deep models, and gives comparable results with the best related works.
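A hedged PyTorch sketch of the two-stage scheme described above: a small 3D convolution extracts spatio-temporal features per clip, and an LSTM classifies the sequence of clip features over time. All dimensions, and the choice of LSTM as the recurrent unit, are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ConvRNNSketch(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((1, 4, 4)),  # collapse each clip to a fixed-size feature
        )
        self.rnn = nn.LSTM(input_size=8 * 4 * 4, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, clips):                 # clips: (batch, steps, 1, T, H, W)
        b, s = clips.shape[:2]
        feats = self.conv3d(clips.flatten(0, 1)).flatten(1).view(b, s, -1)
        out, _ = self.rnn(feats)              # temporal evolution of learned features
        return self.fc(out[:, -1])            # classify from the last timestep
```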
Foot orthoses for plantar heel pain: a systematic review and meta-analysis.
OBJECTIVE To investigate the effectiveness of foot orthoses for pain and function in adults with plantar heel pain. DESIGN Systematic review and meta-analysis. The primary outcome was pain or function categorised by duration of follow-up as short (0 to 6 weeks), medium (7 to 12 weeks) or longer term (13 to 52 weeks). DATA SOURCES Medline, CINAHL, SPORTDiscus, Embase and the Cochrane Library from inception to June 2017. ELIGIBILITY CRITERIA FOR SELECTING STUDIES Studies must have used a randomised parallel-group design and evaluated foot orthoses for plantar heel pain. At least one outcome measure for pain or function must have been reported. RESULTS A total of 19 trials (1660 participants) were included. In the short term, there was very low-quality evidence that foot orthoses do not reduce pain or improve function. In the medium term, there was moderate-quality evidence that foot orthoses were more effective than sham foot orthoses at reducing pain (standardised mean difference -0.27 (-0.48 to -0.06)). There was no improvement in function in the medium term. In the longer term, there was very low-quality evidence that foot orthoses do not reduce pain or improve function. A comparison of customised and prefabricated foot orthoses showed no difference at any time point. CONCLUSION There is moderate-quality evidence that foot orthoses are effective at reducing pain in the medium term, however it is uncertain whether this is a clinically important change.
A New Sensorless Control Scheme for Brushless DC Motors without Phase Shift Circuit
This paper presents the design, analysis, and implementation of a high performance and cost effective sensorless control scheme for the extensively used brushless DC motors (BLDCMs). Taking into consideration cost and ease of implementation, the commutation signals are obtained without the motor neutral voltage, multistage analog filters, A/D converter, and the complex digital phase shift (delay) circuit which are indispensable in the conventional sensorless control algorithms. Instead of detecting the zero crossing point of the non-excited phase back EMF to the neutral voltage, the commutation signals are extracted directly from the specific average terminal voltages with simple RC circuits and comparators in this study. Since the neutral voltage is not needed, the extracted commutation signal is insensitive to the common mode noise, hence the low speed performance is superior to the conventional methods. Moreover, the complex phase shift circuit has been eliminated. As a result, the control algorithm can be easily interfaced with the cost effective commercial Hall effect sensor based commutation ICs. Because of the inherent low cost property, the proposed algorithm is particularly suitable for cost sensitive products such as air purifiers, air blowers, cooling fans, and related home appliances. Theoretical analysis and experimental results show that the proposed control algorithm exhibits superior performance over a wide operation range when compared with the conventional solutions
Functional electrical therapy for hemiparesis alleviates disability and enhances neuroplasticity.
Impaired motor and sensory function is common in the upper limb in humans after cerebrovascular stroke and it often remains as a permanent disability. Functional electrical stimulation therapy is known to enhance the motor function of the paretic hand; however, the mechanism of this enhancement is not known. We studied whether neural plasticity has a role in this therapy-induced enhancement of the hand motor function in 20 hemiparetic subjects with chronic stroke (age 53 ± 6 years; 7 females and 13 males; 10 with cerebral infarction and 10 with cerebral haemorrhage; and time since incident 2.4 ± 2.0 years). These subjects were randomized to functional electrical therapy or conventional physiotherapy group. Both groups received upper limb treatment (twice daily sessions) for two weeks. Behavioral hand motor function and neurophysiologic transcranial magnetic stimulation (TMS) tests were applied before and after the treatment and at 6-months follow-up. TMS is useful in assessing excitability changes in the primary motor cortex. Faster corticospinal conduction and newly found muscular responses were observed in the paretic upper limb in the functional electrical therapy group but not in the conventional therapy group after the intervention. Behaviourally, faster movement times were observed in the functional electrical therapy group but not in the conventionally treated group. Despite the small number of heterogeneous subjects, functional exercise augmented with individualized electrical therapy of the paretic upper limb may enhance neuroplasticity, observed as corticospinal facilitation, in chronic stroke subjects, along with moderate improvements in the voluntary motor control of the affected limb.
Rivastigmine transdermal patch skin tolerability: results of a 1-year clinical trial in patients with mild-to-moderate Alzheimer's disease.
BACKGROUND AND OBJECTIVES Transdermal patches provide non-invasive, continuous drug delivery, and offer significant potential advantages over oral treatments. With all transdermal treatments a proportion of patients will experience some form of skin reaction. The rivastigmine patch has been approved for the treatment of mild-to-moderate Alzheimer's disease (AD) since July 2007 in the US. The aim of the component of the trial reported here was to evaluate the skin tolerability of the rivastigmine transdermal patch in patients with mild-to-moderate AD. METHODS The pivotal IDEAL trial was a 24-week, randomized, double-blind, placebo-controlled, multicentre trial of the efficacy and tolerability of the rivastigmine transdermal patch in 1195 patients with mild-to-moderate AD. This was followed by a 28-week open-label extension. Although not prospectively defined as a secondary assessment, during both phases of the study the condition of the patients' skin at the application site was evaluated. These data are reviewed in this article. RESULTS During the 24-week, double-blind phase of the study, 89.6% of patients in the target 9.5 mg/24 h patch treatment group had recorded 'no, slight or mild' signs or symptoms for their most severe application-site reaction. Erythema and pruritus were the most commonly reported reactions. No patient in any patch treatment group experienced a skin reaction that was reported as a serious adverse event. In the 9.5 mg/24 h treatment group, 2.4% of patients discontinued treatment due to an application-site reaction. During the 28-week open-label extension, the skin tolerability profile was similar to that seen in the double-blind phase. Overall, 3.7% of patients discontinued treatment due to application-site skin reactions. There was no indication that the severity of the skin reactions increased over time. CONCLUSION Overall, the data support a favourable skin tolerability profile for the rivastigmine transdermal patch, and provide reassurance that the benefits of rivastigmine patch therapy for patients with AD are not confounded by significant skin irritation problems. Nevertheless, care should be taken to follow manufacturer's advice about patch application, such as daily rotation of the application site, to minimize the risk of skin reactions.
Examining perceptions of agility in software development practice
Introduction Organizations undertaking software development are often reminded that successful practice depends on a number of non-technical issues that are managerial, cultural and organizational in nature. These issues cover aspects from appropriate corporate structure, through software process development and standardization to effective collaborative practice. Since the articulation of the 'software crisis' in the late-1960s, significant effort has been put into addressing problems related to the cost, time and quality of software development via the application of systematic processes and management practices for software engineering. Early efforts resulted in prescriptive structured methods, which have evolved and expanded over time to embrace consortia/ company-led initiatives such as the Unified Modeling Language and the Unified Process alongside formal process improvement frameworks such as the International Standards Organization's 9000 series, the Capability Maturity Model and SPICE. More recently, the philosophy behind traditional plan-based initiatives has been questioned by the agile movement, which seeks to emphasize the human and craft aspects of software development over and above the engineering aspects. Agile practice is strongly collaborative in its outlook, favoring individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan (see Sidebar 1). Early experience reports on the use of agile practice suggest some success in dealing with the problems of the software crisis, and suggest that plan-based and agile practice are not mutually exclusive. Indeed, flexibility may arise from this unlikely marriage in an aim to strike a balance between the rigor of traditional plan-based approaches and the need for adaptation of those to suit particular development situations. With this in mind, this article surveys the current practice in software engineering alongside perceptions of senior development managers in relation to agile practice in order to understand the principles of agility that may be practiced implicitly and their effects on plan-based approach.
Visual-Interactive Preprocessing of Time Series Data
Time series data is an important data type in many different application scenarios. Consequently, there are a great variety of approaches for analyzing time series data. Within these approaches different strategies for cleaning, segmenting, representing, normalizing, comparing, and aggregating time series data can be found. When combining these operations, the time series analysis preprocessing workflow has many degrees of freedom. To define an appropriate preprocessing pipeline, the knowledge of experts coming from the application domain has to be included into the design process. Unfortunately, these experts often cannot estimate the effects of the chosen preprocessing algorithms and their parameterizations on the time series. We introduce a system for the visual-interactive exploitation of the preprocessing parameter space. In contrast to ‘black box’-driven approaches designed by computer scientists based on the requirements of domain experts, our system allows these experts to visual-interactively compose time series preprocessing pipelines by themselves. Visual support is provided to choose the right order and parameterization of the preprocessing steps. We demonstrate the usability of our approach with a case study from the digital library domain, in which time-oriented scientific research data has to be preprocessed to realize a visual search and analysis application.
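A minimal sketch of the pipeline-composition idea: preprocessing steps are plain functions, and a pipeline is an ordered, parameterized list that a user could reorder and re-parameterize interactively. The step names and parameters are illustrative, not the system's actual operators.

```python
import numpy as np

def smooth(x, window=5):
    # Moving-average smoothing (a simple cleaning step).
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode='same')

def znormalize(x):
    # Standard z-normalization (a common representation step).
    return (x - x.mean()) / (x.std() + 1e-12)

def run_pipeline(series, steps):
    for fn, params in steps:       # order and parameterization are the
        series = fn(series, **params)  # degrees of freedom being explored
    return series

result = run_pipeline(np.sin(np.linspace(0, 10, 200)),
                      [(smooth, {'window': 7}), (znormalize, {})])
```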
Continuously Learning Neural Dialogue Management
We describe a two-step approach for dialogue management in task-oriented spoken dialogue systems. A unified neural network framework is proposed to enable the system to first learn by supervision from a set of dialogue data and then continuously improve its behaviour via reinforcement learning, all using gradient-based algorithms on one single model. The experiments demonstrate the supervised model's effectiveness in the corpus-based evaluation, with user simulation, and with paid human subjects. The use of reinforcement learning further improves the model's performance in both interactive settings, especially under higher-noise conditions.
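A sketch of the two-step training idea on one model: the same policy network is first trained by supervision (cross-entropy against logged system actions), then refined with REINFORCE on a dialogue-level return. The network shape, state/action sizes, and reward source are hypothetical; the original uses its own architecture and RL algorithm.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(100, 128), nn.Tanh(), nn.Linear(128, 20))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def supervised_step(state, gold_action):
    # Step 1: imitate the corpus actions.
    loss = nn.functional.cross_entropy(policy(state), gold_action)
    opt.zero_grad(); loss.backward(); opt.step()

def reinforce_step(states, actions, ret):
    # Step 2: continue improving the same model from interaction reward.
    logp = torch.distributions.Categorical(logits=policy(states)).log_prob(actions)
    loss = -(logp * ret).mean()   # maximize expected return
    opt.zero_grad(); loss.backward(); opt.step()
```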
Ledipasvir and sofosbuvir in patients with genotype 1 hepatitis C virus infection and compensated cirrhosis: An integrated safety and efficacy analysis.
UNLABELLED Patients with hepatitis C virus (HCV) infection and cirrhosis are underrepresented in clinical trials of interferon-free regimens of direct-acting antiviral agents, making it difficult to optimize therapy. We performed a post-hoc analysis of data from seven clinical trials to evaluate the efficacy and safety of the fixed-dose combination of ledipasvir (LDV) and sofosbuvir (SOF), with and without ribavirin (RBV), in 513 treatment-naïve and previously treated patients with genotype 1 HCV and compensated cirrhosis. All patients received LDV-SOF for 12 or 24 weeks with or without RBV. We determined the rates of sustained virological response (SVR) 12 weeks after treatment (SVR12) overall and for subgroups. Of the 513 patients analyzed, 69% were previously treated and 47% had failed previous treatment with a protease-inhibitor regimen. Overall, 493 patients (96%; 95% confidence interval [CI]: 94%-98%) achieved SVR12, 98% of treatment-naïve and 95% of previously treated patients. SVR12 rates did not vary greatly by treatment duration (95% of patients receiving 12 weeks and 98% of patients receiving 24 weeks of treatment), nor by addition of RBV (95% of patients receiving LDV-SOF alone and 97% of those who received LDV-SOF plus RBV), although previously treated patients receiving 12 weeks of LDV-SOF without RBV had an SVR12 rate of 90%. One patient discontinued LDV-SOF because of an adverse event (AE). The most common AEs were headache (23%), fatigue (16%-19%), and asthenia (14%-16%). One patient (<1%) of those receiving LDV-SOF alone, and 4 (2%) of those receiving LDV-SOF plus RBV had treatment-related serious AEs. CONCLUSIONS This analysis suggests that 12 weeks of LDV-SOF is safe and effective for treatment-naïve patients with HCV genotype 1 and compensated cirrhosis. The relatively lower SVR in treatment-experienced patients treated with 12 weeks of LDV-SOF raises the question of whether these patients would benefit from adding RBV or extending treatment duration to 24 weeks.
Beyond Bitcoin: A Critical Look at Blockchain-Based Systems
After more than eight years since the launch of Bitcoin, the decentralized transaction ledger functionality implemented through blockchain technology is being used not only for cryptocurrencies, but to register, confirm, and transfer any kind of contract and property. In this work, we analyze the most relevant functionalities and known issues of this technology, with the intent of pointing out behaviours that are not as efficient and reliable as they should be when viewed from a broader perspective.
Neural network regularization and ensembling using multi-objective evolutionary algorithms
Regularization is an essential technique to improve the generalization of neural networks. Traditionally, regularization is conducted by including an additional term in the cost function of a learning algorithm. One main drawback of these regularization techniques is that a hyperparameter determining to what extent the regularization influences the learning algorithm must be set beforehand. This paper addresses the neural network regularization problem from a multi-objective optimization point of view. During the optimization, both the structure and the parameters of the neural network are optimized. Slightly modified versions of two multi-objective optimization algorithms, the dynamic weighted aggregation (DWA) method and the elitist non-dominated sorting genetic algorithm (NSGA-II), are used and compared. An evolutionary multi-objective approach to neural network regularization has a number of advantages compared to traditional methods. First, a number of models spanning a spectrum of model complexity can be obtained in one optimization run instead of only one single solution. Second, an efficient new regularization term can be introduced that is not applicable to gradient-based learning algorithms. As a natural by-product of the multi-objective optimization approach to neural network regularization, neural network ensembles can easily be constructed using the obtained networks with different levels of model complexity. Thus, the model complexity of the ensemble can be adjusted by adjusting the weight of each member network in the ensemble. Simulations are carried out on a test function to illustrate the feasibility of the proposed ideas.
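An illustrative sketch of the multi-objective view of regularization: each candidate network is scored on the pair (training error, complexity), and the non-dominated set replaces a single hyperparameter-weighted cost. Counting nonzero weights as the complexity measure is one of several possible choices, not necessarily the paper's; a full NSGA-II loop is omitted.

```python
import numpy as np

def objectives(weights, error_fn):
    # Two objectives instead of error + lambda * penalty.
    return error_fn(weights), np.count_nonzero(weights)

def pareto_front(scored):
    # scored: list of (candidate, (error, complexity)) pairs.
    front = []
    for cand, (e, c) in scored:
        dominated = any(e2 <= e and c2 <= c and (e2, c2) != (e, c)
                        for _, (e2, c2) in scored)
        if not dominated:
            front.append(cand)   # keep the whole spectrum of trade-offs
    return front
```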
French maritime pine bark extract Pycnogenol dose-dependently lowers glucose in type 2 diabetic patients.
Pycnogenol, a standardized extract from the bark of the French maritime pine, consists of phenolic compounds including catechin, taxifolin, procyanidins, and phenolic acids (1). We investigated whether Pycnogenol has a glucose-lowering effect because of personal verbal communication from patients reporting no need for insulin following supplementation with Pycnogenol. The study was designed as an open, controlled, dose-finding study and was approved by the ethical committee of Guangnamen Hospital. Patients gave written informed consent. We recruited 18 men and 12 women among outpatients of the Guangnamen Hospital and Municipal Dental Hospital. Patients were 28-64 years of age and had a BMI of 22-34 kg/m². Patients with type 2 diabetes were included with fasting plasma glucose between 7 and 10 mmol/l after participation in a diet and sports program for 1 month. Exclusion criteria were type 1 diabetes, manifest or malignant hypertension, any diseases requiring continuous treatment with drugs, and pregnancy or lactation. During the first and last visit, a physical examination and assessment of demographic data, medical history, body weight, height, vital signs, blood pressure, electrocardiogram, diet, and medication was carried out. Samples for fasting blood glucose, HbA1c, insulin, and endothelin-1 were taken. Blood samples were taken to measure postprandial blood glucose 2 h after breakfast. Glucose was measured enzymatically, HbA1c by high-performance liquid chromatography, and insulin and endothelin-1 by immunoassays. Statistical analysis was done with SPSS 16.0 software using one-factorial ANOVA with Fisher's protected least significant difference test. Patients received in succession 50, 100, 200, and 300 mg Pycnogenol in intervals of 3 weeks. Every 3 weeks, fasting and postprandial glucose, endothelin-1, HbA1c, and insulin were analyzed. No changes were observed in vital signs, electrocardiogram, or blood pressure over the 12-week period. Fasting blood glucose was lowered dose-dependently until a dose of 200 mg Pycnogenol was administered. Increasing the dose from 200 to 300 mg did not further decrease blood glucose. Compared with baseline, 100-300 mg lowered fasting glucose significantly from 8.64 ± 0.93 to 7.54 ± 1.64 mmol/l (P < 0.05). Fifty milligrams of Pycnogenol lowered postprandial glucose significantly from 12.47 ± 1.06 to 11.16 ± 2.11 mmol/l (P < 0.05). Maximum decrease of postprandial glucose was observed with 200 mg, to 10.07 ± 2.69 mmol/l; 300 mg had no stronger effect. HbA1c levels decreased continuously from 8.02 ± 1.04 to 7.37 ± 1.09%. …
Cardiorespiratory fitness changes in patients receiving comprehensive outpatient cardiac rehabilitation in the UK: a multicentre study.
Background Exercise training is a key component of cardiac rehabilitation but there is a discrepancy between the high volume of exercise prescribed in trials comprising the evidence base and the lower volume prescribed to UK patients. Objective To quantify prescribed exercise volume and changes in cardiorespiratory fitness in UK cardiac rehabilitation patients. Methods We accessed n=950 patients who completed cardiac rehabilitation at four UK centres and extracted clinical data and details of cardiorespiratory fitness testing pre- and post-rehabilitation. We calculated mean and effect size (d) for change in fitness at each centre and converted values to metabolic equivalents (METs). We calculated a fixed-effects estimate of change in fitness expressed as METs and d. Results Patients completed 6 to 16 (median 8) supervised exercise sessions. Effect sizes for changes in fitness were d=0.34-0.99 in test-specific raw units and d=0.34-0.96 expressed as METs. The pooled fixed-effect estimate for change in fitness was 0.52 METs (95% CI 0.51 to 0.53), or an effect size of d=0.59 (95% CI 0.58 to 0.60). Conclusion Gains in fitness varied by centre and fitness assessment protocol, but the overall increase in fitness (0.52 METs) was only a third of the mean estimate reported in a recent systematic review (1.55 METs). The starkest difference between clinical practice in the UK centres we sampled and the trials which comprise the evidence base for cardiac rehabilitation was the small volume of exercise completed by UK patients. The exercise training volume prescribed was also only a third of that reported in most international studies. If representative of UK services, these low training volumes and small increases in cardiorespiratory fitness may partially explain the reported inefficacy of UK cardiac rehabilitation in reducing patient mortality and morbidity.
Community Detection and Mining in Social Media
Chapter 1, on Social Media and Social Computing, documents the nature and characteristics of social networks and community detection. The emergence of social networks and their properties constitutes this chapter, followed by a discussion of social communities. The nodes, ties, and influence in social networks are the core of the discussion in the second chapter. Centrality is the central topic here, and degree centrality and its measurement are explained. Understanding network topology is required for grasping social network concepts.
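A simple sketch of degree centrality, the measure highlighted above: a node's degree divided by the maximum possible degree n - 1. The adjacency representation is an illustrative choice.

```python
def degree_centrality(adj):
    # adj: dict mapping node -> set of neighbours.
    n = len(adj)
    return {v: len(nbrs) / max(n - 1, 1) for v, nbrs in adj.items()}

print(degree_centrality({'a': {'b', 'c'}, 'b': {'a'}, 'c': {'a'}}))
# {'a': 1.0, 'b': 0.5, 'c': 0.5}
```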
Plasma homocysteine predicts mortality independently of traditional risk factors and C-reactive protein in patients with angiographically defined coronary artery disease.
BACKGROUND Plasma homocysteine (tHCY) has been associated with coronary artery disease (CAD). We tested whether tHCY also increases secondary risk, after initial CAD diagnosis, and whether it is independent of traditional risk factors, C-reactive protein (CRP), and methylenetetrahydrofolate reductase (MTHFR) genotype. METHODS AND RESULTS Blood samples were collected from 1412 patients with severe angiographically defined CAD (stenosis ≥70%). Plasma tHCY was measured by fluorescence polarization immunoassay. The study cohort was evaluated for survival after a mean of 3.0±1.0 years of follow-up (minimum 1.5 years, maximum 5.0 years). The average age of the patients was 65±11 years, 77% were males, and 166 died during follow-up. Mortality was greater in patients with tHCY in tertile 3 than in tertiles 1 and 2 (mortality 15.7% versus 9.6%, P=0.001 [log-rank test], hazard ratio [HR] 1.63). The relative hazard increased 16% for each 5-micromol/L increase in tHCY (P<0.001). In multivariate Cox regression analysis, controlling for univariate clinical and laboratory predictors, elevated tHCY remained predictive of mortality (HR 1.64, P=0.009), together with age (HR 1.72 per 10-year increment, P<0.0001), ejection fraction (HR 0.84 per 10% increment, P=0.0001), diabetes (HR 1.98, P=0.001), CRP (HR 1.42 per tertile, P=0.004), and hyperlipidemia. Homozygosity for the MTHFR variant was weakly predictive of tHCY levels but not mortality. CONCLUSIONS In patients with angiographically defined CAD, tHCY is a significant predictor of mortality, independent of traditional risk factors, CRP, and MTHFR genotype. These findings increase interest in tHCY as a secondary risk marker and in secondary prevention trials (ie, with folate/B vitamins) to determine whether reduction in tHCY will reduce risk.
MAPGEN: Mixed-Initiative Planning and Scheduling for the Mars Exploration Rover Mission
The Mars Exploration Rover (MER) mission seeks to elucidate the planet's past climate, water activity, and habitability. Science is MER's primary driver, so making best use of the scientific instruments, within the available resources, is a crucial aspect of the mission. To address this criticality, the MER project team selected MAPGEN (Mixed Initiative Activity Plan Generator) as an activity-planning tool. MAPGEN combines two existing systems, each with a strong heritage: the APGEN activity-planning tool [1] from the Jet Propulsion Laboratory and the Europa planning and scheduling system [2] from NASA Ames Research Center. This article discusses the issues arising from combining these tools in this mission's context.
An in-pipe robot with underactuated parallelogram crawler modules
In this paper, we present a new in-pipe robot with independent underactuated parallelogram crawler modules, which can automatically overcome inner obstacles in pipes. The parallelogram crawler modules are adopted to maintain the anterior-posterior symmetry of forward and backward movements, and a simple differential mechanism based on a pair of spur gears is installed to provide underactuation. A central base unit connects each crawler module through foldable pantograph mechanisms. To verify the basic behavior of this robot, preliminary experiments were conducted in pipes of different diameters and over partial steps.
Antibacterial activities of some plant extracts used in Indian traditional folk medicine
Resistance of microbes to available antimicrobial agents is a major global public health problem. Infective diseases account for approximately one half of all deaths in the tropics. Plants have been an integral part of human society since the beginning of civilization. India is rich in its plant diversity. A number of plants have been documented for their medicinal potential, and these are used by traditional healers and herbal folklorists in the Indian systems of medicine, namely Ayurveda, Unani and Siddha, apart from homeopathy and electropathy. These plant species play major roles in the health care of the nation's population. Different national and international pharmaceutical companies are utilizing such plant-based formulations in the treatment of various diseases and disorders around the world[1-7]. Cestrum diurnum (C. diurnum) L. (Solanaceae) is a shrub also known as day jasmine. Several applications of the plant have been well documented in the literature, and the toxicity of the species to humans and livestock has been frequently reported[8,9]. The leaves contain a calcinogenic glycoside called 1,25-dihydroxycholecalciferol that leads to vitamin D toxicity, elevated serum Ca and deposition of calcium in soft tissues[10]. Ocimum sanctum (O. sanctum) L. (Lamiaceae), commonly known as tulsi, is a tropical, much-branched, annual herb. Apart from its religious significance, it also has substantial medicinal value and is used in Ayurvedic treatment. Tulsi is reported to be anti-inflammatory due to the presence of eugenol oil in its leaves. It is useful in curing respiratory tract infections. The ursolic acid present in tulsi has anti-allergic properties. The plant can play a role in the management of immunological disorders such as allergies and asthma. The juice of the leaves is used against fever and as an antidote for snake and scorpion bites. Its antispasmodic properties can relieve abdominal pains and help in lowering the blood sugar level[11]. Carica papaya (C. papaya) L. (Caricaceae), commonly known as papaya, is a small, soft-wooded, fast-growing, short-lived laticiferous tree up to 8.0 m in height, with a straight cylindric stem bearing characteristic leaf scars throughout and a tuft of leaves at the top; the leaves are deeply lobed and palm-like, with characteristically long petioles.
Cultural differences in the online behavior of consumers
Understanding how different cultures use the Net, as well as how they perceive the same Web sites, can translate to truly global e-commerce.
Alternatives to Incomplete Colonoscopy.
A thorough and complete colonoscopy is critically important in preventing colorectal cancer. Factors associated with difficult and incomplete colonoscopy include a poor bowel preparation, severe diverticulosis, redundant colon, looping, adhesions, young and female patients, patient discomfort, and the expertise of the endoscopist. For difficult colonoscopy, focusing on bowel preparation techniques, appropriate sedation and adjunct techniques such as water immersion, abdominal pressure techniques, and patient positioning can overcome many of these challenges. Occasionally, these fail and other alternatives to incomplete colonoscopy have to be considered. If patients have low risk of polyps, then noninvasive imaging options such as computed tomography (CT) or magnetic resonance (MR) colonography can be considered. Novel applications such as Colon Capsule™ and Check-Cap are also emerging. In patients in whom a clinically significant lesion is noted on a noninvasive imaging test or if they are at a higher risk of having polyps, balloon-assisted colonoscopy can be performed with either a single- or double-balloon enteroscope or colonoscope. The application of these techniques enables complete colonoscopic examination in the vast majority of patients.
Linear lesion cryoablation for the treatment of atrioventricular nodal re-entry tachycardia in pediatrics and young adults.
BACKGROUND Radiofrequency (RF) ablation is a relatively safe and effective method for treatment of atrioventricular nodal re-entry tachycardia (AVNRT), but carries a 1-2% risk of AV nodal injury. Cryothermal ablation reduces the risk of AV block, but has had decreased procedural success and increased recurrence of tachycardia. We sought to evaluate the technique of linear lesion cryoablation (LLC) for treatment of AVNRT. METHODS Single institution retrospective cohort study. Each patient underwent slow pathway modification using either RF, single lesion cryoablation, or LLC. Procedural success, recurrence, freedom from tachycardia 12 months following ablation and fluoroscopy time were compared between ablation methods. RESULTS A total of 125 patients, median age 15.5 (4.7-23.1) years, underwent ablation: 32 RF energy, 31 single lesion cryoablation, 62 LLC. Procedural success was obtained in 94% of the LLC group compared to 58% using single lesion cryoablation (P ≤ 0.001). Ninety-seven percent of the LLC group was free from tachycardia recurrence, significantly higher than with single lesion cryoablation (68%, P = 0.001) and equal to that of RF (97%, P = NS). Fluoroscopy time was reduced in the LLC group compared to both single lesion and RF groups (P = 0.02). There was no permanent AV nodal injury in the cryoablation groups. CONCLUSION LLC is an effective means of treatment for AVNRT and is associated with significantly improved procedural success and freedom from recurrence compared to single lesion methods, while at the same time obtaining equivalent efficacy to RF.
GSMem: Data Exfiltration from Air-Gapped Computers over GSM Frequencies
Air-gapped networks are isolated, separated both logically and physically from public networks. Although the feasibility of invading such systems has been demonstrated in recent years, exfiltration of data from air-gapped networks is still a challenging task. In this paper we present GSMem, a malware that can exfiltrate data through an air-gap over cellular frequencies. Rogue software on an infected target computer modulates and transmits electromagnetic signals at cellular frequencies by invoking specific memory-related instructions and utilizing the multichannel memory architecture to amplify the transmission. Furthermore, we show that the transmitted signals can be received and demodulated by a rootkit placed in the baseband firmware of a nearby cellular phone. We present crucial design issues such as signal generation and reception, data modulation, and transmission detection. We implement a prototype of GSMem consisting of a transmitter and a receiver and evaluate its performance and limitations. Our current results demonstrate its efficacy and feasibility, achieving an effective transmission distance of 1–5.5 meters with a standard mobile phone. When using a dedicated, yet affordable hardware receiver, the effective distance reached over 30 meters.
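As a loose illustration of the transmitter side only, the modulation idea can be sketched as binary on-off keying, where a '1' bit is a burst of memory-intensive operations and a '0' bit is idle time. This is not the authors' implementation (GSMem relies on specific SIMD memory instructions and the multichannel memory architecture); the buffer size and bit period below are hypothetical.

```python
# Illustrative on-off keying via bursts of memory traffic. Hypothetical
# buffer size and bit period; GSMem itself uses SIMD memory instructions
# and multichannel memory to emit at cellular frequencies.
import time
import numpy as np

BIT_PERIOD = 0.5                 # seconds per bit (invented)
buf = np.zeros(8 * 1024 * 1024)  # 64 MB of doubles

def send_bit(bit):
    if bit:
        end = time.time() + BIT_PERIOD
        while time.time() < end:
            buf += 1.0           # sustained memory traffic: 'carrier on'
    else:
        time.sleep(BIT_PERIOD)   # idle: 'carrier off'

def send_byte(byte):
    for i in range(7, -1, -1):   # most significant bit first
        send_bit((byte >> i) & 1)

for ch in b"HI":
    send_byte(ch)
```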
Distributed Control Applications: Guidelines, Design Patterns, and Application Examples with the IEC 61499 [Book News]
This is an edited book with 21 chapters contributed by those who created the MDA discipline. The book is divided into four parts: IEC 61499 Basics; Design Guidelines and Application Development; Industrial Application Examples; and, finally, Laboratory Automation Examples. The editors did a great job with this volume, which may open the eyes of many who, until recently, doubted the usefulness of IEC 61499 reference model-based design. It is a must for anyone interested or involved in engineering design.
A Single-Phase Photovoltaic Inverter Topology With a Series-Connected Energy Buffer
Module integrated converters (MICs) have been under rapid development for single-phase grid-tied photovoltaic applications. The capacitive energy storage implementation for the double-line-frequency power variation represents a differentiating factor among existing designs. This paper introduces a new topology that places the energy storage block in a series-connected path with the line interface block. This design provides independent control over the capacitor voltage, soft-switching for all semiconductor devices, and the full four-quadrant operation with the grid. The proposed approach is analyzed and experimentally demonstrated.
Characterization of meticillin-resistant Staphylococcus aureus isolates from hospitals in KwaZulu-Natal province, Republic of South Africa.
Epidemiological data based on phenotypic and molecular characterization of meticillin-resistant Staphylococcus aureus (MRSA) in sub-Saharan Africa are limited. This investigation studied 61 MRSA isolates obtained from 13 health-care institutions in KwaZulu-Natal (KZN) province, South Africa, from March 2001 to August 2003. More than 80 % of the isolates were resistant to at least four classes of antibiotics and six isolates were resistant to the aminoglycoside, macrolide-lincosamide and tetracycline groups of antibiotics, heavy metals and nucleic acid-binding compounds. PFGE of SmaI-digested genomic DNA revealed seven types, designated A-G. Type A was the main pulsotype (62.3 %) and was identified in 11 of the 13 health-care institutions, suggesting that it represented a major clone in health-care institutions in KZN province. Analysis of representative members of the three major pulsotypes by spa, multilocus sequence typing and SCCmec typing revealed the types t064-ST1173-SCCmec IV and t064-ST1338-SCCmec IV (PFGE type A, single-locus and double-locus variants of ST8), t037-ST239-SCCmec III (PFGE type F) and t045-ST5-SCCmec III (PFGE type G). The combination of various typing methods provided useful information on the geographical dissemination of MRSA clones in health-care institutions in KZN province. The observation of major clones circulating in health-care facilities in KZN province indicates that adequate infection control measures are urgently needed.
Pitfalls in pediatric radiology
This essay depicts some of the diagnostic errors identified in a large academic pediatric imaging department during a 13-year period. Our aim is to illustrate potential situations in which errors are more likely to occur and more likely to cause harm, and to share our difficult cases so other radiologists might learn without having to experience those situations themselves.
CodeMend: Assisting Interactive Programming with Bimodal Embedding
Software APIs often contain too many methods and parameters for developers to memorize or navigate effectively. Instead, developers resort to finding answers through online search engines and systems such as Stack Overflow. However, the process of finding and integrating a working solution is often very time-consuming. Though code search engines have increased in quality, there remain significant language and workflow gaps in meeting end-user needs. Novice and intermediate programmers often lack the language to query and the expertise to transfer found code to their task. To address this problem, we present CodeMend, a system to support the finding and integration of code. CodeMend leverages a neural embedding model to jointly model natural language and code, as mined from large Web and code datasets. We also demonstrate a novel, mixed-initiative interface to support the query and integration steps. Through CodeMend, end-users describe their goal in natural language. The system makes salient the relevant API functions and the lines in the end-user's program that should be changed, as well as proposing the actual change. We demonstrate the utility and accuracy of CodeMend through lab and simulation studies.
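A minimal sketch of the retrieval step behind such a bimodal embedding: queries and API functions live in one vector space and candidates are ranked by cosine similarity. The embeddings below are random stand-ins rather than CodeMend's trained model, and the API names are arbitrary examples.

```python
# Toy bimodal retrieval: rank candidate API functions for a natural-language
# query by cosine similarity in a shared embedding space. Vectors are random
# stand-ins for a trained joint model; API names are arbitrary examples.
import numpy as np

rng = np.random.default_rng(0)
api_names = ["plt.xlabel", "plt.legend", "plt.savefig"]
api_vecs = rng.normal(size=(3, 64))   # stand-in API embeddings

def embed_query(text):
    return rng.normal(size=64)        # stand-in query encoder

def rank(query):
    q = embed_query(query)
    sims = api_vecs @ q / (np.linalg.norm(api_vecs, axis=1) * np.linalg.norm(q))
    return [api_names[i] for i in np.argsort(-sims)]

print(rank("label the x axis"))
```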
Colchicine semisynthetics: chemotherapeutics for cancer?
Nitrogen-containing bioactive alkaloids of plant origin play a significant role in human health and medicine. Several semisynthetic antimitotic alkaloids are successful in anticancer drug development. Gloriosa superba biosynthesizes substantial quantities of colchicine, a bioactive molecule for gout treatment. Colchicine also has antimitotic activity, preventing growth of cancer cells by interacting with microtubules, which could lead to the design of better cancer therapeutics. Further, several colchicine semisynthetics are less toxic than colchicine. Research is being conducted on effective, less toxic colchicine semisynthetic formulations with potential drug delivery strategies directly targeting multiple solid cancers. This article reviews the dynamic state of anticancer drug development from colchicine semisynthetics and natural colchicine production and briefly discusses colchicine biosynthesis.
A Phase II trial of autologous stem cell transplantation followed by mini-allogeneic stem cell transplantation for the treatment of multiple myeloma: an analysis of Eastern Cooperative Oncology Group ECOG E4A98 and E1A97.
Conventional allogeneic hematopoietic stem cell transplantation (HSCT) for multiple myeloma is associated with high transplantation-related mortality (TRM). Nonmyeloablative allogeneic transplantation (NST) uses the well-known graft-versus-myeloma (GVM) effect to eradicate minimal residual disease. The Eastern Cooperative Oncology Group conducted a Phase II trial of autologous HSCT followed by NST to provide maximal tumor cytoreduction to allow for a subsequent GVM effect. Patients received melphalan 200 mg/m(2) with autologous HSCT, followed by fludarabine 30 mg/m(2) in 5 daily doses and cyclophosphamide 1 g/m(2) in 2 daily doses with matched sibling donor NST. Graft-versus-host disease (GVHD) prophylaxis included cyclosporine and corticosteroids. The primary endpoints were TRM, graft failure, acute GVHD, progression-free survival (PFS), and overall survival (OS). Thirty-two patients were enrolled into the study; 23 patients completed both transplantations (72%). Best responses post-NST were 7 (30%) complete remission (CR), 11 (48%) partial remission (PR), 2 (9%) no response, and 3 (13%) not evaluable. Acute grade III-IV GVHD was observed in 4 patients (17%), and chronic GVHD was seen in 13 patients (57%; 7 limited, 6 extensive). Chronic GVHD resulted in the following responses: 3 (23%) CR, 1 continuing CR, and 6 (46%) PR. Two patients (8.7%) had early TRM. With a median follow up of 4.6 years, the median PFS was 3.6 years, and the 2-year OS was 78%. Our findings indicate that autologous HSCT followed by NST is feasible, with a low early TRM in a cooperative group setting. The overall response rate was 78%, including 30% CR, similar to other reports for autologous HSCT-NST. Because a plateau in PFS or OS was not observed with this treatment approach even in patients achieving CR, we suggest that future studies use posttransplantation maintenance therapy.
Safety and factors predicting the duration of first and second treatment interruptions guided by CD4+ cell counts in patients with chronic HIV infection.
OBJECTIVES To evaluate the safety of treatment interruption (TI) guided by CD4+ count in HIV-infected patients followed up prospectively. METHODS Patients on HAART with a CD4+ cell count >500 cells/mm3 discontinued therapy with instructions to start therapy again before their CD4+ count dropped below 200 cells/mm3. RESULTS We report data on 112 HIV-infected patients. The median follow-up after starting the first TI was 34.7 months (IQR: 23.1-43.8). The median duration of the first TI was 12 months (IQR: 5.2-25). In the multivariate analysis, the factor which most strongly correlated with the duration of the first TI was the CD4+ cell count at the end of the TI. Among the 34 patients who had completed a second TI, the duration of the two periods of interruption was similar if treatment was recommenced at the end of the first TI at a CD4+ count higher than the nadir count. CONCLUSIONS The strategy of TI is safe if the criteria for restarting therapy are applied correctly. The factor with the greatest influence on the duration of the first TI is the number of CD4+ cells at the end of the TI.
RCS Reduction With a Dual Polarized Self-Complementary Connected Array Antenna
In this paper, a wideband dual polarized self-complementary connected array antenna with low radar cross section (RCS) under normal and oblique incidence is presented. First, an analytical model of the multilayer structure is proposed in order to obtain a fast and reliable predimensioning tool providing an optimized design of the infinite array. The accuracy of this model is demonstrated through comparative simulations with full-wave analysis software. RCS reduction of at least 10 dB compared to a perfectly conducting flat plate has been obtained over an ultrawide bandwidth of nearly 7:1 at normal incidence and 5:1 (3.8 to 19 GHz) at 60° in both polarizations. These performances are confirmed by finite element tearing and interconnecting computations of finite arrays of different sizes. Finally, the realization of a 28 × 28 cell prototype and measurement results are detailed.
Self-regulation: Employing a Generative Adversarial Network to Improve Event Detection
Due to their ability to encode and map semantic information into a high-dimensional latent feature space, neural networks have been successfully used for detecting events to a certain extent. However, such a feature space can be easily contaminated by spurious features inherent in event detection. In this paper, we propose a self-regulated learning approach that utilizes a generative adversarial network to generate spurious features. On this basis, we employ a recurrent network to eliminate the fakes. Detailed experiments on the ACE 2005 and TAC-KBP 2015 corpora show that our proposed method is highly effective and adaptable.
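A toy sketch of the self-regulation idea, under the assumption that it reduces to standard adversarial training: a generator proposes spurious feature vectors and the detector learns to score them down. Random vectors stand in for encoded sentences; the sizes, optimizers, and loss are illustrative, not the paper's configuration.

```python
# Toy adversarial setup: 'gen' fabricates spurious feature vectors and
# the detector 'det' learns to separate them from (stand-in) real event
# features. All dimensions, data, and optimizers are illustrative.
import torch
import torch.nn as nn

dim = 128
gen = nn.Sequential(nn.Linear(32, dim), nn.ReLU(), nn.Linear(dim, dim))
det = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(det.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(16, dim)        # placeholder event features
    fake = gen(torch.randn(16, 32))    # generated spurious features
    # Detector step: push real toward 1, spurious toward 0.
    loss_d = bce(det(real), torch.ones(16, 1)) \
           + bce(det(fake.detach()), torch.zeros(16, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator step: make spurious features harder to reject.
    loss_g = bce(det(gen(torch.randn(16, 32))), torch.ones(16, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```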
Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling
Recent BIO-tagging-based neural semantic role labeling models are very high performing, but assume gold predicates as part of the input and cannot incorporate span-level features. We propose an end-to-end approach for jointly predicting all predicates, argument spans, and the relations between them. The model makes independent decisions about what relationship, if any, holds between every possible word-span pair, and learns contextualized span representations that provide rich, shared input features for each decision. Experiments demonstrate that this approach sets a new state of the art on PropBank SRL without gold predicates.
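One way to picture the model's core decision loop, sketched here with stand-in tensors: enumerate candidate spans, build endpoint-based span representations, and score every word-span pair independently. The random encoder outputs, span width cap, and bilinear scorer are illustrative assumptions, not the paper's exact architecture.

```python
# Stand-in for the joint span-pair factorization: enumerate spans, build
# endpoint-based representations, score every (word, span) pair. The
# encoder outputs are random here; real models use contextualized encoders.
import torch
import torch.nn as nn

T, H = 8, 64                      # sentence length, hidden size
hidden = torch.randn(T, H)        # stand-in for encoder outputs
span_ffnn = nn.Linear(2 * H, H)   # span rep from [start; end] endpoints
scorer = nn.Bilinear(H, H, 1)     # relation score for a (word, span) pair

spans, reps = [], []
for i in range(T):
    for j in range(i, min(i + 4, T)):   # cap span width at 4 (illustrative)
        spans.append((i, j))
        reps.append(span_ffnn(torch.cat([hidden[i], hidden[j]])))
span_reps = torch.stack(reps)           # (num_spans, H)

# Independent decision per word-span pair; with L labels the scorer
# would output L+1 scores including a 'no relation' class.
words = hidden.unsqueeze(1).expand(T, len(spans), H).reshape(-1, H)
pairs = span_reps.unsqueeze(0).expand(T, len(spans), H).reshape(-1, H)
scores = scorer(words, pairs).view(T, len(spans))
print(scores.shape)                     # torch.Size([8, 26])
```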
Dilated Residual Networks
Convolutional networks for image classification progressively reduce resolution until the image is represented by tiny feature maps in which the spatial structure of the scene is no longer discernible. Such loss of spatial acuity can limit image classification accuracy and complicate the transfer of the model to downstream applications that require detailed scene understanding. These problems can be alleviated by dilation, which increases the resolution of output feature maps without reducing the receptive field of individual neurons. We show that dilated residual networks (DRNs) outperform their non-dilated counterparts in image classification without increasing the model’s depth or complexity. We then study gridding artifacts introduced by dilation, develop an approach to removing these artifacts (‘degridding’), and show that this further increases the performance of DRNs. In addition, we show that the accuracy advantage of DRNs is further magnified in downstream applications such as object localization and semantic segmentation.
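The dilation mechanism itself is compact: a convolution with stride replaced by dilation keeps the feature-map resolution while preserving receptive-field growth. A minimal PyTorch comparison follows; the channel counts are illustrative, not the DRN architecture.

```python
# Swapping stride for dilation: same receptive-field growth for a 3x3
# kernel, but the dilated variant keeps the spatial resolution.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)

downsampling = nn.Conv2d(64, 128, 3, stride=2, padding=1)
dilated = nn.Conv2d(64, 128, 3, stride=1, padding=2, dilation=2)

print(downsampling(x).shape)  # torch.Size([1, 128, 28, 28]): resolution halved
print(dilated(x).shape)       # torch.Size([1, 128, 56, 56]): resolution kept
```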
The Logic of Animal Conflict
Conflicts between animals of the same species usually are of “limited war” type, not causing serious injury. This is often explained as due to group or species selection for behaviour benefiting the species rather than individuals. Game theory and computer simulation analyses show, however, that a “limited war” strategy benefits individual animals as well as the species.
Chemotherapy for mobilisation of Ph-negative progenitor cells from patients with CML: impact of different mobilisation regimens
Mobilised peripheral blood stem cells are widely used for autografting in patients with chronic myeloid leukaemia (CML) and it is generally thought that a high proportion of Ph-negative progenitor cells in the graft is desirable. We report here the results of 91 stem cell mobilisations performed with various chemotherapy regimens followed by G-CSF. We show that mobilisation of Ph-negative cells is possible after diagnosis as well as in advanced stages of the disease. The yield of Ph-negative cells is highly dependent on the chemotherapy regimen: while the combination of idarubicin and cytarabine for 3-5 days (IC3-5) mobilised Ph-negative cells in most patients, high-dose cyclophosphamide was ineffective. Mobilisation of Ph-negative progenitor cells after IC3 was at least as effective as after IC5; however, fewer apheresis sessions were required, and toxicity was much reduced after IC3. Compared to historical controls, IC was equally as effective as the widely used ICE/miniICE (idarubicin, cytarabine, etoposide) protocol. No correlation was found between graft quality and the cytogenetic response to subsequent treatment with interferon-α. We conclude that IC3 is an effective and well-tolerated regimen for mobilising Ph-negative cells that compares well with more aggressive approaches such as IC5 and ICE/miniICE.
Molecular simulations and critical pH studies for the interactions between 2-phosphonobutane-1,2,4-tricarboxylic acid and calcite surfaces in circular cooling water systems
The adsorption of 2-phosphonobutane-1,2,4-tricarboxylic acid (PBTC) on the calcite surfaces (), (1 0 2), (1 0 4), (1 1 3) and (2 0 2) was studied by molecular simulation. The phosphoric acid and carboxylic acid functional groups energetically interacted with the faces and preferentially occupied the carbonate ion sites by chemisorption, which is in agreement with the critical pH experiments. The strength of adsorption followed the order () > (1 1 3) > (1 0 2) > (2 0 2) > (1 0 4). The binding energy gradually decreased with increasing temperature. The relationship between the critical pH and the adsorbed PBTC2− configuration indicates that the adsorbed inhibitor configuration plays an important role in inhibitor efficiency.
Systems, experts, and computers: the systems approach in management and engineering, World War II and after [Review]
The Reviews Department includes reviews of publications, films, audio and videotapes, and exhibits relating to the history of computing. Full length studies of technical, economic, business, social and institutional aspects or other works of interest to Annals readers are briefly noted, with appropriate bibliographic information. Colleagues are encouraged to recommend works they wish to review and to suggest titles to the Reviews Editor.
MODAClouds: A model-driven approach for the design and execution of applications on multiple Clouds
Cloud computing is emerging as a major trend in the ICT industry. While most of the attention of the research community is focused on the perspective of Cloud providers, offering mechanisms to support scaling of resources and interoperability and federation between Clouds, the perspective of developers and operators willing to choose the Cloud without being strictly bound to a specific solution is mostly neglected. We argue that Model-Driven Development can be helpful in this context, as it would allow developers to design software systems in a cloud-agnostic way and to be supported by model transformation techniques in the process of instantiating the system onto specific, and possibly multiple, Clouds. The MODAClouds (MOdel-Driven Approach for the design and execution of applications on multiple Clouds) approach we present here is based on these principles and aims at supporting system developers and operators in exploiting multiple Clouds for the same system and in migrating (part of) their systems from Cloud to Cloud as needed. MODAClouds offers a quality-driven design, development and operation method and features a Decision Support System to enable risk analysis for the selection of Cloud providers and for the evaluation of the Cloud adoption impact on internal business processes. Furthermore, MODAClouds offers a run-time environment for observing the system under execution and for enabling a feedback loop with the design environment. This allows system developers to react to performance fluctuations and to re-deploy applications on different Clouds in the long term.
Adaptive One-Class Support Vector Machine
In this correspondence, we derive an online adaptive one-class support vector machine. The machine structure is updated via growing and pruning mechanisms and the weights are updated using structural risk minimization principles underlying support vector machines. Our approach leads to very compact machines compared to other online kernel methods whose size, unless truncated, grows almost linearly with the number of observed patterns. The proposed method is online in the sense that every pattern is only presented once to the machine and there is no need to store past samples and adaptive in the sense that it can forget past input patterns and adapt to the new characteristics of the incoming data. Thus, the characterizing properties of our algorithm are compactness, adaptiveness and real-time processing capabilities, making it especially well-suited to solve online novelty detection problems. Regarding algorithm performance, we have carried out experiments in a time series segmentation problem, obtaining favorable results in both accuracy and model complexity with respect to two existing state-of-the-art methods.
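A rough analogue of this setting is available in scikit-learn's online one-class SVM, which also sees each pattern only once via partial_fit; the paper's growing-and-pruning kernel machine differs, so the following is only a sketch of online novelty detection on a drifting stream, with made-up data.

```python
# Online novelty detection sketch with scikit-learn's SGDOneClassSVM:
# each pattern is presented once, scored, then used to adapt the model.
import numpy as np
from sklearn.linear_model import SGDOneClassSVM

rng = np.random.default_rng(0)
model = SGDOneClassSVM(nu=0.05, random_state=0)
flags = []

for t in range(1000):                    # simulated data stream
    x = rng.normal(size=(1, 2))
    if t >= 500:                         # distribution shift halfway through
        x += 3.0
    if t > 10:                           # score once the model is warmed up
        flags.append(model.predict(x)[0] == -1)  # -1 flags a novelty
    model.partial_fit(x)                 # single pass, then adapt

print(sum(flags[480:520]), "novelties flagged around the shift")
```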
Frequency control and wind turbine technologies
Increasing levels of wind generation have resulted in an urgent need to assess their impact on the frequency control of power systems. Whereas increased system inertia is intrinsically linked to the addition of synchronous generation to power systems, this inherent link is not present in wind turbine generators, owing to their differing electromechanical characteristics. Regardless of wind turbine technology, the displacement of conventional generation with wind will result in increased rates of change of system frequency. The magnitude of the frequency excursion following a loss of generation may also increase. Amendment of reserve policies or modification of wind turbine inertial response characteristics may be necessary to facilitate increased levels of wind generation. This is particularly true in small isolated power systems.
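As a back-of-envelope illustration of why displaced inertia raises the rate of change of frequency, the classical swing equation gives the initial post-event ROCOF as df/dt = -ΔP·f0/(2H), with ΔP in per unit on the system base and H the aggregate inertia constant; the numbers below are invented for a small island system, not taken from the paper.

```python
# Swing-equation estimate of the initial rate of change of frequency
# after losing p_lost MW of generation; all numbers are invented.
f0 = 50.0        # nominal frequency, Hz
s_base = 5000.0  # system base, MVA
p_lost = 400.0   # generation lost, MW

for h in (6.0, 4.0, 2.0):   # displacing synchronous plant lowers H
    rocof = -(p_lost / s_base) * f0 / (2.0 * h)
    print(f"H = {h:.0f} s -> initial ROCOF = {rocof:.3f} Hz/s")
```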
Troglitazone decreases serum uric acid concentrations in Type II diabetic patients and non-diabetics
Dear Sir, Hyperuricaemia is commonly seen in association with obesity, impaired glucose tolerance, hypertension and dyslipidaemia [1, 2], which are the central characteristics of the insulin resistance syndrome. The association between isolated hyperuricaemia and insulin resistance is, however, still not defined [3]. To assess whether or not hyperuricaemia is an expression of insulin resistance, it seems reasonable to measure alterations in serum uric acid concentrations during treatment with an insulin sensitising agent in patients with Type II (non-insulin-dependent) diabetes mellitus. With this aim, we recruited 95 (61 men) Type II diabetic patients who were given 200 mg of troglitazone twice a day and met the following criteria: no use of drugs that affect uric acid metabolism, no increase in serum creatinine concentration (< 115 µmol/l) and availability of paired determinations of serum uric acid concentrations before and after troglitazone treatment. Their age (mean ± SD) was 61.1 ± 10.3 years and BMI was 25.4 ± 3.0. All medications other than troglitazone (if any) were unchanged throughout the study period of 9.7 ± 7.6 months. All available data from 6 months before troglitazone and all data after it were combined respectively and used for statistical analysis by Student's paired t-test. Serum uric acid concentrations decreased significantly after treatment with troglitazone from a mean baseline value of 327 ± 71 µmol/l to 298 ± 71 µmol/l (p < 0.0001): 351 ± 71 to 321 ± 65 in men, 286 ± 54 to 244 ± 48 in women (both p < 0.0001) (Fig. 1). This decrease occurred concomitantly with those in HbA1c values (8.4 ± 1.3 to 7.8 ± 1.3 %, p < 0.0001) and triglyceride concentrations (2.2 ± 1.3 to 1.9 ± 1.2 mmol/l, p < 0.01) and, in a subset of patients (n = 34), in parallel with fasting plasma glucose (10.5 ± 2.1 to 8.3 ± 2.4 mmol/l, p < 0.0001) and insulin concentrations (52.2 ± 22.2 to 34.8 ± 15.0 pmol/l, p < 0.0001) and therefore also HOMA-IR, a simple measure of insulin resistance obtained by fasting plasma concentration of glucose (mmol/l) × insulin (mU/ml)/22.5 (4.3 ± 2.4 to 2.2 ± 1.2, p < 0.0001). Furthermore, in six non-diabetic subjects (two healthy volunteers and four hyperuricaemic patients) who consented to participate in a troglitazone trial (200 mg/day for 4 weeks), serum uric acid decreased similarly (375 ± 60 to 298 ± 30 µmol/l, p < 0.02). A 24-h urinary excretion of uric acid (3.8 ± 1.1 to 3.3 ± 0.8 mmol/day), uric acid clearance (9.0 ± 4.0 to 8.3 ± 3.3 ml/min), and the ratio of uric acid clearance to that of creatinine (10.0 ± 4.3 to 11.0 ± 5.0 %) determined in 12 Type II diabetic patients did not differ significantly between pre-treatment and post-treatment. It is therefore difficult to explain the decrease in serum uric acid concentrations given previous reports that hyperinsulinaemia reduces renal clearance of uric acid through coupling with increased sodium reabsorption at proximal tubules [4-7]. Haemodilution seems unlikely, because haematocrit values did not alter greatly with the treatment (data not shown). In addition, improvement of glycaemic control is not the cause because urinary excretion of uric acid is facilitated by hyperglycaemia itself and vice versa.
Although the mechanism is not clear, we speculate that troglitazone could have some effect on uric acid production, because increased purine biosynthesis and turnover, with its attendant increase in serum uric acid concentrations due to increased activity of the hexose monophosphate shunt, can be conceptually linked to insulin resistance or hyperinsulinaemia or both. Our results vary from another study [8] which showed no statistically significant alteration in serum uric acid concentrations or in serum triglyceride, total cholesterol, HDL-cholesterol and plasminogen activator inhibitor-1, conventional variables of insulin resistance, after troglitazone treatment (200 mg/day) in 25 Type II diabetic patients for 8 weeks using a randomised double-blind procedure. The reason for this disparity is not clear but is presumably due to differences in the dosage used in the two studies: theirs was half of ours. It is not clear from this in vivo study whether the hypouricaemic effect of troglitazone is a direct pharmacological action or an indirect one secondary to reduction of insulin resistance. We favour the latter, however, because biochemical variables for insulin resistance changed concomitantly with serum uric acid concentrations in our study, whereas the lack of changes in these variables was not accompanied by alterations in serum uric acid concentrations in the other study [8]. Such an effect has not been reported for the other thiazolidinediones, pioglitazone and rosiglitazone. We showed an antihyperuricaemic effect of troglitazone through a mechanism distinct from increased renal clearance.
Fundamentals of Digital Forensic Evidence
Digital forensic evidence consists of exhibits, each consisting of a sequence of bits, presented by witnesses in a legal matter, to help jurors establish the facts of the case and support or refute legal theories of the case. The exhibits should be introduced and presented and/or challenged by properly qualified people using a properly applied methodology that addresses the legal theories at issue. The tie between technical issues associated with the digital forensic evidence and the legal theories is the job of expert witnesses.
Nonlinear model predictive control for aerial manipulation
This paper presents a nonlinear model predictive controller to follow desired 3D trajectories with the end effector of an unmanned aerial manipulator (i.e., a multirotor with a serial arm attached). To the knowledge of the authors, this is the first time that such a controller runs online and on board a limited computational unit to drive a kinematically augmented aerial vehicle. Besides the trajectory-following target, we explore the possibility of accomplishing other tasks during flight by taking advantage of the system redundancy. We define several tasks designed for aerial manipulators and show in simulation case studies how they can be achieved either by a weighting strategy, within a main optimization process, or by a hierarchical approach consisting of nested optimizations. Moreover, experiments are presented to demonstrate the performance of such a controller on a real robot.
Evaluation of ten oral fluid point-of-collection drug-testing devices.
Previously, the laboratory evaluations of six point-of-collection oral fluid (POC-OF) drug testing devices were reported. Four additional devices, Oralstat (American Bio Medica); SmartClip (Envitec); Impact (LifePoint); and OraLine IV s.a.t (Sun Biomedical Laboratories), were recently evaluated for their ability to meet the claimed (and proposed) cutoff concentrations set by the manufacturers for the detection of amphetamine(s), cocaine/metabolite, opiates, and cannabinoids (Oralstat also benzodiazepines). With the exception of the Sun Biomedical device, actual false-positive results were not encountered. Most devices performed well for the detection of opiates and amphetamine(s), but approximately half had amphetamine(s) cutoff concentrations greater than that proposed by the Substance Abuse and Mental Health Services Administration (SAMHSA). Only three devices had cocaine cutoffs less than or equal to 20 ng/mL (SAMHSA), and a number of false-negative results were obtained. The devices still were not capable of detecting Delta(9)-tetrahydrocannabinol at 4 ng/mL (SAMHSA). However, sensitivities improved since the initial studies, and approximately half of the devices met the THC-COOH cutoff proposed by SAMHSA. Results from the current and previous evaluations are presented in the paper and indicate that the sensitivity and performance of commercial OF drug testing devices is improving, but remains problematic for the reliable detection of cannabinoid use.
uMine: A Blockchain Based on Human Miners
Blockchain technology like Bitcoin is a rapidly growing field of research which has found a wide array of applications. However, the power consumption of the mining process in the Bitcoin blockchain alone is estimated to be at least as high as the electricity consumption of Ireland, which constitutes a serious liability to the widespread adoption of blockchain technology. We propose a novel instantiation of a proof of human-work, which is a cryptographic proof that an amount of human work has been exercised, and show its use in the mining process of a blockchain. Besides our instantiation, only one other instantiation is known, and it relies on indistinguishability obfuscation, a cryptographic primitive whose existence is only conjectured. In contrast, our construction is based on the cryptographic principle of multiparty computation (which we use in a black-box manner) and thus is the first known feasible proof of human-work scheme. Our blockchain mining algorithm, called uMine, can be regarded as an alternative energy-efficient approach to mining.
Why Do We Like the iPhone? The Role of Evaluative Conditioning in Attitude Formation
In this article, we address how attitudes are acquired. We present evaluative conditioning (EC) as an explanation for attitude formation and attitude change. EC refers to changes in liking due to pairings of affectively meaningful and neutral stimuli. We discuss four different theoretical accounts of EC and outline current issues and avenues for future research.
Digital Librarian, Cybrarian, or Librarian with Specialized Skills: Who Will Staff Digital Libraries?
This exploratory study examined 250 online academic librarian employment ads posted during 2000 to determine current requirements for technologically oriented jobs. A content analysis software program was used to categorize the specific skills and characteristics listed in the ads. The results were analyzed using multivariate analysis (cluster analysis and multidimensional scaling). The results, displayed in a three-dimensional concept map, indicate 19 categories comprised of both computer-related skills and behavioral characteristics that can be interpreted along three continua: (1) technical skills to people skills; (2) long-established technologies and behaviors to emerging trends; (3) technical service competencies to public service competencies. There was no identifiable “digital librarian” category.
Convolutional Neural Networks for Steady Flow Approximation
In aerodynamics related design, analysis and optimization problems, flow fields are simulated using computational fluid dynamics (CFD) solvers. However, CFD simulation is usually a computationally expensive, memory demanding and time consuming iterative process. These drawbacks of CFD limit opportunities for design space exploration and forbid interactive design. We propose a general and flexible approximation model for real-time prediction of non-uniform steady laminar flow in a 2D or 3D domain based on convolutional neural networks (CNNs). We explored alternatives for the geometry representation and the network architecture of CNNs. We show that convolutional neural networks can estimate the velocity field two orders of magnitude faster than a GPU-accelerated CFD solver and four orders of magnitude faster than a CPU-based CFD solver at a cost of a low error rate. This approach can provide immediate feedback for real-time design iterations at the early stage of design. Compared with existing approximation models in the aerodynamics domain, CNNs enable an efficient estimation for the entire velocity field. Furthermore, designers and engineers can directly apply the CNN approximation model in their design space exploration algorithms without training extra lower-dimensional surrogate models.
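A minimal sketch of the surrogate idea (layer sizes and the geometry encoding are illustrative assumptions, not the paper's architecture): a convolutional encoder-decoder maps a 2D geometry field, such as a signed distance function, to the two-component steady velocity field, and would be trained by regression against CFD-computed fields.

```python
# Encoder-decoder CNN mapping a geometry field to a steady velocity field.
# Illustrative layer sizes; training would regress against CFD solutions.
import torch
import torch.nn as nn

class FlowCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1))  # (u, v)

    def forward(self, sdf):
        return self.decode(self.encode(sdf))

model = FlowCNN()
sdf = torch.randn(1, 1, 64, 64)  # stand-in signed-distance-field input
uv = model(sdf)                  # predicted velocity field
print(uv.shape)                  # torch.Size([1, 2, 64, 64])
```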
The Challenge of Big Data in Public Health: An Opportunity for Visual Analytics
Public health (PH) data can generally be characterized as big data. The efficient and effective use of this data determines the extent to which PH stakeholders can sufficiently address societal health concerns as they engage in a variety of work activities. As stakeholders interact with data, they engage in various cognitive activities such as analytical reasoning, decision-making, interpreting, and problem solving. Performing these activities with big data is a challenge for the unaided mind as stakeholders encounter obstacles relating to the data's volume, variety, velocity, and veracity. Such being the case, computer-based information tools are needed to support PH stakeholders. Unfortunately, while existing computational tools are beneficial in addressing certain work activities, they fall short in supporting cognitive activities that involve working with large, heterogeneous, and complex bodies of data. This paper presents visual analytics (VA) tools, a nascent category of computational tools that integrate data analytics with interactive visualizations, to facilitate the performance of cognitive activities involving big data. Historically, PH has lagged behind other sectors in embracing new computational technology. In this paper, we discuss the role that VA tools can play in addressing the challenges presented by big data. In doing so, we demonstrate the potential benefit of incorporating VA tools into PH practice, in addition to highlighting the need for further systematic and focused research.
Decentralized decision support for intelligent manufacturing in Industry 4.0
“Industry 4.0” is recognized as the future of industrial production, in which concepts such as the Smart Factory and Decentralized Decision Making are fundamental. This paper proposes a novel strategy to support decentralized decision-making, whilst identifying the opportunities and challenges of Industry 4.0, contextualizing the potential of industrial digitalization and how technological advances can contribute to a new perspective on manufacturing production. A set of barriers to the full implementation of the Industry 4.0 vision is analysed, identifying areas in which decision support is vital. Then, for each of the identified areas, the authors propose a strategy, characterizing it together with the level of complexity involved in the different processes. The strategies proposed are derived from the needs of two of Industry 4.0's main characteristics: horizontal integration and vertical integration. For each case, decision approaches are proposed concerning the type of decision required (strategic, tactical, operational and real-time). Validation results are provided together with a discussion of the main challenges that might be an obstacle to a successful decision strategy.
Simulation-based medical education: time for a pedagogical shift.
The purpose of medical education at all levels is to prepare physicians with the knowledge and comprehensive skills, required to deliver safe and effective patient care. The traditional 'apprentice' learning model in medical education is undergoing a pedagogical shift to a 'simulation-based' learning model. Experiential learning, deliberate practice and the ability to provide immediate feedback are the primary advantages of simulation-based medical education. It is an effective way to develop new skills, identify knowledge gaps, reduce medical errors, and maintain infrequently used clinical skills even among experienced clinical teams, with the overall goal of improving patient care. Although simulation cannot replace clinical exposure as a form of experiential learning, it promotes learning without compromising patient safety. This new paradigm shift is revolutionizing medical education in the Western world. It is time that the developing countries embrace this new pedagogical shift.
BenchNN: On the broad potential application scope of hardware neural network accelerators
Recent technology trends have indicated that, although device sizes will continue to scale as they have in the past, supply voltage scaling has ended. As a result, future chips can no longer rely on simply increasing the operational core count to improve performance without surpassing a reasonable power budget. Alternatively, allocating die area towards accelerators targeting an application, or an application domain, appears quite promising, and this paper makes an argument for a neural network hardware accelerator. After being hyped in the 1990s, then fading away for almost two decades, there is a surge of interest in hardware neural networks because of their energy and fault-tolerance properties. At the same time, the emergence of high-performance applications like Recognition, Mining, and Synthesis (RMS) suggest that the potential application scope of a hardware neural network accelerator would be broad. In this paper, we want to highlight that a hardware neural network accelerator is indeed compatible with many of the emerging high-performance workloads, currently accepted as benchmarks for high-performance micro-architectures. For that purpose, we develop and evaluate software neural network implementations of 5 (out of 12) RMS applications from the PARSEC Benchmark Suite. Our results show that neural network implementations can achieve competitive results, with respect to application-specific quality metrics, on these 5 RMS applications.
Enforcing constraints for interpolation and extrapolation in Generative Adversarial Networks
Generative Adversarial Networks (GANs) are becoming popular choices for unsupervised learning. At the same time there is a concerted effort in the machine learning community to expand the range of tasks in which learning can be applied as well as to utilize methods from other disciplines to accelerate learning. With this in mind, in the current work we suggest ways to enforce given constraints in the output of a GAN, both for interpolation and extrapolation. The two cases need to be treated differently. For the case of interpolation, the incorporation of constraints is built into the training of the GAN. The incorporation of the constraints respects the primary game-theoretic setup of a GAN, so it can be combined with existing algorithms. However, it can exacerbate the problem of instability during training that is well known for GANs. We suggest adding small noise to the constraints as a simple remedy that has performed well in our numerical experiments. The case of extrapolation (prediction) is more involved. First, we employ a modified interpolation training process that uses noisy data but does not necessarily enforce the constraints during training. Second, the resulting modified interpolator is used for extrapolation, where the constraints are enforced after each step through projection onto the space of constraints.
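Both ingredients can be pictured for the simple case of linear equality constraints Ax = b, which is an illustrative assumption; the sketch below shows the noise-perturbed constraint target used during training and the Euclidean projection applied after each extrapolation step.

```python
# Illustrative linear equality constraints A x = b: noise on the target
# during training (interpolation) and projection onto {x : A x = b} after
# each extrapolation step. A, b, and sigma are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 5))
b = rng.normal(size=2)

def noisy_constraint_target(sigma=0.01):
    # Small noise on the constraints, the stability remedy suggested above.
    return b + rng.normal(scale=sigma, size=b.shape)

def project(x):
    # Euclidean projection of x onto the affine set {x : A x = b}.
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

x = rng.normal(size=5)              # stand-in for a generator output
x_proj = project(x)
print(np.allclose(A @ x_proj, b))   # True: constraints now hold exactly
```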
Tear proteins of normal young Hong Kong Chinese
Background: Analysis of tear proteins is of diagnostic value for abnormal ocular conditions such as dry eye syndrome. Many studies of tear proteins have been performed on Caucasian subjects. However, little is known about these proteins in Chinese eyes. Methods: The total tear protein concentrations of 30 normal young Hong Kong Chinese were determined by the Bradford method and the modified Lowry method. Bovine serum albumin (BSA) and bovine immunoglobulin G (IgG) were both used as standards for each method. The tear protein patterns were determined by sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS-PAGE), and the concentrations of major tear proteins were quantified by scanning densitometry after SDS-PAGE. Results: The mean±SD total tear protein concentrations determined by the Bradford method, using BSA and IgG as standards, were 6.05±1.58 mg/ml and 11.48± 2.32 mg/ml respectively. The values determined by the modified Lowry method, using the same two standards, were 9.66±2.03 mg/ml and 7.53±1.80 mg/ml respectively. The mean±SD concentrations of major tear proteins were 2.73±0.82 mg/ml for lactoferrin, 0.021±0.028 mg/ml for human serum albumin, 2.89± 0.88 mg/ml for tear-specific prealbumin and 2.46±0.44 mg/ml for lysozyme. Conclusion: The results of total tear protein concentrations indicated that values obtained from different methods and different standards were not comparable. The tear protein patterns of our subjects were qualitatively similar to those reported for Caucasian subjects. However, the concentrations of the major proteins of our subjects were not in accordance with those reported previously. The main reason may be the large variability of method used.
A Method for MPPT Control While Searching for Parameters Corresponding to Weather Conditions for PV Generation Systems
This paper describes a method for maximum power point tracking (MPPT) control while searching for optimal parameters corresponding to weather conditions at that time. The conventional method has problems in that it is impossible to quickly acquire the generation power at the maximum power (MP) point in low solar radiation (irradiation) regions. It is found theoretically and experimentally that the maximum output power and the optimal current, which give this maximum, have a linear relation at a constant temperature. Furthermore, it is also shown that linearity exists between the short-circuit current and the optimal current. MPPT control rules are created based on the findings from solar arrays that can respond at high speeds to variations in irradiation. The proposed MPPT control method sets the output current track on the line that gives the relation between the MP and the optimal current so as to acquire the MP that can be generated at that time by dividing the power and current characteristics into two fields. The method is based on the generated power being a binary function of the output current. Considering the experimental fact that linearity is maintained only at low irradiation below half the maximum irradiation, the proportionality coefficient (voltage coefficient) is compensated for only in regions with more than half the rated optimal current, which correspond to the maximum irradiation. At high irradiation, the voltage coefficient needed to perform the proposed MPPT control is acquired through the hill-climbing method. The effectiveness of the proposed method is verified through experiments under various weather conditions
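The hill-climbing component mentioned above is easy to picture in isolation. The sketch below uses a toy concave power-current curve as a stand-in for a PV array, so the model, step size, and starting point are hypothetical and do not reproduce the paper's parameter-search scheme.

```python
# Toy perturb-and-observe (hill-climbing) MPPT on an invented concave
# power-current curve; i_opt and p_max are hypothetical array values.
def pv_power(i, i_opt=5.0, p_max=100.0):
    return max(0.0, p_max - (i - i_opt) ** 2)

def hill_climb_mppt(i=2.0, step=0.2, iters=60):
    p = pv_power(i)
    for _ in range(iters):
        p_next = pv_power(i + step)
        if p_next >= p:
            i, p = i + step, p_next   # still climbing toward the MP point
        else:
            step = -step              # passed the peak: reverse direction
    return i, p

print(hill_climb_mppt())              # settles near i_opt = 5.0
```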
Sex affects immunity.
Sex based differences in immune responses, affecting both the innate and adaptive immune responses, contribute to differences in the pathogenesis of infectious diseases in males and females, the response to viral vaccines and the prevalence of autoimmune diseases. Indeed, females have a lower burden of bacterial, viral and parasitic infections, most evident during their reproductive years. Conversely, females have a higher prevalence of a number of autoimmune diseases, including Sjogren's syndrome, systemic lupus erythematosus (SLE), scleroderma, rheumatoid arthritis (RA) and multiple sclerosis (MS). These observations suggest that gonadal hormones may have a role in this sex differential. The fundamental differences in the immune systems of males and females are attributed not only to differences in sex hormones, but are related to X chromosome gene contributions and the effects of environmental factors. A comprehensive understanding of the role that sex plays in the immune response is required for therapeutic intervention strategies against infections and the development of appropriate and effective therapies for autoimmune diseases for both males and females. This review will focus on the differences between male and female immune responses in terms of innate and adaptive immunity, and the effects of sex hormones in SLE, MS and RA.
Cellulosomes: bacterial nanomachines for dismantling plant polysaccharides
Cellulosomes are multienzyme complexes that are produced by anaerobic cellulolytic bacteria for the degradation of lignocellulosic biomass. They comprise a complex of scaffoldin, which is the structural subunit, and various enzymatic subunits. The intersubunit interactions in these multienzyme complexes are mediated by cohesin and dockerin modules. Cellulosome-producing bacteria have been isolated from a large variety of environments, which reflects their prevalence and the importance of this microbial enzymatic strategy. In a given species, cellulosomes exhibit intrinsic heterogeneity, and between species there is a broad diversity in the composition and configuration of cellulosomes. With the development of modern technologies, such as genomics and proteomics, the full protein content of cellulosomes and their expression levels can now be assessed and the regulatory mechanisms identified. Owing to their highly efficient organization and hydrolytic activity, cellulosomes hold immense potential for application in the degradation of biomass and are the focus of much effort to engineer an ideal microorganism for the conversion of lignocellulose to valuable products, such as biofuels.
Improved license plate detection using HOG-based features and genetic algorithm
In this paper, a new improved plate detection method which uses a genetic algorithm (GA) is proposed. The GA repeatedly scans an input image at random positions using a fixed detection window until a region with the highest evaluation score is obtained. The performance of the genetic algorithm is evaluated based on the coverage of pixels in the input image. It was found that the GA can cover up to 90% of the input image in fewer than 50 iterations on average, using a 30×130 detection window with 20 population members per iteration. Furthermore, the algorithm was tested on a database that contains 1537 car images; more than 98% of the plates in these images were successfully detected.
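A toy version of the GA search conveys the mechanism: individuals are (x, y) positions of a fixed 30×130 window, selection keeps the best-scoring half, and children are formed by crossover and mutation. The fitness function below is a synthetic stand-in for the paper's HOG-based window score.

```python
# Toy GA window search: individuals are window positions; the synthetic
# fitness peaks at a hidden target location standing in for a plate score.
import random

IMG_W, IMG_H, WIN_W, WIN_H = 640, 480, 130, 30
PLATE = (400, 300)                      # hidden target (for the toy score)

def fitness(ind):
    x, y = ind
    return -abs(x - PLATE[0]) - abs(y - PLATE[1])

def mutate(ind):
    return (min(IMG_W - WIN_W, max(0, ind[0] + random.randint(-30, 30))),
            min(IMG_H - WIN_H, max(0, ind[1] + random.randint(-30, 30))))

pop = [(random.randrange(IMG_W - WIN_W), random.randrange(IMG_H - WIN_H))
       for _ in range(20)]              # 20 population members per iteration
for gen in range(50):                   # ~50 iterations, as in the paper
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                  # keep the best half
    children = [(random.choice(parents)[0], random.choice(parents)[1])
                for _ in range(10)]     # crossover on (x, y) coordinates
    pop = parents + [mutate(c) for c in children]

print(max(pop, key=fitness))            # best window position found
```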
Teachers' pedagogical beliefs and their use of digital media in classrooms: Sharpening the focus of the 'will, skill, tool' model and integrating teachers' constructivist orientations
The ‘will, skill, tool’ model is a well-established theoretical framework that elucidates the conditions under which teachers are most likely to employ information and communication technologies (ICT) in the classroom. Past studies have shown that these three factors explain a very high degree of variance in the frequency of classroom ICT use. The present study replicates past findings using a different set of measures and homes in on possible subfactors. Furthermore, the study examines teacher affiliation for constructivist-style teaching, which is often considered to facilitate the pedagogical use of digital media. The study’s survey of 357 Swiss secondary school teachers reveals significant positive correlations between will, skill, and tool variables and the combined frequency and diversity of technology use in teaching. A multiple linear regression model was used to identify relevant subfactors. Five factors account for a total of 60% of the explained variance in the intensity of classroom ICT use. Computer and Internet applications are more often used by teachers in the classroom when: (1) teachers consider themselves to be more competent in using ICT for teaching; (2) more computers are readily available; (3) the teacher is a form teacher and responsible for the class; (4) the teacher is more convinced that computers improve student learning; and (5) the teacher more often employs constructivist forms of teaching and learning. The impact of constructivist teaching was small, however.
NAIS: Neural Attentive Item Similarity Model for Recommendation
Item-to-item collaborative filtering (a.k.a. item-based CF) has long been used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile as her historically interacted items, recommending new items that are similar to the user's profile. As such, the key to an item-based CF method is in the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and the Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively less work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM) [1], our NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. This work is the first attempt to design neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems.
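The NAIS prediction rule can be sketched compactly: the score of a target item is an attention-weighted sum of its interactions with the user's historical items, with a smoothed softmax (exponent beta on the normalizer) to temper long histories. The embedding sizes and attention MLP below are illustrative, not the paper's settings.

```python
# Sketch of NAIS scoring: attention over the user's history items with a
# smoothed softmax (beta exponent on the normalizer). Sizes are illustrative.
import torch
import torch.nn as nn

n_items, k = 1000, 16
P = nn.Embedding(n_items, k)    # history-item embeddings
Q = nn.Embedding(n_items, k)    # target-item embeddings
att = nn.Sequential(nn.Linear(k, k), nn.ReLU(), nn.Linear(k, 1))

def score(history, target, beta=0.5):
    h, q = P(history), Q(target)        # (n, k), (k,)
    logits = att(h * q).squeeze(-1)     # attention logit per history item
    w = logits.exp()
    w = w / w.sum() ** beta             # smoothed softmax normalization
    return ((w.unsqueeze(-1) * h).sum(0) * q).sum()

print(score(torch.tensor([3, 42, 7]), torch.tensor(99)).item())
```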
A Stochastic Approximation Method
Let M(x) denote the expected value at level x of the response to a certain experiment. M(x) is assumed to be a monotone function of x but is unknown to the experimenter, and it is desired to find the solution x = θ of the equation M(x) = α, where α is a given constant. We give a method for making successive experiments at levels x1, x2, ... in such a way that xn will tend to θ in probability.
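A minimal sketch of the Robbins-Monro iteration this abstract describes, with an illustrative response function M and target level α (both assumptions for the example), might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
M = lambda x: 2.0 * x + 1.0        # unknown monotone response function (illustrative)
alpha = 5.0                        # target level; the true root is theta = 2
x = 0.0
for n in range(1, 2001):
    y = M(x) + rng.normal()        # noisy experiment at level x_n
    x += (1.0 / n) * (alpha - y)   # steps a_n = 1/n: sum a_n = inf, sum a_n^2 < inf
print(x)                           # tends to theta = 2 in probability
```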
Compact Crossed-Dipole Antennas Loaded With Near-Field Resonant Parasitic Elements
Two compact planar crossed-dipole antennas loaded with near-field resonant parasitic (NFRP) elements are reported. The NFRP and crossed-dipole elements are designed for the desired circularly polarized (CP) radiation. By placing the NFRP element over the driven element at angles of 0° and 45°, respectively, dual-band and broadband CP antennas are realized. All radiating elements of the antennas are 35 mm × 35 mm × 0.508 mm (0.187 λ0 × 0.187 λ0 × 0.0027 λ0 at 1.6 GHz) in size. The dual-band CP antenna has a measured |S11| < −10 dB bandwidth of 226 MHz (1.473–1.699 GHz) and measured 3-dB axial ratio (AR) bandwidths of 12 MHz (1.530–1.542 GHz) and 35 MHz (1.580–1.615 GHz), with minimum-AR CP frequencies of 1.535 GHz (AR = 0.26 dB) and 1.595 GHz (AR = 2.08 dB), respectively. The broadband CP antenna has a measured |S11| < −10 dB bandwidth of 218 MHz (1.491–1.709 GHz) and a 3-dB AR bandwidth of 145 MHz (1.490–1.635 GHz). These compact antennas yield bidirectional electromagnetic fields with high radiation efficiency across their operational bandwidths.
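As a quick check of the stated electrical size, a 35 mm aperture is indeed about 0.187 free-space wavelengths at 1.6 GHz:

```python
c = 299_792_458.0          # speed of light, m/s
f = 1.6e9                  # design frequency, Hz
lam0 = c / f               # free-space wavelength, ~187.4 mm
print(0.035 / lam0)        # ~0.187
print(0.000508 / lam0)     # ~0.0027
```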
Aligning everyday life priorities with people’s self-management support networks: an exploration of the work and implementation of a needs-led telephone support system
BACKGROUND Recent initiatives to target the personal, social and clinical needs of people with long-term health conditions have had limited impact within primary care. Evidence of the importance of social networks in supporting people with long-term conditions points to the need for self-management approaches which align personal circumstances with valued activities. The Patient-Led Assessment for Network Support (PLANS) intervention is a needs-led assessment that enables patients to prioritise their health and social needs and provides access to local community services and activities. Exploring the work and practices of patients and telephone workers is important for understanding and evaluating the workability and implementation of new interventions. METHODS Qualitative methods (interviews, focus group, observations) were used to explore the experience of PLANS from the perspectives of participants and the telephone support workers who delivered it (as part of an RCT) and the reasons why the intervention worked or not. Normalisation Process Theory (NPT) was used as a sensitising tool to evaluate: the relevance of PLANS to patients (coherence); the processes of engagement (cognitive participation); the work done for PLANS to happen (collective action); and the perceived benefits and costs of PLANS (reflexive monitoring). Twenty patients in the intervention arm of a clinical trial were interviewed, their telephone support calls were recorded, and a focus group with three telephone support workers was conducted. RESULTS Analysis of the interviews, support calls and focus group identified three themes in relation to the delivery and experience of PLANS: the formulation of 'health' in the context of everyday life; trajectories and tipping points: disrupting everyday routines; and precarious trust in networks. The relevance of these themes is considered using NPT constructs in terms of the work entailed in engaging with PLANS, taking action, and who is implicated in this process. CONCLUSIONS PLANS gives scope to align long-term condition management with everyday life priorities and valued aspects of life. This approach can improve engagement with health-relevant practices by situating them within everyday contexts. It has the potential to increase the utilisation of local resources, with potential cost-saving benefits for the NHS. TRIAL REGISTRATION ISRCTN45433299.
Community Detection and Classification in Hierarchical Stochastic Blockmodels
In disciplines as diverse as social network analysis and neuroscience, many large graphs are believed to be composed of loosely connected smaller graph primitives whose structure is more amenable to analysis. We propose a robust, scalable, integrated methodology for community detection and community comparison in graphs. In our procedure, we first embed a graph into an appropriate Euclidean space to obtain a low-dimensional representation and then cluster the vertices into communities. We next employ nonparametric graph inference techniques to identify structural similarity among these communities. These two steps are then applied recursively on the communities, allowing us to detect more fine-grained structure. We describe a hierarchical stochastic blockmodel, namely a stochastic blockmodel with a natural hierarchical structure, and establish conditions under which our algorithm yields consistent estimates of model parameters and motifs, which we define to be stochastically similar groups of subgraphs. Finally, we demonstrate the effectiveness of our algorithm on both simulated and real data. Specifically, we address the problem of locating similar sub-communities in a partially reconstructed Drosophila connectome and in the social network Friendster.
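The first two steps of the procedure (spectral embedding, then clustering) can be sketched as follows on a toy two-block stochastic blockmodel; the recursive refinement and the nonparametric motif comparison are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy 2-block stochastic blockmodel: denser within blocks than between.
B = np.array([[0.5, 0.1], [0.1, 0.5]])
z = np.repeat([0, 1], 50)                  # true block memberships
P = B[np.ix_(z, z)]                        # edge probability matrix
A = (rng.random(P.shape) < P).astype(float)
A = np.triu(A, 1); A = A + A.T             # symmetric, no self-loops

# Adjacency spectral embedding: top-d eigenvectors scaled by |eigenvalue|^1/2.
d = 2
vals, vecs = np.linalg.eigh(A)
idx = np.argsort(np.abs(vals))[::-1][:d]
X = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)                              # recovers the two blocks (up to relabeling)
```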
The importance of gesture in children's spatial reasoning.
On average, men outperform women on mental rotation tasks. Even boys as young as 4 1/2 perform better than girls on simplified spatial transformation tasks. The goal of our study was to explore ways of improving 5-year-olds' performance on a spatial transformation task and to examine the strategies children use to solve this task. We found that boys performed better than girls before training and that both boys and girls improved with training, whether they were given explicit instruction or just practice. Regardless of training condition, the more children gestured about moving the pieces when asked to explain how they solved the spatial transformation task, the better they performed on the task, with boys gesturing about movement significantly more (and performing better) than girls. Gesture thus provides useful information about children's spatial strategies, raising the possibility that gesture training may be particularly effective in improving children's mental rotation skills.
A survey on image-based rendering - representation, sampling and compression
Image-based rendering (IBR) has attracted a lot of research interest recently. In this paper, we survey the various techniques developed for IBR, including representation, sampling and compression. The goal is to provide an overview of research for IBR in a complete and systematic manner. We observe that essentially all the IBR representations are derived from the plenoptic function, which is seven dimensional and difficult to handle. We classify various IBR representations into two categories based on how the plenoptic function is simplified, namely restraining the viewing space and introducing source descriptions. In the former category, we summarize six common assumptions that were often made in various approaches and discuss how the dimension of the plenoptic function can be reduced based on these assumptions. In the latter category, we further categorize the methods based on what kind of source description was introduced, such as scene geometry, texture map or reflection model. Sampling and compression are also discussed respectively for both categories.
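For reference, the seven-dimensional plenoptic function that the survey takes as its starting point records intensity as a function of viewing position, viewing direction, wavelength and time (the parameter ordering varies across sources):

```latex
P = P(V_x, V_y, V_z, \theta, \phi, \lambda, t)
```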
Microstrip patch antenna array at 3.8 GHz for WiMax and UAV applications
This paper presents the design of a rectangular microstrip line-fed patch antenna array with a centre frequency of 3.8 GHz for WiMAX and Unmanned Air Vehicle (UAV) applications. A single element, a 1×2 array and a 2×2 array of microstrip rectangular patch antennas were designed and simulated in the Computer Simulation Technology (CST) Microwave Studio environment. The designed antennas were compared in terms of return loss (S11), bandwidth, directivity, gain and radiation pattern. Compared to traditional microstrip antennas, the proposed array structure achieved a gain of 13.2 dB and a directivity of 13.5 dBi. The antenna was fabricated on a Rogers Duroid RT-5880 substrate with a dielectric constant εr of 2.2 and a thickness of 1.574 mm. The array antennas were measured in the laboratory using a Vector Network Analyser (VNA), and the results show good agreement with the array antenna simulations.
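A single element of such an array can be sized with the standard transmission-line design equations for a rectangular patch. The sketch below evaluates them at the abstract's stated parameters (3.8 GHz, εr = 2.2, h = 1.574 mm); it is a textbook approximation, not a reproduction of the paper's CST-optimized design.

```python
from math import sqrt

c, f0, eps_r, h = 3e8, 3.8e9, 2.2, 1.574e-3

W = c / (2 * f0) * sqrt(2 / (eps_r + 1))                 # patch width
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / (
    (eps_eff - 0.258) * (W / h + 0.8))                   # fringing-field extension
L = c / (2 * f0 * sqrt(eps_eff)) - 2 * dL                # patch length

print(f"W = {W*1e3:.1f} mm, L = {L*1e3:.1f} mm")         # roughly 31 mm x 26 mm
```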
Agile virtualized infrastructure to proactively defend against cyber attacks
DDoS attacks have been a persistent threat to network availability for many years. Most of the existing mitigation techniques attempt to protect against DDoS by filtering out attack traffic. However, because critical network resources are usually static, adversaries are able to bypass filtering by sending stealthy, low-rate traffic from a large number of bots that mimic benign traffic behavior. Sophisticated stealthy attacks on critical links can have devastating effects, such as partitioning domains and networks. In this paper, we propose to defend against DDoS attacks by proactively changing the footprint of critical resources in an unpredictable fashion, invalidating an adversary's knowledge and plan of attack against critical network resources. Our approach employs virtual networks (VNs) to dynamically reallocate network resources using VN placement and performs continual VN migration to new resources. The approach has two components: (1) a correct-by-construction VN migration planner that significantly increases the uncertainty about the critical links of multiple VNs while preserving the VN placement properties, and (2) an efficient VN migration mechanism that identifies the appropriate configuration sequence to enable node migration while maintaining network integrity (e.g., avoiding session disconnection). We formulate and implement this framework using SMT logic. We also demonstrate the effectiveness of our implemented framework in experiments on both PlanetLab and Mininet.
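A toy sketch of the SMT-encoding idea (not the paper's actual formalization) using Z3: assign virtual nodes to distinct physical hosts and require the next placement to differ from the current one, so that the footprint of critical resources keeps moving. Hosts, node count and the current placement are all illustrative.

```python
from z3 import Ints, Solver, Distinct, And, Or, sat

v1, v2, v3 = Ints("v1 v2 v3")      # hosts assigned to three virtual nodes
current = [0, 1, 2]                # hypothetical current placement

s = Solver()
s.add(And(*[And(0 <= v, v <= 3) for v in (v1, v2, v3)]))  # four physical hosts
s.add(Distinct(v1, v2, v3))                               # capacity: one VN per host
s.add(Or(v1 != current[0], v2 != current[1], v3 != current[2]))  # must migrate

if s.check() == sat:
    print(s.model())               # a valid, different placement to migrate to
```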
Verification based ECG biometrics with cardiac irregular conditions using heartbeat level and segment level information fusion
We propose a robust ECG-based human verification system for both healthy and cardiac-irregular conditions that fuses heartbeat-level and segment-level information. At the heartbeat level, we first propose a novel beat normalization and outlier removal algorithm, applied after peak detection, to extract normalized representative beats. After principal component analysis (PCA), we apply linear discriminant analysis (LDA) and within-class covariance normalization (WCCN) for beat-variability compensation, followed by cosine similarity and S-norm for scoring. At the segment level, we adopt the hierarchical Dirichlet process auto-regressive hidden Markov model (HDP-AR-HMM) in the Bayesian nonparametric framework for unsupervised joint segmentation and clustering without any peak detection. It automatically decodes each raw signal into a string vector, to which we apply an n-gram language model and hypothesis testing for scoring. Combining the two subsystems further improves performance and outperforms the PCA baseline by 25% relative on the PTB database.
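The heartbeat-level scoring chain (PCA, then LDA, then cosine similarity) can be sketched as follows on synthetic stand-ins for normalized representative beats; WCCN and S-norm are omitted for brevity, and all sizes are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_subjects, beats_per_subject, dim = 10, 20, 200
labels = np.repeat(np.arange(n_subjects), beats_per_subject)
beats = rng.normal(size=(n_subjects * beats_per_subject, dim))
beats += labels[:, None] * 0.5          # crude subject-dependent offset

# Dimensionality reduction, then supervised variability compensation.
X = PCA(n_components=30).fit_transform(beats)
X = LinearDiscriminantAnalysis(n_components=n_subjects - 1).fit_transform(X, labels)

# Cosine similarity between an enrollment beat and a test beat of subject 0.
enroll, test = X[0], X[1]
cos = enroll @ test / (np.linalg.norm(enroll) * np.linalg.norm(test))
print(cos)
```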
Hypothesis spaces for minimum Bayes risk training in large vocabulary speech recognition
The Minimum Bayes Risk (MBR) framework has been a successful strategy for the training of hidden Markov models for large vocabulary speech recognition. Practical implementations of MBR must select an appropriate hypothesis space and loss function. The set of word sequences together with a word-based Levenshtein distance might be assumed to be the optimal choice, but the use of phoneme-based criteria appears to be more successful. This paper compares the use of different hypothesis spaces and loss functions defined over the system constituents of word, phone, physical triphone, physical state and physical mixture component. For practical reasons, the competing hypotheses are constrained by sampling. The impact of the sampling technique on the performance of MBR training is also examined.
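One common formulation of the MBR training objective, for training utterances r with acoustic observations O_r and reference transcripts W_r, is a posterior-weighted expected loss over a hypothesis space H; the choices compared in the paper amount to different instantiations of H and of the loss L:

```latex
\mathcal{F}_{\mathrm{MBR}}(\lambda)
  = \sum_{r} \sum_{W \in \mathcal{H}} P_{\lambda}(W \mid O_{r}) \, L(W, W_{r})
```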
Fine-grained attention for image caption generation
Despite recent progress, generating natural language descriptions for images remains a challenging task. Most state-of-the-art methods for this problem apply existing deep convolutional neural network (CNN) models to extract a visual representation of the entire image, on top of which the parallel structure between images and sentences is exploited using recurrent neural networks. However, such models have an inherent drawback: they may attend to only a partial view of a visual element, or to a conglomeration of several concepts. In this paper, we present a fine-grained attention model built on a deep recurrent architecture that combines recent advances in computer vision and machine translation. The model contains three sub-networks: a deep recurrent neural network for sentences, a deep convolutional network for images, and a region proposal network for nearly cost-free region proposals. Our model learns to fix its gaze on salient region proposals, and the process of generating the next word, given the previously generated ones, is aligned with this visual perception experience. We validate the effectiveness of the proposed model on three benchmark datasets (Flickr8K, Flickr30K and MS COCO). The experimental results confirm the effectiveness of the proposed system.
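The per-step attention over region proposals can be sketched as follows. This is an illustrative additive-attention forward pass with random stand-in features and weights, not the paper's exact parameterization; in the paper the region features come from a region proposal network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, feat_dim, hid_dim = 10, 512, 256
V = rng.normal(size=(n_regions, feat_dim))     # region-proposal features
h = rng.normal(size=hid_dim)                   # decoder hidden state
Wv = rng.normal(size=(hid_dim, feat_dim)) * 0.01
Wh = rng.normal(size=(hid_dim, hid_dim)) * 0.01
w = rng.normal(size=hid_dim) * 0.01

scores = np.tanh(V @ Wv.T + h @ Wh.T) @ w      # additive attention scores
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                           # attention over regions

context = alpha @ V                            # context vector for the next word
print(context.shape)                           # (512,)
```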
A Hypermedia Version Control Framework
The areas of application of hypermedia technology, combined with the capabilities that hypermedia provides for manipulating structure, create an environment in which version control is very important. A hypermedia version control framework has been designed specifically to address the version control problem in open hypermedia environments. One of the primary distinctions of the framework is its partitioning of hypermedia version control functionality into intrinsic and application-specific categories. The framework has been used as a model for the design of version control services for a hyperbase management system that provides complete version support for both data and structural entities. In addition to serving as a version control model for open hypermedia environments, the framework offers a clarifying and unifying context in which to examine the issues of version control in hypermedia.