Biogenic synthesis of silver nanoparticles using Nicotiana tabacum leaf extract and study of their antibacterial effect
1 Department of Environmental Biotechnology, Ashok & Rita Patel Institute of Integrated Study and Research in Biotechnology and Allied Sciences, New Vallabh Vidyanagar, Anand, Gujarat, 388121, India. 2 Amity Institute of Microbial Technology, Amity University, Sector 125, Noida, U.P., 201303, India. 3 Nano and Computational Materials Lab, Catalysis Division, National Chemical Laboratory, Council of Scientific and Industrial Research, Pune 411008, India.
A 10-bit CMOS DAC With Current Interpolated Gamma Correction for LCD Source Drivers
This paper presents a compact 10-bit digital-to-analog converter (DAC) for LCD source drivers. The cyclic DAC architecture is used to reduce the area of LCD column drivers when compared to the use of conventional resistor-string DACs. The current interpolation technique is proposed to perform gamma correction after D/A conversion. The gamma correction circuit is shared by four DAC channels using the interleave technique. A prototype 10-bit DAC with gamma correction function is implemented in 0.35 μm CMOS technology and its average die size per channel is 0.053 mm², which is smaller than those of the R-DACs with gamma correction function. The settling time of the 10-bit DAC is 1 μs, and the maximum INL and DNL are 2.13 least significant bit (LSB) and 1.30 LSB, respectively.
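The abstract does not detail the interpolation circuit itself, but the underlying idea of gamma correction between a few coarse reference levels can be sketched numerically. All names and values below (reference voltage, gamma exponent, number of taps) are illustrative assumptions, not the paper's design:

```python
import numpy as np

# Hypothetical 10-bit gamma correction by piecewise-linear interpolation
# between a few gamma-corrected reference voltages (tap points),
# conceptually similar to interpolating between coarse levels after
# D/A conversion.

VREF = 3.3            # assumed full-scale output voltage
GAMMA = 2.2           # assumed display gamma
taps = np.linspace(0, 1023, 9).astype(int)        # 9 coarse tap codes
tap_v = VREF * (taps / 1023.0) ** (1.0 / GAMMA)   # gamma-corrected tap voltages

def gamma_dac(code):
    """Map a 10-bit input code to an output voltage by interpolating
    linearly between the gamma-corrected tap voltages."""
    return float(np.interp(code, taps, tap_v))
```

In a real source driver the interpolation would be done in the analog (current) domain; this sketch only shows the transfer-curve shape such a circuit approximates.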
Design, development and characterization of serratiopeptidase loaded albumin nanoparticles
Article history: Received on: 18/11/2014 Revised on: 09/12/2014 Accepted on: 11/01/2015 Available online: 27/02/2015 In recent years, albumin nanoparticles have been widely studied for the delivery of various active pharmaceuticals with enhanced accumulation at the site of inflammation. Albumin is a versatile carrier for preparing nanoparticles and nanospheres owing to its easy availability in pure form, biodegradability, and non-toxic, non-immunogenic nature. The mechanism of action of serratiopeptidase appears to be hydrolysis of histamine, bradykinin, and serotonin. Serratiopeptidase also has proteolytic and fibrinolytic effects. A protein, bovine serum albumin (BSA), was used to entrap the serratiopeptidase enzyme. Protease activity of the enzyme was checked, and the method was validated to assess the active enzyme concentration during formulation. The solvent desolvation method was used for the preparation of BSA nanoparticles. The effect of buffer pH on enzyme activity was checked. Chloroform was selected and used as the solvent for nanoparticle preparation. The effects of variables such as BSA concentration, agitation rate, glutaraldehyde concentration, and crosslinking time on the formulation were studied. The formed nanoparticles were characterized for drug content, in-vitro release, entrapment efficiency, particle size, and size distribution. The serratiopeptidase-loaded albumin nanoparticles may be used for the treatment of arthritis.
Design of DLC Layer for wireless QoS
In this paper, we propose a structure for the DLC (data link control) protocol layer, consisting of functional components together with a radio resource channel allocation method. It operates according to the current traffic volume to improve the efficiency of radio resource utilization. Different components are selected depending on the current traffic state, in particular a fraction-based data transmission buffer control method for QoS (quality of service) assurance.
Nanophotonics: Shrinking light-based technology
The study of light at the nanoscale has become a vibrant field of research, as researchers now master the flow of light at length scales far below the optical wavelength, largely surpassing the classical limits imposed by diffraction. Using metallic and dielectric nanostructures precisely sculpted into two-dimensional (2D) and 3D nanoarchitectures, light can be scattered, refracted, confined, filtered, and processed in fascinating new ways that are impossible to achieve with natural materials and in conventional geometries. This control over light at the nanoscale has not only unveiled a plethora of new phenomena but has also led to a variety of relevant applications, including new venues for integrated circuitry, optical computing, solar, and medical technologies, setting high expectations for many novel discoveries in the years to come.
Plasma proprotein convertase subtilisin-kexin type 9 is predominantly related to intermediate density lipoproteins.
OBJECTIVES Proprotein convertase subtilisin-kexin type 9 (PCSK9) is a key regulator of low density lipoprotein (LDL) receptor processing, but the PCSK9 pathway may also be implicated in the metabolism of triglyceride-rich lipoproteins. Here we determined the relationship of plasma PCSK9 with very low density lipoprotein (VLDL) and LDL subfractions. DESIGN AND METHODS The relationship of plasma PCSK9 (sandwich enzyme-linked immunosorbent assay) with 3 VLDL and 3 LDL subfractions (nuclear magnetic resonance spectroscopy) was determined in 52 subjects (30 women). RESULTS In age- and sex-adjusted analysis plasma PCSK9 was correlated positively with total cholesterol, non-high density lipoprotein cholesterol and LDL cholesterol (r=0.516 to 0.547, all p<0.001), as well as with triglycerides (r=0.286, p=0.044). PCSK9 was correlated with the VLDL particle concentration (r=0.336, p=0.017) and with the LDL particle concentration (r=0.362, p=0.010), but only the relationship with the LDL particle concentration remained significant in multivariable linear regression analysis. In an analysis which included the 3 LDL subfractions, PCSK9 was independently related to intermediate density lipoproteins (IDL) (p<0.001), but not to other LDL subfractions. CONCLUSIONS This study suggests that plasma PCSK9 predominantly relates to IDL, a triglyceride-rich LDL subfraction. The PCSK9 pathway may affect plasma triglycerides via effects on the metabolism of triglyceride-rich LDL particles.
Modelling the Dynamic Joint Policy of Teammates with Attention Multi-agent DDPG
Modelling and exploiting teammates' policies in cooperative multi-agent systems have long been an interest and also a big challenge for the reinforcement learning (RL) community. The interest lies in the fact that if the agent knows the teammates' policies, it can adjust its own policy accordingly to cooperate properly; the challenge is that the agents' policies change continuously because they are learning concurrently, which makes it difficult to model the dynamic policies of teammates accurately. In this paper, we present ATTention Multi-Agent Deep Deterministic Policy Gradient (ATT-MADDPG) to address this challenge. ATT-MADDPG extends DDPG, a single-agent actor-critic RL method, with two special designs. First, in order to model the teammates' policies, the agent should get access to the observations and actions of teammates. ATT-MADDPG adopts a centralized critic to collect such information. Second, to model the teammates' policies using the collected information in an effective way, ATT-MADDPG enhances the centralized critic with an attention mechanism. This attention mechanism introduces a special structure to explicitly model the dynamic joint policy of teammates, making sure that the collected information can be processed efficiently. We evaluate ATT-MADDPG on both benchmark tasks and real-world packet routing tasks. Experimental results show that it not only outperforms the state-of-the-art RL-based methods and rule-based methods by a large margin, but also achieves better performance in terms of scalability and robustness.
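The attention idea described above can be illustrated with a toy numerical sketch (shapes, names, and values here are hypothetical, not taken from the paper): several Q-value heads, each conditioned on one candidate joint action of the teammates, are combined by softmax attention weights computed from a query vector:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_q_value(query, head_values, head_keys):
    """Combine K Q-value heads, each conditioned on one candidate joint
    action of the teammates, using attention weights from the query."""
    weights = softmax(head_keys @ query)   # one weight per teammate-policy head
    return float(weights @ head_values)    # attended Q-value estimate

# toy example: 3 heads, 4-dimensional query
query = np.array([0.5, -0.2, 0.1, 0.3])
head_values = np.array([1.0, 2.0, 3.0])        # per-head Q estimates
head_keys = np.ones((3, 4)) * np.array([[0.1], [0.2], [0.3]])
q = attention_q_value(query, head_values, head_keys)
```

The attended value is always a convex combination of the per-head estimates, which is what lets the critic track a shifting joint policy without committing to a single head.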
Machine-Learning Attacks on PolyPUFs, OB-PUFs, RPUFs, LHS-PUFs, and PUF–FSMs
A physically unclonable function (PUF) is a circuit whose input–output behavior is designed to be sensitive to the random variations of its manufacturing process. This building block hence facilitates the authentication of any given device in a population of identically laid-out silicon chips, similar to the biometric authentication of a human. The focus and novelty of this work is the development of efficient impersonation attacks on the following five Arbiter PUF–based authentication protocols: (1) the so-called PolyPUF protocol of Konigsmark, Chen, and Wong, as published in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems in 2016, (2) the so-called OB-PUF protocol of Gao, Li, Ma, Al-Sarawi, Kavehei, Abbott, and Ranasinghe, as presented at the IEEE conference PerCom 2016, (3) the so-called RPUF protocol of Ye, Hu, and Li, as presented at the IEEE conference AsianHOST 2016, (4) the so-called LHS-PUF protocol of Idriss and Bayoumi, as presented at the IEEE conference RFID-TA 2017, and (5) the so-called PUF–FSM protocol of Gao, Ma, Al-Sarawi, Abbott, and Ranasinghe, as published in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems in 2018. The common flaw of all five designs is that the use of lightweight obfuscation logic provides insufficient protection against machine-learning attacks.
Social comparisons on social media: the impact of Facebook on young women's body image concerns and mood.
The present study experimentally investigated the effect of Facebook usage on women's mood and body image, whether these effects differ from an online fashion magazine, and whether appearance comparison tendency moderates any of these effects. Female participants (N=112) were randomly assigned to spend 10 min browsing their Facebook account, a magazine website, or an appearance-neutral control website before completing state measures of mood, body dissatisfaction, and appearance discrepancies (weight-related, and face, hair, and skin-related). Participants also completed a trait measure of appearance comparison tendency. Participants who spent time on Facebook reported being in a more negative mood than those who spent time on the control website. Furthermore, women high in appearance comparison tendency reported more facial, hair, and skin-related discrepancies after Facebook exposure than exposure to the control website. Given its popularity, more research is needed to better understand the impact that Facebook has on appearance concerns.
Paleogeographic reconstruction and origin of the Philippine Sea
Seno, T. and Maruyama, S., 1984. Paleogeographic reconstruction and origin of the Philippine Sea. In: R.L. Carlson and K. Kobayashi (Editors), Geodynamics of Back-arc Regions. Tectonophysics, 102: 53-84. Our reconstruction of the Philippine Sea suggests that it formed by two distinct episodes of back-arc spreading, each of which resulted from seaward retreat of the trench. In the first episode, the proto-Izu-Bonin Trench retreated northward and the West Philippine Basin formed behind the northern half of the Palau-Kyushu Ridge. In the second episode, the Izu-Mariana Trench retreated eastward and the Shikoku and Parece Vela Basins formed behind it. During the last 17 Ma, the Philippine Sea basin has been moving northwestward with respect to Eurasia, shifting the TTT triple junction off central Japan westward by about 50 km. The motion of the Philippine Sea with respect to Eurasia at the triple junction changed from north-northwestward to west-northwestward 10-5 Ma ago. For the period before 17 Ma ago, we construct two models, a retreating trench model and an anchored slab model. In the retreating trench model, the Izu-Bonin Trench migrated from south to northeast, rotating in a clockwise sense, since 48 Ma ago. In the anchored slab model, the trench has been fixed with respect to the Eurasian margin since 43 Ma ago. We prefer the retreating trench model because the deformation of the plate boundary along the eastern margin of Eurasia during 30-17 Ma ago is much simpler for this model than for the anchored slab model. Furthermore, rotations of the Bonin-Mariana islands are consistent with those predicted from the retreating trench model. The 48 Ma ages of the northern part of the Palau-Kyushu Ridge and of Chichi-Jima of the Bonin Islands indicate that there was subduction beneath the northern half of the ridge beginning at least 48 Ma ago.
From this and the subparallelism in trend between the northern part of the Palau-Kyushu Ridge and the Central Basin Ridge, we propose that the major part of the West Philippine Basin formed by back-arc spreading in a N-S direction behind the northern part of the Palau-Kyushu Ridge. The Pacific plate was moving northward with respect to hot-spots from 48 to 43 Ma ago, which implies that the Pacific plate is not likely to have been subducting beneath the West Philippine Basin during this time.
The spread of fake news by social bots
The massive spread of fake news has been identified as a major global risk and has been alleged to influence elections and threaten democracies. Communication, cognitive, social, and computer scientists are engaged in efforts to study the complex causes for the viral diffusion of digital misinformation and to develop solutions, while search and social media platforms are beginning to deploy countermeasures. However, to date, these efforts have been mainly informed by anecdotal evidence rather than systematic data. Here we analyze 14 million messages spreading 400 thousand claims on Twitter during and following the 2016 U.S. presidential campaign and election. We find evidence that social bots play a key role in the spread of fake news. Accounts that actively spread misinformation are significantly more likely to be bots. Automated accounts are particularly active in the early spreading phases of viral claims, and tend to target influential users. Humans are vulnerable to this manipulation, retweeting bots who post false news. Successful sources of false and biased claims are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
Wireless access monitoring and control system based on digital door lock
We propose a novel wireless access monitoring and control system based on the digital door lock, which is widely used as a digital consumer device. A digital door lock is an electronic locking system operated by a combination of a digital key, security password, or number codes. This paper presents a prototype of the proposed system and a scheme for its implementation. To implement the system with the ZigBee network protocol, four types of modules are developed: a ZigBee module, a digital door lock module, a human detection module, and a ZigBee relay module. The ZigBee module is designed to support a wireless sensor network and is also used as the ZigBee tag to identify access objects. The digital door lock module is implemented as a digital consumer device to control the access system as well as the locking system. The system is very convenient for consumers and has extensible and flexible characteristics: it can serve as a home security system over the ZigBee network with additional sensor devices. Therefore, it can be a good practical product for realizing an access monitoring and control system, and it can be applied to the real market for home networking systems. Furthermore, the system can be extended to other services, such as a connection between a mobile phone and a home networking system.
Effectiveness and safety of salmeterol in nonspecialist practice settings.
STUDY OBJECTIVES To evaluate the effectiveness and safety of inhaled salmeterol in patients managed in nonspecialist practice settings. DESIGN A randomized, double-blind, 6-month, parallel-group study involving 253 centers. SETTING Primarily nonspecialist practices (n = 232). PATIENTS A total of 911 subjects (417 men; 494 women) who met American Thoracic Society asthma criteria were enrolled and randomized to treatment with either twice-daily salmeterol aerosol (50 microg; n = 455) or matching placebo twice daily (n = 456). Both groups were allowed to take salbutamol as needed. All subjects were previously treated with anti-inflammatory maintenance therapy that was continued throughout the study. MEASUREMENTS AND RESULTS The primary outcome variable was the proportion of subjects with serious asthma exacerbations defined as an exacerbation requiring hospitalization, emergency department visit, or use of prednisone during the treatment period. A total of 712 subjects completed the study. There was no significant difference in the proportion of subjects experiencing serious exacerbations between the salmeterol and placebo groups (20.8% vs 20.9%, respectively; p = 0.935; power > 88%). Peak expiratory flow was significantly higher in the salmeterol group (398 L/min vs 386 L/min for placebo; p < 0.01). Median daily use of salbutamol was two inhalations for the salmeterol group and three inhalations for placebo (p < 0.001). The proportion of subjects sleeping through the night was significantly higher in the salmeterol group (74%) as compared to placebo (68%; p = 0.028). CONCLUSIONS Salmeterol treatment is effective in subjects typically cared for in the primary-care setting and does not increase the frequency of severe exacerbations.
Encoding Reality: Prediction-Assisted Cortical Learning Algorithm in Hierarchical Temporal Memory
In the decade since Jeff Hawkins proposed Hierarchical Temporal Memory (HTM) as a model of neocortical computation, the theory and the algorithms have evolved dramatically. This paper presents a detailed description of HTM’s Cortical Learning Algorithm (CLA), including for the first time a rigorous mathematical formulation of all aspects of the computations. Prediction Assisted CLA (paCLA), a refinement of the CLA, is presented, which is both closer to the neuroscience and adds significantly to the computational power. Finally, we summarise the key functions of neocortex which are expressed in paCLA implementations. An Open Source project, Comportex, is the leading implementation of this evolving theory of the brain.
Epoch synchronous non-overlap-add (ESNOLA) method-based concatenative speech synthesis system for Bangla
In the last decade there has been a shift toward developing speech synthesizers using concatenative synthesis rather than parametric synthesis. There are a number of different methodologies for concatenative synthesis, such as TD-PSOLA, PSOLA, and MBROLA. This paper describes a concatenative speech synthesis system for standard colloquial Bengali based on the Epoch Synchronous Non-Overlap-Add (ESNOLA) technique, which uses partnemes as the smallest signal units for concatenation. The system provides full control over prosody and intonation.
Convolutional Neural Networks vs. Convolution Kernels: Feature Engineering for Answer Sentence Reranking
In this paper, we study, compare and combine two state-of-the-art approaches to automatic feature engineering: Convolution Tree Kernels (CTKs) and Convolutional Neural Networks (CNNs) for learning to rank answer sentences in a Question Answering (QA) setting. When dealing with QA, the key aspect is to encode relational information between the constituents of question and answer in learning algorithms. For this purpose, we propose novel CNNs using relational information and combine them with relational CTKs. The results show that (i) both approaches achieve the state of the art on a question answering task, where CTKs produce higher accuracy, and (ii) combining the two methods leads to unprecedentedly high results.
Minimal Intelligence Agents for Bargaining Behaviors in Market Based Environments
This report describes simple mechanisms that allow autonomous software agents to engage in bargaining behaviors in market-based environments. Groups of agents with such mechanisms could be used in applications including market-based control, internet commerce, and economic modelling. After an introductory discussion of the rationale for this work and a brief overview of key concepts from economics, work in market-based control is reviewed to highlight the need for bargaining agents. Following this, the early experimental economics work of Smith and the recent results of Gode and Sunder are described. Gode and Sunder's work, using "zero-intelligence" (ZI) traders that act randomly within a structured market, appears to imply that convergence to the theoretical equilibrium price is determined more by market structure than by the intelligence of the traders in that market; if this is true, developing mechanisms for bargaining agents is of very limited relevance. However, it is demonstrated here that the average transaction prices of ZI traders can vary significantly from the theoretical equilibrium level when supply and demand are asymmetric, and that the degree of difference from equilibrium is predictable from a priori statistical analysis. In this sense, it is shown here that Gode and Sunder's results are artefacts of their experimental regime. Following this, "zero-intelligence-plus" (ZIP) traders are introduced: like ZI traders, these simple agents make stochastic bids, but unlike ZI traders they employ an elementary form of machine learning. Groups of ZIP traders interacting in experimental markets similar to those used by Smith and by Gode and Sunder are demonstrated, and it is shown that the performance of ZIP traders is significantly closer to the human data than is the performance of Gode and Sunder's ZI traders. This document reports on work done during February to September while the author held a Visiting Academic post at Hewlett-Packard Laboratories, Filton Road, Bristol BS QZ, U.K.
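The elementary learning rule used by ZIP traders can be sketched as follows. This is a minimal illustration assuming Cliff-style Widrow-Hoff adaptation of a seller's profit margin; the class name and parameter values are invented for the example and are not those of the report:

```python
class ZIPTrader:
    """Minimal sketch of a zero-intelligence-plus (ZIP) seller: it quotes
    its limit price marked up by an adaptive profit margin, and nudges
    that margin toward observed market prices with a delta rule."""

    def __init__(self, limit_price, beta=0.3, margin=0.2):
        self.limit = limit_price   # cost price; quotes must stay above it
        self.beta = beta           # Widrow-Hoff learning rate
        self.margin = margin       # profit margin over the limit price

    def quote(self):
        return self.limit * (1.0 + self.margin)

    def update(self, target_price):
        # Delta rule: move the margin so the quote tracks the target price,
        # never letting the margin go negative (no selling below cost).
        delta = self.beta * (target_price - self.quote())
        self.margin = max(0.0, (self.quote() + delta) / self.limit - 1.0)
```

For example, a seller with limit price 100 and a 20% margin quotes 120; after observing a target price of 110, its quote moves to 117 (a step of beta times the error).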
Differences in energy expenditure during high-speed versus standard-speed yoga: A randomized sequence crossover trial.
OBJECTIVES To compare energy expenditure and volume of oxygen consumption and carbon dioxide production during a high-speed yoga and a standard-speed yoga program. DESIGN Randomized repeated measures controlled trial. SETTING A laboratory of neuromuscular research and active aging. INTERVENTIONS Sun-Salutation B was performed, for eight minutes, at a high speed and at a standard speed separately while oxygen consumption was recorded. Caloric expenditure was calculated using volume of oxygen consumption and carbon dioxide production. MAIN OUTCOME MEASURES Difference in energy expenditure (kcal) between high-speed yoga (HSY) and standard-speed yoga (SSY). RESULTS Significant differences were observed in energy expenditure between yoga speeds, with high-speed yoga producing significantly higher energy expenditure than standard-speed yoga (MD=18.55, SE=1.86, p<0.01). Significant differences were also seen between high-speed and standard-speed yoga for volume of oxygen consumed and carbon dioxide produced. CONCLUSIONS High-speed yoga results in a significantly greater caloric expenditure than standard-speed yoga. High-speed yoga may be an effective alternative program for those targeting cardiometabolic markers.
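The abstract does not name the formula used to convert gas volumes to calories; a standard choice for computing energy expenditure from VO2 and VCO2 is the abbreviated Weir equation, sketched here (the example gas volumes are invented for illustration):

```python
def weir_kcal_per_min(vo2_l_min, vco2_l_min):
    """Abbreviated Weir equation: energy expenditure (kcal/min) from
    oxygen uptake and carbon dioxide output (both in L/min)."""
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

# e.g. an 8-minute bout at VO2 = 1.2 L/min, VCO2 = 1.1 L/min
total_kcal = 8 * weir_kcal_per_min(1.2, 1.1)
```

This linear form ignores urinary nitrogen, which is the usual simplification when only indirect calorimetry gas data are available.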
Clinical features and outcomes of gastric variceal bleeding: retrospective Korean multicenter data
BACKGROUND/AIMS While gastric variceal bleeding (GVB) is not as prevalent as esophageal variceal bleeding, it is reportedly more serious, with high failure rates of the initial hemostasis (>30%), and has a worse prognosis than esophageal variceal bleeding. However, there is limited information regarding hemostasis and the prognosis for GVB. The aim of this study was to determine retrospectively the clinical outcomes of GVB in a multicenter study in Korea. METHODS The data of 1,308 episodes of GVB (males:females=1062:246, age=55.0±11.0 years, mean±SD) were collected from 24 referral hospital centers in South Korea between March 2003 and December 2008. The rates of initial hemostasis failure, rebleeding, and mortality within 5 days and 6 weeks of the index bleed were evaluated. RESULTS The initial hemostasis failed in 6.1% of the patients, and this was associated with the Child-Pugh score [odds ratio (OR)=1.619; P<0.001] and the treatment modality: endoscopic variceal ligation, endoscopic variceal obturation, and balloon-occluded retrograde transvenous obliteration vs. endoscopic sclerotherapy, transjugular intrahepatic portosystemic shunt, and balloon tamponade (OR=0.221, P<0.001). Rebleeding developed in 11.5% of the patients, and was significantly associated with Child-Pugh score (OR=1.159, P<0.001) and treatment modality (OR=0.619, P=0.026). The GVB-associated mortality was 10.3%; mortality in these cases was associated with Child-Pugh score (OR=1.795, P<0.001) and the treatment modality for the initial hemostasis (OR=0.467, P=0.001). CONCLUSIONS The clinical outcome for GVB was better for the present cohort than in previous reports. Initial hemostasis failure, rebleeding, and mortality due to GVB were universally associated with the severity of liver cirrhosis.
Early Gestational Weight Gain Rate and Adverse Pregnancy Outcomes in Korean Women
During pregnancy, many women gain excessive weight, which is related to adverse maternal and neonatal outcomes. In this study, we evaluated whether rate of gestational weight gain (RGWG) in early, mid, and late pregnancy is strongly associated with adverse pregnancy outcomes. A retrospective chart review of 2,789 pregnant Korean women was performed. Weights were recorded at the first clinic visit, during the screening test for fetal anomaly, and during the 50g oral glucose challenge test and delivery, to represent early, mid, and late pregnancy, respectively. A multivariate logistic regression analysis was performed to examine the relationship between RGWG and adverse pregnancy outcomes. At early pregnancy, the RGWG was significantly associated with high risk of developing gestational diabetes mellitus (GDM), pregnancy-induced hypertension (PIH), large for gestational age (LGA) infants, macrosomia, and primary cesarean section (P-CS). The RGWG of mid pregnancy was not significantly associated with any adverse pregnancy outcomes. The RGWG at late pregnancy was significantly associated with a lower risk of developing GDM, preterm birth and P-CS, but with a higher risk of developing LGA infants and macrosomia. When the subjects were divided into three groups (Underweight, Normal, and Obese), based on pre-pregnancy body mass index (BMI), the relationship between early RGWG and adverse pregnancy outcomes was significantly different across the three BMI groups. At early pregnancy, RGWG was not significantly associated with adverse pregnancy outcomes for subjects in the Underweight group. In the Normal group, however, early RGWG was significantly associated with GDM, PIH, LGA infants, macrosomia, P-CS, and small for gestational age (SGA) infants, whereas early RGWG was significantly associated with only a high risk of PIH in the Obese group. 
The results of our study suggest that early RGWG is significantly associated with various adverse pregnancy outcomes and that proper preemptive management of early weight gain, particularly in pregnant women with a normal or obese pre-pregnancy BMI, is necessary to reduce the risk of developing adverse pregnancy outcomes.
Tracking the evolutionary origins of dog-human cooperation: the “Canine Cooperation Hypothesis”
At present, beyond the fact that dogs can be socialized with humans more easily than wolves, we know little about the motivational and cognitive effects of domestication. Despite this, it has been suggested that during domestication dogs have become socially more tolerant and attentive than wolves. These two characteristics are crucial for cooperation, and it has been argued that these changes allowed dogs to successfully live and work with humans. However, these domestication hypotheses have been put forward mainly on the basis of dog-wolf differences reported in their interactions with humans. Thus, it is possible that these differences reflect only an improved capability of dogs to accept humans as social partners, rather than an increase in their general tolerance, attentiveness, and cooperativeness. At the Wolf Science Center, in order to disentangle these two explanations, we raise and keep dogs and wolves similarly, socializing them with conspecifics and humans, and then test them in interactions not just with humans but also with conspecifics. When investigating attentiveness toward human and conspecific partners using different paradigms, we found that the wolves were at least as attentive as the dogs to their social partners and their actions. Based on these findings and the social ecology of wolves, we propose the Canine Cooperation Hypothesis, suggesting that wolves are characterized by high social attentiveness and tolerance and are highly cooperative. This is in contrast with the implications of most domestication hypotheses about wolves. We argue, however, that these characteristics of wolves likely provided a good basis for the evolution of dog-human cooperation.
A Quantitative Comparison of the Subgraph Miners MoFa, gSpan, FFSM, and Gaston
Several new miners for frequent subgraphs have been published recently. Whereas new approaches are presented in detail, the quantitative evaluations are often of limited value: only the performance on a small set of graph databases is discussed and the new algorithm is often only compared to a single competitor based on an executable. It remains unclear, how the algorithms work on bigger/other graph databases and which of their distinctive features is best suited for which database. We have re-implemented the subgraph miners MoFa, gSpan, FFSM, and Gaston within a common code base and with the same level of programming expertise and optimization effort. This paper presents the results of a comparative benchmarking that ran the algorithms on a comprehensive set of graph databases.
Modulation of event-related desynchronization during motor imagery with transcranial direct current stimulation (tDCS) in patients with chronic hemiparetic stroke
Electroencephalogram-based brain–computer interface (BCI) has been developed as a new neurorehabilitative tool for patients with severe hemiparesis. However, its application has been limited because of difficulty detecting stable brain signals from the affected hemisphere. It has been reported that transcranial direct current stimulation (tDCS) can modulate event-related desynchronization (ERD) in healthy persons. The objective of this study was to test the hypothesis that anodal tDCS could modulate ERD in patients with severe hemiparetic stroke. The participants were six patients with chronic hemiparetic stroke (mean age, 56.8 ± 9.5 years; mean time from the onset, 70.0 ± 19.6 months; Fugl-Meyer Assessment upper extremity motor score, 30.8 ± 16.5). We applied anodal tDCS (10 min, 1 mA) and sham stimulation over the affected primary motor cortex in a random order. ERD of the mu rhythm (mu ERD) with motor imagery of extension of the affected finger was assessed before and after anodal tDCS and sham stimulation. Mu ERD of the affected hemisphere increased significantly after anodal tDCS, whereas it did not change after sham stimulation. Our results show that anodal tDCS can increase mu ERD in patients with hemiparetic stroke, indicating that anodal tDCS could be used as a conditioning tool for BCI in stroke patients.
Effect of a thromboxane A(2) antagonist on sputum production and its physicochemical properties in patients with mild to moderate asthma.
STUDY OBJECTIVE To determine the effects of a specific thromboxane A(2) (TxA(2)) receptor antagonist, seratrodast, on asthma control and airway secretions. DESIGN Multicenter, double-blind, randomized, placebo-controlled study. PATIENTS Forty-five patients with mild to moderate asthma who had been continuously expectorating sputum of > 20 g/d. Patients with a current pulmonary infection or taking oral corticosteroids, antibiotics, or mucolytic agents were excluded from the trial. INTERVENTIONS Following a 2-week run-in period, while pulmonary function, sputum production, and mucociliary function were assessed, patients were assigned to receive seratrodast, 40 mg/d, or placebo for 6 weeks. MEASUREMENTS AND RESULTS During the treatment period, the changes in FEV(1) and peak expiratory flow (PEF) were not different between the two patient groups, but there were significant reductions in diurnal variation of PEF (p = 0.034), frequency of daytime asthma symptoms (p = 0.030), and daytime supplemental use of beta(2)-agonist (p = 0.032) in the seratrodast group. For sputum analysis, seratrodast treatment decreased the amount of sputum (p = 0.005), dynamic viscosity (p = 0.007), and albumin concentration (p = 0.028), whereas it had no effect on elastic modulus or fucose concentration. Nasal clearance time of a saccharin particle was shortened in the seratrodast group at week 4 (p = 0.031) and week 6 (p = 0.025), compared with the placebo group. CONCLUSION Blockade of TxA(2) receptor has minimal effects on pulmonary function, but may cause an improvement in mucociliary clearance by decreasing the viscosity of airway secretions.
Attention U-Net: Learning Where to Look for the Pancreas
We propose a novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules of cascaded convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN architectures such as the U-Net model with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed Attention U-Net architecture is evaluated on two large CT abdominal datasets for multi-class image segmentation. Experimental results show that AGs consistently improve the prediction performance of U-Net across different datasets and training sizes while preserving computational efficiency. The source code for the proposed architecture is publicly available.
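The gating mechanism described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the function name `attention_gate`, the projection matrices `W_x` and `W_g`, and the vector `psi` are hypothetical, and the real model applies this per spatial location inside a CNN using learned 1×1 convolutions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate (sketch).

    x : skip-connection features, shape (n_locations, c_x)
    g : coarser-scale gating signal, shape (n_locations, c_g)
    W_x, W_g : learned projections into a common intermediate space
    psi : learned vector collapsing the intermediate space to a scalar

    Returns the gated features x * alpha and the attention map alpha:
    irrelevant locations are suppressed (alpha near 0) while salient
    ones pass through (alpha near 1).
    """
    q = relu(x @ W_x + g @ W_g)      # joint feature per location
    alpha = sigmoid(q @ psi)         # attention coefficient in (0, 1)
    return x * alpha[:, None], alpha
```

Because the gate only rescales the skip features, it can be dropped into a standard U-Net skip connection without changing tensor shapes, which is why the overhead is small.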
Hairpin bandpass filter with tunable center frequency and tunable bandwidth based on screen printed ferroelectric varactors
This paper presents a twofold tunable planar hairpin filter to simultaneously control center frequency and bandwidth. Tunability is achieved by using functional thick film layers of the ferroelectric material Barium-Strontium-Titanate (BST). The center frequency of the filter is adjusted by varactors which are loading the hairpin resonators. Coupling varactors between the hairpin resonators enable the control of the bandwidth. The proposed filter structure is designed for a center frequency range from 650 MHz to 920 MHz and a bandwidth between 25 MHz and 85 MHz. This covers the specifications of the lower GSM bands. The functionality of the design is experimentally validated and confirmed by simulation results.
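The frequency tuning of a varactor-loaded resonator follows the standard LC resonance formula. The sketch below estimates the capacitance swing a BST varactor would need to cover the stated 650-920 MHz range; the effective inductance `L_EFF` is a hypothetical value chosen for illustration, not a parameter from the paper.

```python
import math

def resonant_frequency(L, C):
    """f = 1 / (2*pi*sqrt(L*C)) for a lumped LC resonator."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def capacitance_for(L, f):
    """Capacitance needed to resonate at frequency f with inductance L."""
    return 1.0 / (L * (2.0 * math.pi * f) ** 2)

# Hypothetical effective resonator inductance; the real value is not given.
L_EFF = 8e-9  # 8 nH

C_LOW = capacitance_for(L_EFF, 650e6)    # low band edge: 650 MHz
C_HIGH = capacitance_for(L_EFF, 920e6)   # high band edge: 920 MHz
TUNING_RATIO = C_LOW / C_HIGH            # capacitance swing required
```

Whatever inductance is assumed, the required capacitance ratio is (920/650)^2, roughly 2:1, which sets the varactor tunability the ferroelectric layer must deliver.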
Privacy in mobile technology for personal healthcare
Information technology can improve the quality, efficiency, and cost of healthcare. In this survey, we examine the privacy requirements of mobile computing technologies that have the potential to transform healthcare. Such mHealth technology enables physicians to remotely monitor patients' health and enables individuals to manage their own health more easily. Despite these advantages, privacy is essential for any personal monitoring technology. Through an extensive survey of the literature, we develop a conceptual privacy framework for mHealth, itemize the privacy properties needed in mHealth systems, and discuss the technologies that could support privacy-sensitive mHealth systems. We end with a list of open research questions.
Graph Convolutional Neural Networks via Scattering
We generalize the scattering transform to graphs and consequently construct a convolutional neural network on graphs. We show that under certain conditions, any feature generated by such a network is approximately invariant to permutations and stable to graph manipulations. Numerical results demonstrate competitive performance on relevant datasets.
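A first-order version of such a construction can be sketched with diffusion wavelets. The code below is our own illustration under simplifying assumptions, not the paper's construction: it builds dyadic band-pass operators from powers of a lazy random-walk matrix and averages absolute coefficients over nodes, which makes the resulting features exactly invariant to node permutations.

```python
import numpy as np

def lazy_walk(A):
    """Lazy random-walk matrix T = (I + D^-1 A) / 2 of a graph
    with adjacency matrix A (no isolated nodes)."""
    d = A.sum(axis=1)
    P = A / d[:, None]
    return 0.5 * (np.eye(len(A)) + P)

def scattering_features(A, x):
    """First-order graph scattering sketch: zeroth-order average plus
    node-averaged magnitudes of dyadic wavelet coefficients
    Psi_j x = (T^(2^(j-1)) - T^(2^j)) x."""
    T = lazy_walk(A)
    feats = [np.abs(x).mean()]
    for j in range(1, 4):
        low = np.linalg.matrix_power(T, 2 ** (j - 1))
        high = np.linalg.matrix_power(T, 2 ** j)
        psi_x = (low - high) @ x     # band-pass coefficient per node
        feats.append(np.abs(psi_x).mean())
    return np.array(feats)
```

Relabeling the nodes permutes both the walk matrix and the signal consistently, so the node-averaged magnitudes are unchanged, mirroring the permutation-invariance claim above.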
Inferring templates from spreadsheets
We present a study investigating the performance of a system for automatically inferring spreadsheet templates. These templates allow users to safely edit spreadsheets, that is, certain kinds of errors such as range, reference, and type errors can be provably prevented. Since the inference of templates is inherently ambiguous, such a study is required to demonstrate the effectiveness of any such automatic system. The study results show that the system considered performs significantly better than subjects with intermediate to expert level programming expertise. These results are important because the translation of the huge body of existing spreadsheets into a system based on safety-guaranteeing templates cannot be performed without automatic support. We also carried out post-hoc analyses of the video recordings of the subjects' interactions with the spreadsheets and found that although expert-level subjects needed less time and developed more accurate templates than less experienced subjects, they did not inspect fewer cells in the spreadsheet.
Direct Entry of Gadolinium into the Vestibule Following Intratympanic Applications in Guinea Pigs and the Influence of Cochlear Implantation
Although intratympanic (IT) administration of drugs has gained wide clinical acceptance, the distribution of drugs in the inner ear following IT administration is not well established. Gadolinium (Gd) has been previously used as a marker in conjunction with magnetic resonance imaging (MRI) to visualize distribution in inner ear fluids in a qualitative manner. In the present study, we applied gadolinium chelated with diethylenetriamine penta-acetic acid (Gd-DTPA) to the round window niche of 12 guinea pigs using Seprapack™ (carboxymethylcellulose-hyaluronic acid) pledgets which stabilized the fluid volume in the round window niche. Gd-DTPA distribution was monitored sequentially with time following application. Distribution in normal, unperforated ears was compared with ears that had undergone a cochleostomy in the basal turn of scala tympani and implantation with a silastic electrode. Results were quantified using image analysis software. In all animals, Gd-DTPA was seen in the lower basal scala tympani (ST), scala vestibuli (SV), and throughout the vestibule and semi-circular canals by 1 h after application. Although Gd-DTPA levels in ST were higher than those in the vestibule in a few ears, the majority showed higher Gd-DTPA levels in the vestibule than ST at both early and later time points. Quantitative computer simulations of the experiment, taking into account the larger volume of the vestibule compared to scala tympani, suggest most Gd-DTPA (up to 90%) entered the vestibule directly in the vicinity of the stapes rather than indirectly through the round window membrane and ST. Gd-DTPA levels were minimally affected by the implantation procedure after 1 h. Gd-DTPA levels in the basal turn of scala tympani were lower in implanted animals, but the difference compared to non-implanted ears did not reach statistical significance.
Cooling via one hand improves physical performance in heat-sensitive individuals with Multiple Sclerosis: A preliminary study
BACKGROUND Many individuals afflicted with multiple sclerosis (MS) experience a transient worsening of symptoms when body temperature increases due to ambient conditions or physical activity. Resulting symptom exacerbations can limit performance. We hypothesized that extraction of heat from the body through the subcutaneous retia venosa that underlie the palmar surfaces of the hands would reduce exercise-related heat stress and thereby increase the physical performance capacity of heat-sensitive individuals with MS. METHODS Ten ambulatory MS patients completed one or more randomized paired trials of walking on a treadmill in a temperate environment with and without cooling. Stop criteria were symptom exacerbation and subjective fatigue. The cooling treatment entailed inserting one hand into a rigid chamber through an elastic sleeve that formed an airtight seal around the wrist. A small vacuum pump created a -40 mm Hg subatmospheric pressure environment inside the chamber, where the palmar surface of the hand rested on a metal surface maintained at 18-22 degrees C. During the treatment trials, the device was suspended from above the treadmill on a bungee cord so the subjects could comfortably keep a hand in the device without having to bear its weight while walking on the treadmill. RESULTS When the trials were grouped by treatment only, cooling treatment increased exercise durations by 33% (43.6 +/- 17.1 min with treatment vs. 32.8 +/- 10.9 min without treatment, mean +/- SD, p < 5.0 × 10⁻⁶, paired t-test, n = 26). When the average values were calculated for the subjects who performed multiple trials before the treatment group results were compared, cooling treatment increased exercise duration by 35% (42.8 +/- 16.4 min with treatment vs. 31.7 +/- 9.8 min without treatment, mean +/- SD, p < 0.003, paired t-test, n = 10).
CONCLUSION These preliminary results suggest that utilization of the heat transfer capacity of the non-hairy skin surfaces can enable temperature-sensitive individuals with MS to extend participation in day-to-day physical activities despite thermally stressful conditions. However, systematic longitudinal studies in larger cohorts of MS patients with specific deficits and levels of disability conducted under a variety of test conditions are needed to confirm these preliminary findings.
A Shoulder Surfing Resistant Graphical Authentication System
Authentication based on passwords is used largely in applications for computer security and privacy. However, human actions such as choosing bad passwords and inputting passwords in an insecure way are regarded as “the weakest link” in the authentication chain. Rather than arbitrary alphanumeric strings, users tend to choose passwords that are either short or meaningful for easy memorization. With web applications and mobile apps piling up, people can access these applications anytime and anywhere with various devices. This evolution brings great convenience but also increases the probability of exposing passwords to shoulder surfing attacks. Attackers can observe directly or use external recording devices to collect users’ credentials. To overcome this problem, we proposed a novel authentication system, PassMatrix, based on graphical passwords to resist shoulder surfing attacks. With a one-time valid login indicator and circulative horizontal and vertical bars covering the entire scope of pass-images, PassMatrix offers no hint for attackers to figure out or narrow down the password even if they conduct multiple camera-based attacks. We also implemented a PassMatrix prototype on Android and carried out real user experiments to evaluate its memorability and usability. The experimental results show that the proposed system achieves better resistance to shoulder surfing attacks while maintaining usability.
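The one-time indicator idea can be illustrated with a toy model. The sketch below is a simplification we constructed from the description above, not the actual PassMatrix protocol: a pass-square at (row, col) on a grid, a fresh indicator each session, and circulative (wrapping) bar offsets that align the indicator with the pass-square.

```python
def expected_alignment(pass_cell, indicator, grid=(7, 11)):
    """Bar offsets (rows, cols) the user must dial so the one-time
    indicator lands on the pass-square; bars wrap circularly."""
    rows, cols = grid
    dr = (pass_cell[0] - indicator[0]) % rows
    dc = (pass_cell[1] - indicator[1]) % cols
    return dr, dc

def verify(pass_cell, indicator, response, grid=(7, 11)):
    """Server-side check of one login round in this toy model."""
    return response == expected_alignment(pass_cell, indicator, grid)
```

Because the indicator is delivered privately, an observer who sees only the dialed offsets finds every grid cell equally consistent with them, which is the intuition behind the shoulder-surfing resistance.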
Can virtual reality improve anatomy education? A randomised controlled study of a computer-generated three-dimensional anatomical ear model.
INTRODUCTION The use of computer-generated 3-dimensional (3-D) anatomical models to teach anatomy has proliferated. However, there is little evidence that these models are educationally effective. The purpose of this study was to test the educational effectiveness of a computer-generated 3-D model of the middle and inner ear. METHODS We reconstructed a fully interactive model of the middle and inner ear from a magnetic resonance imaging scan of a human cadaver ear. To test the model's educational usefulness, we conducted a randomised controlled study in which 28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model. At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear. RESULTS The intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001). DISCUSSION Our findings stand in contrast to the handful of previous randomised controlled trials that evaluated the effects of computer-generated 3-D anatomical models on learning. The equivocal and negative results of these previous studies may be due to the limitations of these studies (such as small sample size) as well as the limitations of the models that were studied (such as a lack of full interactivity). Given our positive results, we believe that further research is warranted concerning the educational effectiveness of computer-generated anatomical models.
The Effect of Syndesmosis Screw Removal on the Reduction of the Distal Tibiofibular Joint: A Prospective Radiographic Study.
BACKGROUND Injury to the tibiofibular syndesmosis is frequent with rotational ankle injuries. Multiple studies have shown a high rate of syndesmotic malreduction with the placement of syndesmotic screws. There are no studies evaluating the reduction or malreduction of the syndesmosis after syndesmotic screw removal. The purpose of this study was to prospectively evaluate syndesmotic reduction with CT scans and to determine the effect of screw removal on the malreduced syndesmosis. METHODS This was an IRB-approved prospective radiographic study. Patients over 18 years of age treated at 1 institution between August 2008 and December 2011 with intraoperative evidence of syndesmotic disruption were enrolled. Postoperative CT scans were obtained of bilateral ankles within 2 weeks of operative fixation. Syndesmotic screws were removed after 3 months, and a second CT scan was then obtained 30 days after screw removal. Using axial CT images, syndesmotic reduction was evaluated compared to the contralateral uninjured ankle. Twenty-five patients were enrolled in this prospective study. The average age was 25.7 (range, 19 to 35), with 3 females and 22 males. RESULTS Nine patients (36%) had evidence of tibiofibular syndesmosis malreduction on their initial postoperative axial CT scans. In the postsyndesmosis screw removal CT scan, 8 of 9 (89%) malreductions showed adequate reduction of the tibiofibular syndesmosis. There was a statistically significant reduction in syndesmotic malreductions (t = 3.333, P < .001) between the initial rate of malreduction after screw placement of 36% (9/25) and the rate of malreduction after all screws were removed of 4% (1/25). CONCLUSIONS Despite a high rate of initial malreduction (36%) after syndesmosis screw placement, 89% of the malreduced syndesmoses spontaneously reduced after screw removal.
Syndesmotic screw removal may be advantageous to achieve final anatomic reduction of the distal tibiofibular joint, and we recommend it for the malreduced syndesmosis. LEVEL OF EVIDENCE Level IV, prognostic case series.
Annona muricata (Annonaceae): A Review of Its Traditional Uses, Isolated Acetogenins and Biological Activities
Annona muricata is a member of the Annonaceae family and is a fruit tree with a long history of traditional use. A. muricata, also known as soursop, graviola and guanabana, is an evergreen plant that is mostly distributed in tropical and subtropical regions of the world. The fruits of A. muricata are extensively used to prepare syrups, candies, beverages, ice creams and shakes. A wide array of ethnomedicinal activities is attributed to different parts of A. muricata, and indigenous communities in Africa and South America extensively use this plant in their folk medicine. Numerous investigations have substantiated these activities, including anticancer, anticonvulsant, anti-arthritic, antiparasitic, antimalarial, hepatoprotective and antidiabetic activities. Phytochemical studies reveal that annonaceous acetogenins are the major constituents of A. muricata. More than 100 annonaceous acetogenins have been isolated from leaves, barks, seeds, roots and fruits of A. muricata. In view of the immense studies on A. muricata, this review strives to unite available information regarding its phytochemistry, traditional uses and biological activities.
Autonomous vehicles safe-optimal trajectory selection based on big data analysis and predefined user preferences
Autonomous Vehicle (AV) or self-driving vehicle technology promises to provide many economical and societal benefits and impacts, with safety at the top of these benefits. Trajectory or path planning is one of the essential and critical tasks in operating an autonomous vehicle. In this paper we tackle the problem of trajectory planning for fully-autonomous vehicles. Our use cases are designed for autonomous vehicles in a cloud based connected vehicle environment. This paper presents a method for selecting a safe-optimal trajectory in autonomous vehicles. Selecting the safe trajectory in our work is mainly based on Big Data mining and analysis of real-life accident data and real-time connected vehicles' data. The decision to select this trajectory is made automatically without any human intervention. The only human involvement in this scenario is defining and prioritizing the driving preferences and concerns at the beginning of the planned trip. Safety always overrides the ranked user preferences listed in this work. The output of this work is a safe trajectory represented by the position, ETA, distance, and the estimated fuel consumption for the entire trip.
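The rule that safety always overrides ranked user preferences can be sketched as a hard filter followed by weighted scoring. The metric names (`risk`, `eta`, `fuel`) and the threshold below are illustrative assumptions, not the paper's actual model:

```python
def select_trajectory(candidates, pref_weights, risk_threshold=0.2):
    """Safe-optimal trajectory selection (sketch).

    candidates   : list of dicts with a 'risk' score plus per-trip
                   metrics such as 'eta', 'distance', 'fuel'
    pref_weights : user-ranked preference weights over those metrics

    Safety is a hard filter: risky trajectories are discarded before
    preferences are consulted, so no preference can outrank safety.
    """
    safe = [c for c in candidates if c["risk"] <= risk_threshold]
    if not safe:
        return None                      # fail closed: no safe route exists
    def cost(c):
        return sum(w * c[metric] for metric, w in pref_weights.items())
    return min(safe, key=cost)
```

With this structure, the fastest or cheapest trajectory is never chosen if it fails the safety filter, matching the override behaviour described above.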
Radiation in a Fractal Cosmology
It is shown that homogeneous radiation cannot be included in the fractal cosmological model obtained earlier by assuming an isotropic fractal cosmography, General Relativity and the Copernican Principle.
The Effect of Probiotics on Childhood Constipation: A Randomized Controlled Double Blind Clinical Trial
Background. Inconsistent data exist about the role of probiotics in the treatment of constipated children. The aim of this study was to investigate the effectiveness of probiotics in childhood constipation. Materials and Methods. In this placebo controlled trial, fifty-six children aged 4-12 years with constipation were randomly assigned to receive lactulose plus Protexin or lactulose plus placebo daily for four weeks. Stool frequency and consistency, abdominal pain, fecal incontinence, and weight gain were studied at the beginning, after the first week, and at the end of the 4th week in both groups. Results. Forty-eight patients completed the study. At the end of the fourth week, the frequency and consistency of defecation improved significantly (P = 0.042 and P = 0.049, resp.). At the end of the first week, fecal incontinence and abdominal pain improved significantly in the intervention group (P = 0.030 and P = 0.017, resp.) but, at the end of the fourth week, this difference was not significant (P = 0.125 and P = 0.161, resp.). A significant weight gain was observed at the end of the 1st week in the treatment group. Conclusion. This study showed that probiotics had a positive role in increasing the frequency and improving the consistency of defecation at the end of the 4th week.
Local-Learning-Based Feature Selection for High-Dimensional Data Analysis
This paper considers feature selection for data classification in the presence of a huge number of irrelevant features. We propose a new feature-selection algorithm that addresses several major issues with prior work, including problems with algorithm implementation, computational complexity, and solution accuracy. The key idea is to decompose an arbitrarily complex nonlinear problem into a set of locally linear ones through local learning, and then learn feature relevance globally within the large margin framework. The proposed algorithm is based on well-established machine learning and numerical analysis techniques, without making any assumptions about the underlying data distribution. It is capable of processing many thousands of features within minutes on a personal computer while maintaining a very high accuracy that is nearly insensitive to a growing number of irrelevant features. Theoretical analyses suggest that the algorithm has logarithmic sample complexity with respect to the number of features. Experiments on 11 synthetic and real-world data sets demonstrate the viability of our formulation of the feature-selection problem for supervised learning and the effectiveness of our algorithm.
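The idea of turning a nonlinear problem into local comparisons can be illustrated with a Relief-style sketch, a classic local-learning relative rather than the authors' algorithm: each sample's nearest same-class neighbour (hit) and nearest opposite-class neighbour (miss) define a per-feature margin, and features that consistently enlarge the margin receive larger weights.

```python
import numpy as np

def local_margin_weights(X, y):
    """Relief-style local-learning feature weighting (sketch).

    For every sample, the per-feature distance to its nearest miss
    minus the distance to its nearest hit is accumulated; relevant
    features separate classes locally and so accumulate positive
    weight, while irrelevant features hover around zero.
    """
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        same = X[y == y[i]]
        diff = X[y != y[i]]
        same = same[~np.all(same == X[i], axis=1)]   # drop the sample itself
        hit = same[np.argmin(np.abs(same - X[i]).sum(axis=1))]
        miss = diff[np.argmin(np.abs(diff - X[i]).sum(axis=1))]
        w += np.abs(X[i] - miss) - np.abs(X[i] - hit)
    w = np.maximum(w, 0.0)               # negative relevance -> irrelevant
    return w / (w.sum() + 1e-12)
```

On data where one feature carries the class signal and another is noise, the signal feature dominates the normalized weight vector, which is the behaviour the margin framework above formalizes.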
Empirical, theoretical, and practical advantages of the HEXACO model of personality structure.
The authors argue that a new six-dimensional framework for personality structure--the HEXACO model--constitutes a viable alternative to the well-known Big Five or five-factor model (B5/FFM). The new model is consistent with the cross-culturally replicated finding of a common six-dimensional structure containing the factors Honesty-Humility (H), Emotionality (E), Extraversion (X), Agreeableness (A), Conscientiousness (C), and Openness to Experience (O). Also, the HEXACO model predicts several personality phenomena that are not explained within the B5/FFM, including the relations of personality factors with theoretical biologists' constructs of reciprocal and kin altruism and the patterns of sex differences in personality traits. In addition, the HEXACO model accommodates several personality variables that are poorly assimilated within the B5/FFM.
Impact of enhanced geothermal systems on US energy supply in the twenty-first century.
Recent national focus on the value of increasing US supplies of indigenous renewable energy underscores the need for re-evaluating all alternatives, particularly those that are large and well distributed nationally. A panel was assembled in September 2005 to evaluate the technical and economic feasibility of geothermal becoming a major supplier of primary energy for US base-load generation capacity by 2050. Primary energy produced from both conventional hydrothermal and enhanced (or engineered) geothermal systems (EGS) was considered on a national scale. This paper summarizes the work of the panel which appears in complete form in a 2006 MIT report, 'The future of geothermal energy' parts 1 and 2. In the analysis, a comprehensive national assessment of US geothermal resources, evaluation of drilling and reservoir technologies and economic modelling was carried out. The methodologies employed to estimate geologic heat flow for a range of geothermal resources were utilized to provide detailed quantitative projections of the EGS resource base for the USA. Thirty years of field testing worldwide was evaluated to identify the remaining technology needs with respect to drilling and completing wells, stimulating EGS reservoirs and converting geothermal heat to electricity in surface power and energy recovery systems. Economic modelling was used to develop long-term projections of EGS in the USA for supplying electricity and thermal energy. Sensitivities to capital costs for drilling, stimulation and power plant construction, and financial factors, learning curve estimates, and uncertainties and risks were considered.
Effects of enhanced caregiver training program on cancer caregiver’s self-efficacy, preparedness, and psychological well-being
We examined the effects of delivering an enhanced informal caregiver training (Enhanced-CT) protocol in cancer symptom and caregiver stress management to caregivers of hospitalized cancer patients. We recruited adult patients in oncology units and their informal caregivers. We utilized a two-armed, randomized controlled trial design with data collected at baseline, post-training, and at 2 and 4 weeks after hospital discharge. Primary outcomes were self-efficacy for managing patients’ cancer symptoms and caregiver stress and preparedness for caregiving. Secondary outcomes were caregiver depression, anxiety, and burden. The education comparison (EDUC) group received information about community resources. We used general linear models to test for differences in the Enhanced-CT relative to the EDUC group. We consented and randomized 138 dyads: Enhanced-CT = 68 and EDUC = 70. The Enhanced-CT group had a greater increase in caregiver self-efficacy for cancer symptom management and stress management and preparation for caregiving at the post-training assessment compared to the EDUC group but not at 2- and 4-week post-discharge assessments. There were no intervention group differences in depression, anxiety, and burden. An Enhanced-CT protocol resulted in short-term improvements in self-efficacy for managing patients’ cancer symptoms and caregiver stress and preparedness for caregiving but not in caregivers’ psychological well-being. The lack of sustained effects may be related to the single-dose nature of our intervention and the changing needs of informal caregivers after hospital discharge.
Warp: Lightweight Multi-Key Transactions for Key-Value Stores
Traditional NoSQL systems scale by sharding data across multiple servers and by performing each operation on a small number of servers. Because transactions necessarily require coordination across multiple servers, NoSQL systems often explicitly avoid making transactional guarantees in order to avoid such coordination. Past work in this space has relied either on heavyweight protocols, such as two-phase commit or Paxos, or on clock synchronization to perform this coordination. This paper presents a novel protocol for providing ACID transactions on top of a sharded data store. Called linear transactions, this protocol allows transactions to execute in natural arrival order unless doing so would violate serializability. We have fully implemented linear transactions in a commercially available data store. Experiments show that Warp achieves 3.2× higher throughput than Sinfonia’s mini-transactions on the standard TPC-C benchmark with no aborts. Further, the system achieves 96% the throughput of HyperDex even though HyperDex makes no transactional guarantees.
Quantum-Mechanical Computers
Every two years for the past 50, computers have become twice as fast while their components have become half as big. Circuits now contain wires and transistors that measure only one hundredth of a human hair in width. Because of this explosive progress, today’s machines are millions of times more powerful than their crude ancestors. But explosions do eventually dissipate, and integrated-circuit technology is running up against its limits. Advanced lithographic techniques can yield parts 1/100 the size of what is currently available. But at this scale—where bulk matter reveals itself as a crowd of individual atoms—integrated circuits barely function. A tenth the size again, the individuals assert their identity, and a single defect can wreak havoc. So if computers are to become much smaller in the future, new technology must replace or supplement what we now have.
Multiple effects of silymarin on the hepatitis C virus lifecycle.
UNLABELLED Silymarin, an extract from milk thistle (Silybum marianum), and its purified flavonolignans have recently been shown to inhibit hepatitis C virus (HCV) infection, both in vitro and in vivo. In the current study, we further characterized silymarin's antiviral actions. Silymarin had antiviral effects against hepatitis C virus cell culture (HCVcc) infection that included inhibition of virus entry, RNA and protein expression, and infectious virus production. Silymarin did not block HCVcc binding to cells but inhibited the entry of several viral pseudoparticles (pp), and fusion of HCVpp with liposomes. Silymarin but not silibinin inhibited genotype 2a NS5B RNA-dependent RNA polymerase (RdRp) activity at concentrations 5 to 10 times higher than required for anti-HCVcc effects. Furthermore, silymarin showed little activity against the genotype 1b BK isolate and four 1b RdRps derived from HCV-infected patients. Moreover, silymarin did not inhibit HCV replication in five independent genotype 1a, 1b, and 2a replicon cell lines that did not produce infectious virus. Silymarin inhibited microsomal triglyceride transfer protein activity, apolipoprotein B secretion, and infectious virion production into culture supernatants. Silymarin also blocked cell-to-cell spread of virus. CONCLUSION Although inhibition of in vitro NS5B polymerase activity is demonstrable, the mechanisms of silymarin's antiviral action appear to include blocking of virus entry and transmission, possibly by targeting the host cell.
Understanding Data Augmentation for Classification: When to Warp?
In this paper we investigate the benefit of augmenting data with synthetically created samples when training a machine learning classifier. Two approaches for creating additional training samples are data warping, which generates additional samples through transformations applied in the data-space, and synthetic over-sampling, which creates additional samples in feature-space. We experimentally evaluate the benefits of data augmentation for a convolutional backpropagation-trained neural network, a convolutional support vector machine and a convolutional extreme learning machine classifier, using the standard MNIST handwritten digit dataset. We found that while it is possible to perform generic augmentation in feature-space, if plausible transforms for the data are known then augmentation in data-space provides a greater benefit for improving performance and reducing overfitting.
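The data-space versus feature-space distinction can be made concrete in a few lines. The following is our own minimal NumPy illustration, not the paper's experimental code: a label-preserving translation stands in for data-space warping, and SMOTE-style interpolation stands in for feature-space over-sampling.

```python
import numpy as np

def augment_dataspace(img, max_shift=2, rng=None):
    """Data-space warping (sketch): a small random translation of the
    raw image, a plausible label-preserving transform for digit images."""
    if rng is None:
        rng = np.random.default_rng()
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def augment_featurespace(a, b, rng=None):
    """Feature-space over-sampling (SMOTE-like sketch): a synthetic
    sample interpolated between two same-class feature vectors."""
    if rng is None:
        rng = np.random.default_rng()
    t = rng.uniform()
    return a + t * (b - a)
```

The data-space transform needs domain knowledge (that translation preserves the label), while the feature-space version is generic; the paper's finding is that the former helps more when such plausible transforms are known.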
A SEALANT for Inter-App Security Holes in Android
Android's communication model has a major security weakness: malicious apps can manipulate other apps into performing unintended operations and can steal end-user data, while appearing ordinary and harmless. This paper presents SEALANT, a technique that combines static analysis of app code, which infers vulnerable communication channels, with runtime monitoring of inter-app communication through those channels, which helps to prevent attacks. SEALANT's extensive evaluation demonstrates that (1) it detects and blocks inter-app attacks with high accuracy in a corpus of over 1,100 real-world apps, (2) it suffers from fewer false alarms than existing techniques in several representative scenarios, (3) its performance overhead is negligible, and (4) end-users do not find it challenging to adopt.
Application of CRISPR/Cas9 in plant biology
The CRISPR/Cas (clustered regularly interspaced short palindromic repeats/CRISPR-associated proteins) system was first identified in bacteria and archaea and can degrade exogenous substrates. It was developed as a gene editing technology in 2013. Over the subsequent years, it has received extensive attention owing to its easy manipulation, high efficiency, and wide application in gene mutation and transcriptional regulation in mammals and plants. The process of CRISPR/Cas is optimized constantly and its application has also expanded dramatically. Therefore, CRISPR/Cas is considered a revolutionary technology in plant biology. Here, we introduce the mechanism of the type II CRISPR/Cas called CRISPR/Cas9, update its recent advances in various applications in plants, and discuss its future prospects to provide an argument for its use in the study of medicinal plants.
Omnistereo: Panoramic Stereo Imaging
An omnistereo panorama consists of a pair of panoramic images, where one panorama is for the left eye and another panorama is for the right eye. The panoramic stereo pair provides a stereo sensation up to a full 360 degrees. Omnistereo panoramas cannot be photographed by two omnidirectional cameras from two viewpoints, but can be constructed by mosaicing together images from a rotating stereo pair. A more convenient approach to generate omnistereo panoramas is by mosaicing images from a single rotating camera. This approach also enables control of stereo disparity, giving a larger baseline for faraway scenes and a smaller baseline for closer scenes. Capturing panoramic omnistereo images with a rotating camera makes it impossible to capture dynamic scenes at video rates, and limits omnistereo imaging to stationary scenes. We therefore present two possibilities for capturing omnistereo panoramas using optics, without any moving parts. A special mirror is introduced such that viewing the scene through this mirror creates the same rays as those used with the rotating cameras. A lens for omnistereo panoramas is also introduced. The designs of the mirror and of the lens are based on curves whose caustic is a circle. Omnistereo panoramas can also be rendered by computer graphics methods to represent virtual environments.
Brain functional activity during gait in normal subjects: a SPECT study
The purpose of this study was to evaluate changes in brain activity during voluntary walking in normal subjects using technetium-99m-hexamethyl-propyleneamine oxime single photon emission computed tomography. This study included 14 normal subjects. Statistical parametric mapping analysis revealed that the supplementary motor area, medial primary sensorimotor area, the striatum, the cerebellar vermis and the visual cortex were activated. These results suggested that the cerebral cortices controlling motor functions, visual cortex, basal ganglia and the cerebellum might be involved in the bipedal locomotor activities in humans.
Deathbed Confession: When a Dying Patient Confesses to Murder: Clinical, Ethical, and Legal Implications.
During an initial palliative care assessment, a dying man discloses that he had killed several people whilst a young man. The junior doctor, to whom he revealed his story, consulted with senior palliative care colleagues. It was agreed that legal advice would be sought on the issue of breaching the man's confidentiality. Two legal opinions conflicted with each other. A decision was made by the clinical team not to inform the police. In this article the junior doctor, the palliative medicine specialist, a medical ethicist, and a lawyer consider the case from their various perspectives.
Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Visual Landmarks
A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, to build a map of the environment. Most of the existing algorithms are based on laser range finders, sonar sensors or artificial landmarks. In this paper, we describe a vision-based mobile robot localization and mapping algorithm, which uses scale-invariant image features as natural landmarks in unmodified environments. The invariance of these features to image translation, scaling and rotation makes them suitable landmarks for mobile robot localization and map building. With our Triclops stereo vision system, these landmarks are localized and robot ego-motion is estimated by least-squares minimization of the matched landmarks. Feature viewpoint variation and occlusion are taken into account by maintaining a view direction for each landmark. Experiments show that these visual landmarks are robustly matched, robot pose is estimated and a consistent three-dimensional map is built. As image features are not noise-free, we carry out error analysis for the landmark positions and the robot pose. We use Kalman filters to track these landmarks in a dynamic environment, resulting in a database map with landmark positional uncertainty. KEY WORDS—localization, mapping, visual landmarks, mobile robot
Provenance and scientific workflows: challenges and opportunities
Provenance in the context of workflows, both for the data they derive and for their specification, is an essential component to allow for result reproducibility, sharing, and knowledge re-use in the scientific community. Several workshops have been held on the topic, and it has been the focus of many research projects and prototype systems. This tutorial provides an overview of research issues in provenance for scientific workflows, with a focus on recent literature and technology in this area. It is aimed at a general database research audience and at people who work with scientific data and workflows. We will (1) provide a general overview of scientific workflows, (2) describe research on provenance for scientific workflows and show in detail how provenance is supported in existing systems; (3) discuss emerging applications that are enabled by provenance; and (4) outline open problems and new directions for database-related research.
Adherence and medication belief in patients with pulmonary arterial hypertension or chronic thromboembolic pulmonary hypertension: A nationwide population-based cohort survey.
BACKGROUND Pulmonary arterial hypertension (PAH) and chronic thromboembolic pulmonary hypertension (CTEPH) are rare diseases with a gradual decline in physical health. Adherence to treatment is crucial in these very symptomatic and life-threatening diseases. OBJECTIVE To describe PAH and CTEPH patients' experiences of their self-reported medication adherence, beliefs about medicines and information about treatment. METHODS A quantitative, descriptive, national cohort survey that included adult patients from all PAH centres in Sweden. All patients received questionnaires by mail: the Morisky Medication Adherence Scale (MMAS-8) assesses treatment-related attitudes and behaviour problems, the Beliefs about Medicines Questionnaire-Specific scale (BMQ-S) assesses the patient's perception of drug intake, and the QLQ-INFO25 multi-item scale assesses information about medical treatment. RESULTS The response rate was 74% (n = 325), mean age was 66 ± 14 years, 58% were female, and 69% were diagnosed with PAH and 31% with CTEPH. Time from diagnosis was 4.7 ± 4.2 years. More than half of the patients (57%) reported a high level of adherence. There was no difference in the patients' beliefs about the necessity of the medications to control their illness when comparing those with high, medium or low adherence. Despite high satisfaction with the information, concerns about potential adverse effects of taking the medication were significantly related to adherence. CONCLUSIONS Treatment adherence is relatively high but still needs improvement. The multi-disciplinary PAH team should, together with the patient, seek strategies to improve adherence and prevent concern.
Hierarchical Learning in Stochastic Domains: Preliminary Results
This paper presents the HDG learning algorithm, which uses a hierarchical decomposition of the state space to make learning to achieve goals more efficient, with a small penalty in path quality. Special care must be taken when performing hierarchical planning and learning in stochastic domains, because macro-operators cannot be executed ballistically. The HDG algorithm, which is a descendant of Watkins' Q-learning algorithm, is described here and preliminary empirical results are presented.
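As background for the abstract above, which describes HDG as a descendant of Watkins' Q-learning, a minimal tabular Q-learning backup can be sketched as follows. This is my own illustration of the base algorithm, not the HDG hierarchy itself, and the states, actions, and parameters are invented for the example:

```python
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One Watkins' Q-learning backup: move Q(s, a) toward the
    bootstrapped target r + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    target = reward + gamma * best_next
    Q[(state, action)] += alpha * (target - Q[(state, action)])
    return Q

Q = defaultdict(float)
# A single hypothetical transition: in state 0, action "right" yields reward 1.
q_update(Q, 0, "right", 1.0, 1, actions=["left", "right"])
print(Q[(0, "right")])  # 0.1 = alpha * (reward + gamma * 0 - 0)
```

HDG layers a hierarchical state-space decomposition on top of updates like this one so that goals can be reached with fewer learning trials.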
Perceived Neighborhood Social Cohesion and Condom Use Among Adolescents Vulnerable to HIV/STI
The relationship between neighborhood social dynamics and adolescent sexual behavior has not been well explored. We conducted a cross-sectional survey with 343 adolescents recruited from two health clinics in Baltimore. Multivariate logistic regression was utilized to assess the influence of perceived neighborhood social cohesion and collective monitoring of youth on condom use at last sex, controlling for family and individual factors. Condom use was significantly higher among participants who perceived their neighborhoods as high, 54.7%, versus low, 40.4%, in social cohesion. Neighborhood cohesion was significantly associated with condom use in multivariate analyses, as was parental communication, family structure, and gender. No association between perceived neighborhood collective monitoring of youth and condom use was found. We conclude that perceived neighborhood social cohesion is positively associated with condom use among adolescents vulnerable to HIV/STI and should be encouraged in the context of community-based prevention efforts.
Understanding Bland Altman analysis
In a contemporary clinical laboratory it is very common to have to assess the agreement between two quantitative methods of measurement. The correct statistical approach to assess this degree of agreement is not obvious. Correlation and regression studies are frequently proposed. However, correlation studies the relationship between one variable and another, not the differences, and it is not recommended as a method for assessing the comparability between methods.
In 1983 Altman and Bland (B&A) proposed an alternative analysis, based on quantifying the agreement between two quantitative measurements by studying the mean difference and constructing limits of agreement.
The B&A plot analysis is a simple way to evaluate a bias between the mean differences and to estimate an agreement interval within which 95% of the differences of the second method, compared to the first one, fall. Data can be analyzed both as unit differences and as percentage differences.
The B&A plot method only defines the intervals of agreement; it does not say whether those limits are acceptable. Acceptable limits must be defined a priori, based on clinical necessity, biological considerations or other goals.
The aim of this article is to provide guidance on the use and interpretation of Bland Altman analysis in method comparison studies.
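The quantities the B&A plot is built from can be computed in a few lines. The sketch below uses invented paired readings (not data from the article) and the conventional 1.96-SD limits of agreement described above:

```python
import statistics

def bland_altman(method_a, method_b):
    """Return (bias, lower_loa, upper_loa) for two paired measurement
    series, following Bland & Altman: bias is the mean of the paired
    differences; the limits of agreement are bias +/- 1.96 SD."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired readings from two analyzers (same analyte, same units)
a = [5.1, 6.3, 7.8, 5.9, 6.6]
b = [5.0, 6.5, 7.5, 6.0, 6.4]
bias, lo, hi = bland_altman(a, b)
print(bias, lo, hi)
```

Plotting the per-pair means against the differences, with horizontal lines at `bias`, `lo`, and `hi`, yields the familiar B&A plot; whether the interval `[lo, hi]` is acceptable remains a clinical, a priori judgment.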
Antimicrobial activities of commercial essential oils and their components against food-borne pathogens and food spoilage bacteria
This study was undertaken to determine the in vitro antimicrobial activities of 15 commercial essential oils and their main components in order to pre-select candidates for potential application in highly perishable food preservation. The antibacterial effects against food-borne pathogenic bacteria (Listeria monocytogenes, Salmonella Typhimurium, and enterohemorrhagic Escherichia coli O157:H7) and food spoilage bacteria (Brochothrix thermosphacta and Pseudomonas fluorescens) were tested using the paper disk diffusion method, followed by determination of minimum inhibitory (MIC) and bactericidal (MBC) concentrations. Most of the tested essential oils exhibited antimicrobial activity against all tested bacteria, except galangal oil. The essential oils of cinnamon, oregano, and thyme showed strong antimicrobial activities with MIC ≥ 0.125 μL/mL and MBC ≥ 0.25 μL/mL. Among tested bacteria, P. fluorescens was the most resistant to the selected essential oils, with MICs and MBCs of 1 μL/mL. The results suggest that the activity of the essential oils of cinnamon, oregano, thyme, and clove can be attributed mostly to the presence of cinnamaldehyde, carvacrol, thymol, and eugenol, which appear to possess similar activities against all the tested bacteria. These materials could serve as an important natural alternative to prevent bacterial growth in food products.
Mining Coherent Topics With Pre-Learned Interest Knowledge in Twitter
Discovering semantically coherent topics from the large amount of user-generated content (UGC) in social media would facilitate many downstream applications of intelligent computing. Topic models, as one of the most powerful algorithms, have been widely used to discover the latent semantic patterns in text collections. However, one key weakness of topic models is that they need documents of a certain length to provide reliable statistics for generating coherent topics. In Twitter, the users’ tweets are mostly short and noisy, and the resulting word co-occurrence statistics are too sparse for topic models to exploit. To deal with this problem, previous work tried to incorporate prior knowledge to obtain better results. However, this strategy is not practical for the fast evolving UGC in Twitter. In this paper, we first cluster the users according to the retweet network, and the users’ interests are mined as the prior knowledge. Such data are then applied to improve the performance of topic learning. The potential cause for the effectiveness of this approach is that users in the same community usually share similar interests, which results in less noisy sub-data sets. Our algorithm pre-learns two types of interest knowledge from the data set: the interest-word-sets and a tweet-interest preference matrix. Furthermore, a dedicated background model is introduced to judge whether a word is drawn from the background noise. Experiments on two real-life Twitter data sets show that our model achieves significant improvements over state-of-the-art baselines.
Unsupervised feature learning for 3D scene labeling
This paper presents an approach for labeling objects in 3D scenes. We introduce HMP3D, a hierarchical sparse coding technique for learning features from 3D point cloud data. HMP3D classifiers are trained using a synthetic dataset of virtual scenes generated using CAD models from an online database. Our scene labeling system combines features learned from raw RGB-D images and 3D point clouds directly, without any hand-designed features, to assign an object label to every 3D point in the scene. Experiments on the RGB-D Scenes Dataset v.2 demonstrate that the proposed approach can be used to label indoor scenes containing both small tabletop objects and large furniture pieces.
Local Convolutional Features with Unsupervised Training for Image Retrieval
Patch-level descriptors underlie several important computer vision tasks, such as stereo-matching or content-based image retrieval. We introduce a deep convolutional architecture that yields patch-level descriptors, as an alternative to the popular SIFT descriptor for image retrieval. The proposed family of descriptors, called Patch-CKN, adapt the recently introduced Convolutional Kernel Network (CKN), an unsupervised framework to learn convolutional architectures. We present a comparison framework to benchmark current deep convolutional approaches along with Patch-CKN for both patch and image retrieval, including our novel "RomePatches" dataset. Patch-CKN descriptors yield competitive results compared to supervised CNN alternatives on patch and image retrieval.
Movement Sonification: Effects on Motor Learning beyond Rhythmic Adjustments
Motor learning is based on motor perception and emergent perceptual-motor representations. A lot of behavioral research is related to single perceptual modalities, but during the last two decades the contribution of multimodal perception to motor behavior has received increasing attention. A growing number of studies indicates an enhanced impact of multimodal stimuli on motor perception, motor control and motor learning in terms of better precision and higher reliability of the related actions. Behavioral research is supported by neurophysiological data revealing that multisensory integration supports motor control and learning. But the overwhelming part of both research lines is dedicated to basic research. Besides research in the domains of music, dance and motor rehabilitation, there is almost no evidence for enhanced effectiveness of multisensory information on learning of gross motor skills. To reduce this gap, movement sonification is used here in applied research on motor learning in sports. Based on the current knowledge on the multimodal organization of the perceptual system, we generate additional real-time movement information suitable for integration with perceptual feedback streams of the visual and proprioceptive modalities. With ongoing training, synchronously processed auditory information should be initially integrated into the emerging internal models, enhancing the efficacy of motor learning. This is achieved by a direct mapping of kinematic and dynamic motion parameters to electronic sounds, resulting in continuous auditory and convergent audiovisual or audio-proprioceptive stimulus arrays. In sharp contrast to other approaches using acoustic information as error feedback in motor learning settings, we try to generate additional movement information suitable for acceleration and enhancement of adequate sensorimotor representations and processable below the level of consciousness.
In the experimental setting, participants were asked to learn a closed motor skill (technique acquisition of indoor rowing). One group was treated with visual information and two groups with audiovisual information (sonification vs. natural sounds). For all three groups learning became evident and remained stable. Participants treated with additional movement sonification showed better performance compared to both other groups. Results indicate that movement sonification enhances motor learning of a complex gross motor skill, even exceeding the usually expected acoustic rhythmic effects on motor learning.
3D Ego-Pose Estimation via Imitation Learning
Ego-pose estimation, i.e., estimating a person’s 3D pose with a single wearable camera, has many potential applications in activity monitoring. For these applications, both accurate and physically plausible estimates are desired, with the latter often overlooked by existing work. Traditional computer vision-based approaches using temporal smoothing only take into account the kinematics of the motion without considering the physics that underlies the dynamics of motion, which leads to pose estimates that are physically invalid. Motivated by this, we propose a novel control-based approach to model human motion with physics simulation and use imitation learning to learn a video-conditioned control policy for ego-pose estimation. Our imitation learning framework allows us to perform domain adaptation to transfer our policy trained on simulation data to real-world data. Our experiments with real egocentric videos show that our method can estimate both accurate and physically plausible 3D ego-pose sequences without observing the camera wearer’s body.
A Technique for Data Deduplication using Q-Gram Concept with Support Vector Machine
Several systems that rely on consistent data to offer high-quality services, such as digital libraries and e-commerce brokers, may be affected by the existence of duplicates, quasi-replicas, or near-duplicate entries in their repositories. Because of that, there have been significant investments from private and government organizations in developing methods for removing replicas from their data repositories. In previous work, duplicate record detection was done by generating a feature vector from three different similarity measures and then using a neural network to find the duplicate records. In this paper, we develop a Q-gram approach with a support vector machine for the deduplication process. The similarity functions used are the Dice coefficient, the Damerau–Levenshtein distance, and the Tversky index. Finally, a support vector machine is used to test whether a data record is a duplicate or not. A set of data generated from these similarity measures is used as the input to the proposed system. Two processes characterize the proposed deduplication technique: the training phase and the testing phase. The experimental results showed that the proposed technique has higher accuracy than the existing method; the accuracy obtained for the proposed deduplication is 88%.
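Two of the ingredients named above, q-grams and the Dice coefficient, combine naturally: the Dice score of two strings can be computed over their q-gram sets, and such per-pair scores form the feature vector that a classifier like the SVM then labels. The sketch below is my own minimal illustration (with made-up records), not the authors' implementation:

```python
def qgrams(s, q=2):
    """Set of character q-grams of a string (here bigrams, q=2)."""
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def dice(s1, s2, q=2):
    """Dice coefficient over q-gram sets: 2|A ∩ B| / (|A| + |B|)."""
    a, b = qgrams(s1, q), qgrams(s2, q)
    if not a and not b:
        return 1.0  # two empty strings are trivially identical
    return 2 * len(a & b) / (len(a) + len(b))

# Near-duplicate records score high; unrelated ones score low.
print(dice("john smith", "jon smith"))    # high: 14/17 ≈ 0.82
print(dice("john smith", "maria lopez"))  # low: no shared bigrams
```

In a full pipeline, each record pair would yield a vector such as `[dice, damerau_levenshtein, tversky]`, and the SVM would be trained on labeled pairs to separate duplicates from non-duplicates.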
Female orgasm(s): one, two, several.
INTRODUCTION There is general agreement that it is possible to have an orgasm through the direct stimulation of the external clitoris. In contrast, the possibility of achieving climax during penetration has been controversial. METHODS Six scientists with different experimental evidence debate the existence of the vaginally activated orgasm (VAO). MAIN OUTCOME MEASURE To give readers of The Journal of Sexual Medicine sufficient data to form their own opinion on an important topic of female sexuality. RESULTS Expert #1, the Controversy section's Editor, together with Expert #2, reviewed data from the literature demonstrating the anatomical possibility for the VAO. Expert #3 presents data validating women's reports of pleasurable sexual responses and the adaptive significance of the VAO. Echographic dynamic evidence led Expert #4 to describe one single orgasm, obtained from stimulation of either the external or internal clitoris, during penetration. Expert #5 reviewed his elegant experiments showing the uniquely different sensory responses to clitoral, vaginal, and cervical stimulation. Finally, the last Expert presented findings on the psychological scenario behind the VAO. CONCLUSION The assumption that women may experience only the clitoral, external orgasm is not based on the best available scientific evidence.
Pharmacokinetic, pharmacodynamic, and tolerability profiles of the dipeptidyl peptidase-4 inhibitor linagliptin: a 4-week multicenter, randomized, double-blind, placebo-controlled phase IIa study in Japanese type 2 diabetes patients.
BACKGROUND The dipeptidyl-peptidase-4 (DPP-4) inhibitor linagliptin is under clinical development for treatment of type 2 diabetes mellitus (T2DM). In previous studies in white populations it showed potential as a once-daily oral antidiabetic drug. OBJECTIVES In compliance with regulatory requirements for new drugs intended for use in the Japanese population, this study investigated the pharmacokinetics, pharmacodynamics, and tolerability of multiple oral doses of linagliptin in Japanese patients with T2DM. METHODS In this randomized, double-blind, placebo-controlled multiple dose study, 72 Japanese patients with T2DM were assigned to receive oral doses of linagliptin 0.5, 2.5, or 10 mg or placebo (1:1:1:1 ratio) once daily for 28 days. For analysis of pharmacokinetic properties, linagliptin concentrations were determined from plasma and urinary samples obtained throughout the treatment phase, with more intensive samplings on days 1 and 28. DPP-4 inhibition, glycosylated hemoglobin A1c (HbA(1c)) levels, and plasma glucose and glucagon-like peptide-1 (GLP-1) levels were compared by mixed effect model. Tolerability was assessed throughout the study by physical examination, including blood pressure and pulse rate measurements, 12-lead ECG, and laboratory analysis. RESULTS Baseline demographic characteristics were well balanced across the 4 treatment groups (mean [SD] age, 59.7 [6.4] years in the placebo group, 60.8 [9.2] years in the 0.5 mg group, 60.2 [6.4] years in the 2.5 mg group, and 59.1 [8.6] years in the 10 mg group; mean [SD] weight, 67.2 [10.0] kg in the placebo group, 64.5 [9.0] kg in the 0.5 mg group, 69.6 [9.4] kg in the 2.5 mg group, and 63.5 [12.2] kg in the 10 mg group; mean [SD] duration of T2DM diagnosis, 5.1 [4.2] years in the placebo group, 5.2 [4.7] years in the 0.5 mg group, 5.9 [4.8] years in the 2.5 mg group, and 2.6 [2.3] years in the 10 mg group). The majority of the patients treated were male (76.4%). 
Use of previous antidiabetic medication was more common in the 2.5 mg linagliptin group (44%) than in the 0.5 or 10 mg linagliptin (15.8% and 22.2%, respectively) or placebo groups (35.3%). Total systemic exposure in terms of linagliptin AUC and C(max) (which occurred at 1.25-1.5 hours) increased in a less than dose-proportional manner. The terminal half-life was long (223-260 hours) but did not reflect the accumulation half-life (10.0-38.5 hours), resulting in a moderate accumulation ratio of <2.9 that decreased with increasing dose. Urinary excretion increased with linagliptin doses but was <7% at steady state for all dose groups. Inhibition of plasma DPP-4 at 24 hours after the last dose on day 28 was approximately 45.8%, 77.8%, and 89.7% after linagliptin 0.5, 2.5, and 10 mg, respectively. At steady state, linagliptin was associated with dose-dependent increases in plasma GLP-1 levels, and the postprandial GLP-1 response was enhanced. Statistically significant dose-dependent reductions were observed in fasting plasma glucose levels at day 29 for all linagliptin groups (-11.5, -13.6, and -25.0 mg/dL for the 0.5, 2.5, and 10 mg groups, respectively; P < 0.05 for all linagliptin groups). Linagliptin also produced statistically significant dose-dependent reductions from baseline for glucose area under the effect curve over 3 hours after meal tolerance tests (-29.0 to -68.1 mg × h/dL; P < 0.05 for all 3 linagliptin groups). For the 0.5 and 10 mg linagliptin-treated groups, there were statistically significant reductions in HbA(1c) from baseline compared with placebo, despite the relatively low baseline HbA(1c) (7.2%) and small sample size (P < 0.01 for both groups). The greatest reduction in HbA(1c) (-0.44%) was seen in the highest linagliptin dose group (10 mg). On dosing for up to 28 days, linagliptin was well tolerated with no reported serious adverse events or symptoms suggestive of hypoglycemia. 
Overall, fewer adverse events were reported by patients after linagliptin than after placebo (11 of 55 [20%] vs 6 of 17 [35%]). CONCLUSIONS Linagliptin demonstrated a nonlinear pharmacokinetic profile in these Japanese patients with T2DM consistent with the findings of previous studies in healthy Japanese and white patients. Linagliptin treatment resulted in statistically significant and clinically relevant reductions in HbA(1c) as soon as 4 weeks after starting therapy in these Japanese patients with T2DM, suggesting that clinical studies of longer duration in Japanese T2DM patients are warranted.
A Review of Wearable Sensor Systems for Monitoring Body Movements of Neonates
Characteristics of physical movements are indicative of infants' neuro-motor development and brain dysfunction. For instance, infant seizure, a clinical signal of brain dysfunction, could be identified and predicted by monitoring its physical movements. With the advance of wearable sensor technology, including the miniaturization of sensors, and the increasing broad application of micro- and nanotechnology, and smart fabrics in wearable sensor systems, it is now possible to collect, store, and process multimodal signal data of infant movements in a more efficient, more comfortable, and non-intrusive way. This review aims to depict the state-of-the-art of wearable sensor systems for infant movement monitoring. We also discuss its clinical significance and the aspect of system design.
A Compact Monopole Antenna for Super Wideband Applications
A planar microstrip-fed super wideband monopole antenna is proposed. By embedding a semielliptical fractal-complementary slot into the asymmetrical ground plane, a 10-dB bandwidth of 172% (1.44-18.8 GHz) is achieved, with a ratio bandwidth >12:1. Furthermore, the proposed antenna also demonstrates a wide 14-dB bandwidth from 5.4 to 12.5 GHz, which is suitable for UWB outdoor propagation. The proposed antenna is able to cover the DVB-H in L-band (for PMP), DCS, PCS, UMTS, Bluetooth, WiMAX2500, LTE2600, and UWB bands.
Robust control of uncertain systems: Classical results and recent developments
This paper presents a survey of the most significant results on robust control theory. In particular, we study the modeling of uncertain systems, robust stability analysis for systems with unstructured uncertainty, robustness analysis for systems with structured uncertainty, and robust control system design including H∞ control methods. The paper also presents some more recent results on deterministic and probabilistic methods for systems with uncertainty. © 2014 Published by Elsevier Ltd.
Evaluation of the DSC algorithm and the BSS color scheme in dense cellular-like IEEE 802.11ax deployments
Coping with the extreme growth of the number of users is one of the main challenges for future IEEE 802.11 networks. The high interference level, along with the conventional standardized carrier sensing approaches, will degrade the network performance. To tackle these challenges, the Dynamic Sensitivity Control (DSC) scheme and the BSS Color scheme are considered in IEEE 802.11ax and IEEE 802.11ah, respectively. The main purpose of these schemes is to enhance the network throughput and improve the spectrum efficiency in dense networks. In this paper, we evaluate the DSC and the BSS Color scheme, along with the PARTIAL-AID (PAID) feature introduced in IEEE 802.11ac, in terms of throughput and fairness. We also explore the performance when the aforementioned techniques are combined. The simulations show a significant gain in total throughput when these techniques are applied.
Search Engines - Information Retrieval in Practice
This is the eBook of the printed book and may not include any media, website access codes, or print supplements that may come packaged with the bound book. Search Engines: Information Retrieval in Practice is ideal for introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. It is also a valuable tool for search engine and information retrieval professionals. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines. Coverage of the underlying IR and mathematical models reinforces key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open source search engine.
Flat Minima
We present a new algorithm for finding low-complexity neural networks with high generalization capability. The algorithm searches for a flat minimum of the error function. A flat minimum is a large connected region in weight space where the error remains approximately constant. An MDL-based, Bayesian argument suggests that flat minima correspond to simple networks and low expected overfitting. The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. Unlike many previous approaches, ours does not require gaussian assumptions and does not depend on a good weight prior. Instead we have a prior over input output functions, thus taking into account net architecture and training set. Although our algorithm requires the computation of second-order derivatives, it has backpropagation's order of complexity. Automatically, it effectively prunes units, weights, and input lines. Various experiments with feedforward and recurrent nets are described. In an application to stock market prediction, flat minimum search outperforms conventional backprop, weight decay, and optimal brain surgeon/optimal brain damage.
Examining the Impact of Feature Selection Methods on Text Classification
Feature selection, which aims to determine and select the distinctive terms that best represent a document, is one of the most important steps of classification. With feature selection, the dimension of the document vectors is reduced and, consequently, the duration of the process is shortened. In this study, feature selection methods were studied in terms of dimension reduction rates, classification success rates, and the relation between dimension reduction and classification success. As classifiers, kNN (k-Nearest Neighbors) and SVM (Support Vector Machines) were used. Five standard (Odds Ratio-OR, Mutual Information-MI, Information Gain-IG, Chi-Square-CHI and Document Frequency-DF), two combined (Union of Feature Selections-UFS and Correlation of Union of Feature Selections-CUFS) and one new (Sum of Term Frequency-STF) feature selection methods were tested. The application was performed by selecting 100 to 1000 terms (with an increment of 100 terms) from each class. kNN produced much better results than SVM. STF was found to be the most successful feature selection method considering the average values in both datasets. It was also found that CUFS, a combined model, reduces the dimension the most; accordingly, CUFS classified the documents more successfully with fewer terms and in a shorter period than many of the standard methods. Keywords—Feature selection; text classification; text mining; k-Nearest Neighbors; support vector machines
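The simplest of the standard criteria listed above, Document Frequency (DF), can be sketched in a few lines: terms are ranked by the number of documents containing them, and only the top-k survive into the reduced document vectors. This is my own toy illustration with invented documents, not code or data from the study:

```python
from collections import Counter

def select_by_df(docs, k):
    """Rank terms by document frequency (number of documents that
    contain the term at least once) and keep the top-k terms as
    the reduced feature set."""
    df = Counter()
    for doc in docs:
        df.update(set(doc.split()))  # set(): count each doc once per term
    return [term for term, _ in df.most_common(k)]

docs = [
    "cheap loans apply now",
    "meeting agenda for monday",
    "apply now for cheap offers",
    "monday meeting moved",
]
print(select_by_df(docs, 3))
```

Per-class selection, as in the study, would simply apply the same ranking within each class's documents; the other criteria (IG, CHI, MI, OR) replace the raw document count with a class-conditional score.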
Enhanced electric conductivity at ferroelectric vortex cores in BiFeO3
Topological defects in ferroic materials are attracting much attention both as a playground of unique physical phenomena and for potential applications in reconfigurable electronic devices. Here, we explore electronic transport at artificially created ferroelectric vortices in BiFeO3 thin films. The creation of one-dimensional conductive channels activated at voltages as low as 1 V is demonstrated. We study the electronic as well as the static and dynamic polarization structure of several topological defects using a combination of first-principles and phase-field modelling. The modelling predicts that the core structure can undergo a reversible transformation into a metastable twist structure, extending charged domain walls segments through the film thickness. The vortex core is therefore a dynamic conductor controlled by the coupled response of polarization and electron–mobile-vacancy subsystems with external bias. This controlled creation of conductive one-dimensional channels suggests a pathway for the design and implementation of integrated oxide electronic devices based on domain patterning.
Surgical treatment of secondary coxarthrosis in congenital hip dislocation (Crowe type IV)
The aim of therapy is the mechanical and functional stabilization of a high hip dislocation with dysplasia-related coxarthrosis by means of total hip arthroplasty. Hip dysplasia in adulthood. Advanced symptomatic coxarthrosis. High hip dislocation according to Crowe III/IV. Symptomatic leg length discrepancy. Cerebrospinal dysfunction. Muscular dystrophies. Manifest disorder of bone metabolism. Infection. Immunocompromised patients. Lateral decubitus position (variable). Straight lateral incision over the greater trochanter and approach between the anterior border of the gluteus maximus and the posterior border of the gluteus medius, the so-called Gibson interval. Exposure of the sciatic nerve and permanent control of it, in order to avoid a traction injury from increased tension during any manipulation, above all during reduction of the femur. Trochanteric flip osteotomy and exposure of the joint capsule between the piriformis and gluteus minimus muscles. Owing to the high-riding femur, identification of the anatomical landmarks may be difficult here. Stepwise Z-shaped capsulectomy of the elongated capsule and exposure of the true acetabulum, which is often only rudimentarily formed because of the long-standing dislocation of the femoral head. Localization of the acetabular floor and the teardrop figure. Reaming of the acetabulum and implantation of an acetabular reinforcement ring conforming to the acetabular size. Fixation of the cup with 3–5 screws, depending on purchase and bone quality. Resection of the proximal femur, depending on soft tissue tension, down to or even below the level of the lesser trochanter. Subsequent reaming of the femur and, under constant monitoring of sciatic nerve tension, reduction of the femur into the acetabulum; depending on the tension of the sciatic nerve and the surrounding soft tissues, further resection of the proximal femur is performed. Freshening of the proximal femur with a chisel for better consolidation of the greater trochanter fragment.
Refixation of the trochanter with cerclage wires; the gluteus medius musculature must be mobilized somewhat at its origin on the ilium or at the musculotendinous junction on the trochanteric fragment so that the greater trochanter can be refixed distally on the proximal femur. During hospitalization, regular treatment on a continuous passive motion machine with a maximum of 70° flexion. No active abduction, no passive adduction beyond the midline, no straight-leg raising; 10–15 kg partial weight bearing on two forearm crutches for 8 weeks. Depending on patient compliance and intraoperative stability of the prosthesis, the patient wears an anti-dislocation brace (in Bern, the Hohmann brace) for the first 6–8 weeks postoperatively. Thereafter, first clinical and radiological follow-up and, depending on the consolidation of the greater trochanter, stepwise transition to full weight bearing. Thrombosis prophylaxis until full weight bearing. In the observation period from 1998–2012, a total of 28 hip prostheses were implanted for high hip dislocation in adulthood, Crowe stage IV, treated with an acetabular reinforcement ring and a femoral shortening osteotomy. Clinical follow-up examinations are currently available for 14 patients after 8 ± 1.2 years (3.5–12 years). Over this mid-term observation period, 86% of the operated patients showed improvement according to the Merle d'Aubigné score. Good to excellent results were achieved in 71% of cases. Long-term results (> 10 years) are not yet available. The aim of the therapy is mechanical and functional stabilization of high dislocated hips with dysplasia coxarthrosis using total hip arthroplasty (THA). Developmental dysplasia of the hip (DDH) in adults, symptomatic dysplasia coxarthrosis, high hip dislocation according to Crowe type III/IV, and symptomatic leg length inequality.
Cerebrospinal dysfunction, muscular dystrophy, apparent disturbance of bone metabolism, acute or chronic infections, and immunocompromised patients. With the patient in a lateral decubitus position, an incision is made between the anterior border of the gluteus maximus muscle and the posterior border of the gluteus medius muscle (Gibson interval). Identification of the sciatic nerve to protect the nerve from traction injury by visual control. After performing the trochanteric flip osteotomy, preparation of the true acetabulum if possible. Implantation of the reinforcement ring, preparation of the femur and, if necessary for mobilization, resection down to the lesser trochanter. Test repositioning under control of the sciatic nerve. Finally, refixation of the trochanteric crest. During the hospital stay, intensive mobilization of the hip joint using a continuous passive motion machine with a maximum flexion of 70°. No active abduction and no passive adduction over the body midline. Maximum weight bearing of 10–15 kg for 8 weeks; subsequently, first clinical and radiographic follow-up, and deep venous thrombosis prophylaxis until full weight bearing. From 1995 to 2012, 28 THAs for Crowe type IV high hip dislocation were performed at our institute. Until now, 14 patients have been analyzed, with a follow-up of 8 years as of 2012. Mid-term results showed an improvement of the postoperative clinical score (Merle d'Aubigné score) in 86% of patients. Good to excellent results were obtained in 79% of cases. Long-term results are not yet available. In one case an iatrogenic neuropraxia of the sciatic nerve was observed, and in another case a redislocation of the arthroplasty occurred after trauma. In 2 cases an infection of the THA appeared 8 and 15 months after the index surgery. No pseudoarthrosis of the trochanter or aseptic loosening was noticed.
Robust odometry estimation for RGB-D cameras
The goal of our work is to provide a fast and accurate method to estimate the camera motion from RGB-D images. Our approach registers two consecutive RGB-D frames directly upon each other by minimizing the photometric error. We estimate the camera motion using non-linear minimization in combination with a coarse-to-fine scheme. To allow for noise and outliers in the image data, we propose to use a robust error function that reduces the influence of large residuals. Furthermore, our formulation allows for the inclusion of a motion model which can be based on prior knowledge, temporal filtering, or additional sensors like an IMU. Our method is attractive for robots with limited computational resources as it runs in real-time on a single CPU core and has a small, constant memory footprint. In an extensive set of experiments carried out both on a benchmark dataset and synthetic data, we demonstrate that our approach is more accurate and robust than previous methods. We provide our software under an open source license.
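The "robust error function that reduces the influence of large residuals" mentioned above can be illustrated with the classic Huber weighting used in iteratively reweighted least squares. The abstract does not name a specific function (some formulations of this method use a t-distribution instead), so this is one common choice shown for illustration, not necessarily the authors' exact estimator:

```python
def huber_weight(residual, k=1.345):
    """IRLS weight for the Huber robust cost: full weight for small
    residuals, decaying as k/|r| beyond the threshold k, so outlier
    pixels (occlusions, specularities) count less."""
    r = abs(residual)
    return 1.0 if r <= k else k / r

def weighted_sse(residuals, k=1.345):
    """Robustified photometric cost: sum of w(r) * r^2 over all
    per-pixel intensity differences between the two RGB-D frames."""
    return sum(huber_weight(r, k) * r * r for r in residuals)
```

Minimizing this weighted cost over the 6-DoF camera motion, within a coarse-to-fine pyramid, is the core of the registration loop the abstract describes.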
Negative refractive index in left-handed materials.
The real part of the refractive index n(omega) of a nearly transparent and passive medium is usually taken to have only positive values. Through an analysis of a current source radiating into a 1D "left-handed" material (LHM)-where the permittivity and permeability are simultaneously less than zero-we determine the analytic structure of n(omega), demonstrating frequency regions where the sign of Re[n(omega)] must, in fact, be negative. The regime of negative index, made relevant by a recent demonstration of an effective LHM, leads to unusual electromagnetic wave propagation and merits further exploration.
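The sign argument can be checked numerically: for a passive medium, each of √ε and √μ must be taken in the upper half of the complex plane (Im ≥ 0), and when Re ε and Re μ are both negative their product then has a negative real part. A small sketch of this branch selection (our own illustration, not code from the paper):

```python
import cmath

def refractive_index(eps, mu):
    """Branch-correct refractive index n = sqrt(eps) * sqrt(mu).

    For a passive medium each square root is chosen in the upper half
    plane (Im >= 0); the product then has Re(n) < 0 whenever Re(eps)
    and Re(mu) are both negative (a left-handed material)."""
    def upper_sqrt(z):
        r = cmath.sqrt(z)
        return r if r.imag >= 0 else -r
    return upper_sqrt(eps) * upper_sqrt(mu)
```

With ε = μ = -1 plus a tiny positive imaginary part (weak absorption), this yields n ≈ -1 with Im(n) ≥ 0, i.e. a negative-index, still-passive medium.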
Performance Analysis of Flooding Attack Prevention Algorithm in MANETs
The lack of any centralized infrastructure in mobile ad hoc networks (MANETs) is one of the greatest security concerns in the deployment of wireless networks. Communication in a MANET therefore functions properly only if the participating nodes cooperate in routing without any malicious intention. However, some nodes may behave maliciously by indulging in flooding attacks on their neighbors; others may launch active security attacks such as denial of service. This paper reviews related work on trust evaluation and establishment in ad hoc networks, as well as on flooding attack prevention. A new trust approach based on the extent of friendship between the nodes is proposed, which makes the nodes cooperate and prevents flooding attacks in an ad hoc environment. The performance of the trust algorithm is tested in an ad hoc network implementing the Ad hoc On-demand Distance Vector (AODV) protocol.
Keywords—AODV, Flooding, MANETs, trust estimation
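A minimal sketch of the flooding-prevention idea: rate-limit route requests (RREQs) per neighbour within an observation interval, and let a trust value (in the paper, derived from the extent of friendship between nodes) scale the allowed limit. The class name, API, and the linear trust scaling are our assumptions for illustration; the paper's actual trust computation is not reproduced here:

```python
from collections import defaultdict

class FloodGuard:
    """Per-neighbour RREQ rate limiter: count route requests seen in
    the current interval and blacklist any neighbour exceeding its
    trust-scaled threshold."""
    def __init__(self, base_limit=10):
        self.base_limit = base_limit
        self.counts = defaultdict(int)
        self.blacklist = set()

    def on_rreq(self, node, trust=1.0):
        """Return True to forward the RREQ, False to drop it."""
        if node in self.blacklist:
            return False                      # drop silently
        self.counts[node] += 1
        if self.counts[node] > self.base_limit * trust:
            self.blacklist.add(node)          # treat as a flooder
            return False
        return True

    def new_interval(self):
        """Reset counters at the start of each observation window."""
        self.counts.clear()
```

Highly trusted (friendly) neighbours get a proportionally higher limit, while an untrusted node that floods is cut off after a few requests.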
The MRC spine stabilization trial: surgical methods, outcomes, costs, and complications of surgical stabilization.
STUDY DESIGN A review of the surgical costs and results in a group of patients randomly allocated to surgery as part of a large prospective randomized trial of patients with chronic back pain. OBJECTIVE To report the observational data from the surgical arm of a randomized trial comparing surgery with intensive rehabilitation for chronic low back pain. Clinical and economic data are reported. SUMMARY OF BACKGROUND DATA Surgery for chronic low back pain is a well established but unproven intervention. The most cost-effective technique for spinal stabilization is still not established. METHODS One hundred seventy-six patients with chronic low back pain were randomized to the surgical group of a randomized trial comparing lumbar spinal fusion with a 3-week intensive rehabilitation program. The primary outcomes were the Oswestry Disability Index (ODI) and the Shuttle Walking Test measured at baseline and 2 years postrandomization. Patients were stratified by preoperative diagnosis, smoking habit, and litigation. Complications were assessed and costs analyzed. RESULTS Of the 176 surgical patients, 56 underwent postero-lateral fusion, 57 underwent interbody fusion, and 24 underwent flexible stabilization of the spine. The mean ODI for all patients in the surgical arm of the trial improved from a baseline of 46.5 (SD 14.6) to 34.2 (SD 21) at 2 years. Health care costs were higher (£3109 difference) for more complex procedures, and nearly 6 times as many early complications occurred with the more complex procedures. Smoking and unemployment were associated with worse results, whereas litigation did not adversely affect the outcome. CONCLUSION These observational changes in the ODI after surgery are similar to those reported from other studies of spinal fusion. More complex surgery is more expensive, with more complications, than postero-lateral fusion.
The antitumor potential of Interleukin-27 in prostate cancer
Prostate cancer (PCa) is of increasing significance worldwide as a consequence of the population ageing. Fragile elderly patients may particularly benefit from noninvasive and well tolerable immunotherapeutic approaches. Preclinical studies have revealed that the immune-regulatory cytokine IL-27 may exert anti-tumor activities in a variety of tumor types without discernable toxicity. We, thus, investigated whether IL-27 may function as anti-tumor agent in human (h) PCa and analyzed the rationale for its clinical application. In vitro, IL-27 treatment significantly inhibited proliferation and reduced the angiogenic potential of hPCa cells by down-regulating the pro-angiogenesis-related genes fms-related tyrosine kinase (FLT)1, prostaglandin G/H synthase 1/cyclooxygenase-1 (PTGS1/COX-1) and fibroblast growth factor receptor (FGFR)3. In addition, IL-27 up-regulated the anti-angiogenesis-related genes such as CXCL10 and TIMP metallopeptidase inhibitor 3 (TIMP3). In vivo, IL-27 reduced proliferation and vascularization in association with ischemic necrosis of tumors developed after PC3 or DU145 cell injection in athymic nude mice. In patients' prostate tissues, IL-27R was expressed by normal epithelia and low grade PCa and lost by high tumor grade and stages. Nevertheless, IL-27R was expressed by CD11c(+), CD4(+) and CD8(+) leukocytes infiltrating the tumor and draining lymph nodes. These data lead to the conclusion that i) IL-27's anti-PCa potential may be fully exploited in patients with well-differentiated, localized IL-27R positive PCa, since in this case it may act on both cancerous epithelia and the tumor microenvironment; ii) PCa patients bearing high grade and stage tumor that lack IL-27R may benefit, however, from IL-27's immune-stimulatory properties.
A Wide-tuning-range VCO with Small VCO-gain Fluctuation for Multi-band W-CDMA RFIC
A novel wide-tuning-range LC-tuned voltage-controlled oscillator (LC-VCO) - featuring small VCO-gain (KVCO) fluctuation - was developed. For small KVCO fluctuation, a serial LC-resonator that consists of an inductor, a fine-tuning varactor, and a capacitor bank was added to a conventional parallel LC-resonator that uses a capacitor bank scheme. The resonator was applied to a 3.9-GHz VCO for multi-band W-CDMA RFIC fabricated with 0.25-μm Si-BiCMOS technology. The VCO exhibited KVCO fluctuation of only 21%, which is one third that of a conventional VCO, with 34% tuning range. The VCO also exhibited a low phase noise of -121 dBc/Hz at 1-MHz offset frequency and a low current consumption of 4.0 mA.
Arthroscopic rotator cuff repair: prospective evaluation with sequential ultrasonography.
BACKGROUND Recent studies have demonstrated predictable healing after arthroscopic rotator cuff repair at a single time point, but few studies have evaluated tendon healing over time. HYPOTHESIS Rotator cuff tears that are intact on ultrasound at 1 time point will remain intact, and clinical results will improve regardless of healing status. STUDY DESIGN Cohort study; Level of evidence, 3. METHODS The Arthroscopic Rotator Cuff Registry was established to determine the effectiveness of arthroscopic rotator cuff repair with clinical outcomes using the American Shoulder and Elbow Surgeons score and ultrasound at 1 and 2 years, postoperatively. Patients were assigned to 1 of 3 groups based on ultrasound appearance: group 1, rotator cuff tendon intact at 1 and 2 years (n = 63); group 2, rotator cuff tendon defect at 1 and 2 years (n = 23); group 3, rotator cuff tendon defect at 1 year but no defect at 2 years (n = 7). RESULTS The ultrasound appearance was consistent at 1 and 2 years for 86 of the 93 patients (92.5%). The patients in group 1 had a significantly lower mean age (57.8 +/- 9.8 years) than the patients of group 2 (63.6 +/- 8.6 years; P = .04). Group 2 had a significantly greater rotator cuff tear size (4.36 +/- 1.6 cm) than group 1 (2.84 +/- 1.1 cm; P = .00025). Each group had a significant improvement in American Shoulder and Elbow Surgeons scores from baseline to 2-year follow-up. CONCLUSION All intact rotator cuff tendons at 1 year remained intact at 2 years. A small group of patients with postoperative imaging did not appear healed by ultrasound at 1 year but did so at 2 years. Patients demonstrated improvement in American Shoulder and Elbow Surgeons shoulder scores, range of motion, and strength, regardless of tendon healing status on ultrasound.
Supply Chain Management and Retailing
Retailers are now the dominant partners in most supply systems and have used their positions to re-engineer operations and partnerships with suppliers and other logistics service providers. No longer are retailers the passive recipients of manufacturer allocations; instead they are the active channel controllers, organizing supply in anticipation of, and reaction to, consumer demand. This paper reflects on the ongoing transformation of retail supply chains and logistics. It considers this transformation through an examination of the fashion, grocery and selected other retail supply chains, drawing on practical illustrations. Current and future challenges are then discussed. Introduction Retailers were once the passive recipients of products allocated to stores by manufacturers in the hope of purchase by consumers and replenished only at the whim and timing of the manufacturer. Today, retailers are the controllers of product supply in anticipation of, and reaction to, researched, understood, and real-time customer demand. Retailers now control, organise, and manage the supply chain from production to consumption. This is the essence of the retail logistics and supply chain transformation that has taken place since the latter part of the twentieth century. Retailers have become the channel captains and set the pace in logistics. Having extended their channel control and focused on corporate efficiency and effectiveness, retailers have
Detecting Faces Using Region-based Fully Convolutional Networks
Face detection has achieved great success using region-based methods. In this report, we propose a region-based face detector applying deep networks in a fully convolutional fashion, named Face R-FCN. Based on Region-based Fully Convolutional Networks (R-FCN), our face detector is more accurate and computationally efficient than previous R-CNN based face detectors. In our approach, we adopt the fully convolutional Residual Network (ResNet) as the backbone network. In particular, we exploit several new techniques, including position-sensitive average pooling, multi-scale training and testing, and an online hard example mining strategy, to improve detection accuracy. On the two most popular and challenging face detection benchmarks, FDDB and WIDER FACE, Face R-FCN achieves superior performance over the state of the art.
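Online hard example mining, one of the techniques named above, can be sketched as keeping only the highest-loss fraction of proposals in each training batch so that gradients focus on the hardest examples. This is a generic sketch of the idea (the detector's actual implementation involves further details, such as suppressing overlapping hard examples, that are omitted here):

```python
def hard_example_mining(losses, keep_ratio=0.25):
    """Online hard example mining: given per-proposal losses for one
    batch, return the (sorted) indices of the top-loss proposals that
    should contribute to the backward pass."""
    k = max(1, int(len(losses) * keep_ratio))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return sorted(ranked[:k])
```

Easy background proposals, which dominate a detection batch, are thereby excluded from the gradient computation.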
Central vertigo and dizziness: epidemiology, differential diagnosis, and common causes.
BACKGROUND Dizziness is a common complaint among patients seen by primary care physicians, neurologists, and otolaryngologists. The most common causes of dizziness are peripheral vestibular disorders, but central nervous system disorders must be excluded. This article provides an overview of the epidemiology of dizziness, differentiating between central and peripheral vertigo, and central causes of dizziness. REVIEW SUMMARY Dizziness is among the most common complaints in medicine, affecting approximately 20% to 30% of persons in the general population. Dizziness is a general term for a sense of disequilibrium. Vertigo is a subtype of dizziness, defined as an illusion of movement caused by asymmetric involvement of the vestibular system. Central vestibular lesions affecting the pons, medulla, or cerebellum cause vertigo, nausea, vomiting, severe ataxia, multidirectional nystagmus that is not suppressed by optic fixation, and other neurologic signs. The other types of dizziness are dysequilibrium without vertigo, presyncope, and psychophysiologic dizziness, which is often associated with anxiety, depression, and panic disorder. CONCLUSIONS Epidemiologic studies indicate that central causes are responsible for almost one-fourth of the dizziness experienced by patients. The patient's history, neurologic examination, and imaging studies are usually the key to differentiation of peripheral and central causes of vertigo. The most common central causes of dizziness and vertigo are cerebrovascular disorders related to the vertebrobasilar circulation, migraine, multiple sclerosis, tumors of the posterior fossa, neurodegenerative disorders, some drugs, and psychiatric disorders.
The labor economics of paid crowdsourcing
We present a model of workers supplying labor to paid crowdsourcing projects. We also introduce a novel method for estimating a worker's reservation wage - the key parameter in our labor supply model. We tested our model by presenting experimental subjects with real-effort work scenarios that varied in the offered payment and difficulty. As predicted, subjects worked less when the pay was lower. However, they did not work less when the task was more time-consuming. Interestingly, at least some subjects appear to be "target earners," contrary to the assumptions of the rational model. The strongest evidence for target earning is an observed preference for earning total amounts evenly divisible by 5, presumably because these amounts make good targets. Despite its predictive failures, we calibrate our model with data pooled from both experiments. We find that the reservation wages of our sample are approximately log-normally distributed, with a median wage of $1.38/hour. We discuss how to use our calibrated model in applications.
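Under the log-normal fit reported above, the labor supply decision is a simple threshold rule: a worker accepts a task iff the offered wage exceeds her reservation wage, so the participation rate at wage w is the log-normal CDF at w. A sketch of this relationship (σ below is an illustrative value; the abstract reports only the $1.38/hour median, which pins down μ = ln 1.38):

```python
import math

def participation_rate(wage, mu, sigma):
    """P(reservation wage <= wage) when ln(reservation wage) is
    N(mu, sigma^2): the fraction of workers who accept the task."""
    z = (math.log(wage) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# The reported $1.38/hour median fixes mu = ln(1.38); at an offered
# wage equal to the median, exactly half the workers supply labor,
# regardless of sigma.
mu = math.log(1.38)
```

Raising the offered wage above the median pulls in more than half the worker pool, which is the comparative static the experiments test.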
Using machine learning techniques for grapheme to phoneme transcription
The renewed interest in grapheme to phoneme conversion (G2P), due to the need of developing multilingual speech synthesizers and recognizers, suggests new approaches more efficient than the traditional rule-and-exception ones. A number of studies have been performed to investigate the possible use of machine learning techniques to extract phonetic knowledge in an automatic way starting from a lexicon. In this paper, we present the results of our experiments in this research field. Starting from the state of the art, our contribution is the development of a language-independent learning scheme for G2P based on Classification and Regression Trees (CART). To validate our approach, we realized G2P converters for the following languages: British English, American English, French and Brazilian Portuguese.
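The standard CART-based G2P setup can be sketched in two pieces: each grapheme is featurized by a context window of its neighbours, and the tree chooses splits that reduce the entropy of the phoneme labels. The sketch below assumes a 1:1 grapheme–phoneme alignment (real lexicons need a many-to-many alignment step first) and shows only the featurization and the impurity measure, not a full tree:

```python
import math
from collections import Counter

def windows(word, phones, width=1):
    """Pair each grapheme with its phoneme (1:1 alignment assumed) and
    emit a context window of `width` graphemes on each side as the
    feature vector, padding word boundaries with '#'."""
    pad = "#" * width
    padded = pad + word + pad
    for i, ph in enumerate(phones):
        yield tuple(padded[i:i + 2 * width + 1]), ph

def entropy(labels):
    """Shannon entropy of the phoneme labels: the impurity a
    CART-style G2P tree minimises when choosing a split."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())
```

Training then greedily picks, at each node, the context question whose split yields the largest entropy drop, which is what makes the scheme language-independent: only the lexicon changes.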
Automated Optic Disc Detection in Retinal Images of Patients with Diabetic Retinopathy and Risk of Macular Edema
In this paper, a new automated methodology to detect the optic disc (OD) in retinal images from patients at risk of being affected by Diabetic Retinopathy (DR) and Macular Edema (ME) is presented. The detection procedure comprises two independent methodologies. On one hand, a location methodology obtains a pixel that belongs to the OD using image contrast analysis and structure filtering techniques; on the other hand, a boundary segmentation methodology estimates a circular approximation of the OD boundary by applying mathematical morphology, edge detection techniques and the Circular Hough Transform. The methodologies were tested on a set of 1200 images composed of 229 retinographies from patients affected by DR with risk of ME, 431 with DR and no risk of ME, and 540 images of healthy retinas. The location methodology achieved a 98.83% success rate, whereas the OD boundary segmentation methodology obtained a good circular OD boundary approximation in 94.58% of cases. The average computational time measured over the total set was 1.67 seconds for OD location and 5.78 seconds for OD boundary segmentation.
Keywords—Diabetic retinopathy, macular edema, optic disc, automated detection, automated segmentation.
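The Circular Hough Transform step can be sketched in its simplest form: each edge pixel votes for every candidate centre lying at the known radius, and the accumulator peak is the estimated centre. A real implementation, like the one described above, also sweeps over a range of candidate radii and works on thinned edge maps; this minimal version fixes the radius for illustration:

```python
from collections import Counter

def circular_hough(edge_points, radius):
    """Vote for circle centres: every edge point (x, y) votes for all
    integer centres at distance `radius`; the most-voted accumulator
    cell is returned as the best centre estimate."""
    votes = Counter()
    for x, y in edge_points:
        for dx in range(-radius, radius + 1):
            dy = int(round((radius * radius - dx * dx) ** 0.5))
            votes[(x + dx, y + dy)] += 1
            votes[(x + dx, y - dy)] += 1
    (cx, cy), _ = votes.most_common(1)[0]
    return cx, cy
```

Points sampled on the OD boundary all vote for the true centre, so the accumulator peaks there even when some edge pixels are spurious.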
Context-Aware Mobile Learning
Recent developments on mobile devices and networks enable new opportunities for mobile learning anywhere, anytime. Furthermore, recent advances on adaptive learning establish the foundations for personalized learning adapted to the characteristics of each individual learner. A mobile learner would perform an educational activity using the infrastructure (e.g. handheld devices, networks) in an environment (e.g. outdoors). In order to provide personalization, an adaptation engine adapts the educational activity and the infrastructure according to the context. The context is described by the learner’s state, the educational activity’s state, the infrastructure’s state, and the environment’s state. Furthermore, each one of these states is described by its dimensions. Many examples illustrate the adaptation decisions.
Two-year follow-up after percutaneous coronary intervention with titanium-nitride-oxide-coated stents versus paclitaxel-eluting stents in acute myocardial infarction.
BACKGROUND AND AIMS The aim of this study was to evaluate the long-term effects of the titanium-nitride-oxide-coated (TITANOX) stent and the paclitaxel-eluting stent (PES) in patients who had undergone a percutaneous coronary intervention for acute myocardial infarction (MI). METHODS AND RESULTS The TITAX-AMI trial randomly assigned 425 patients with MI to receive either a TITANOX stent or a PES. The primary end-point was a composite of MI, target lesion revascularization, or death from cardiac causes. At 12 months, there was no significant difference between patients receiving TITANOX stent or PES in the rate of primary end-point (10.3% versus 12.8%, P=0.5). After 2 years of follow-up, a significantly lower rate of primary end-point was observed in the TITANOX stent group compared with the PES group (11.2% versus 21.8%, HR 2.2, 95% confidence interval (CI) 1.3-3.8, P=0.004). This difference was driven by a reduced rate of MI (5.1% versus 15.6%, P<0.001) and cardiac death (0.9% versus 4.7%, P=0.02) in favour of the TITANOX stent. Definite stent thrombosis occurred in 0.5% and 6.2% of the patients (P=0.001), respectively. CONCLUSIONS The implantation of a TITANOX stent resulted in better clinical outcome compared with a PES during 2 years of follow-up among patients treated for acute MI.
The Effects of Violent Video Game Habits on Adolescent Aggressive Attitudes and Behaviors
Video games have become one of the favorite activities of children in America. A growing body of research links violent video game play to aggressive cognitions, attitudes, and behaviors. This study tested the predictions that exposure to violent video game content is (1) positively correlated with hostile attribution bias, (2) positively correlated with arguments with teachers and physical fights, and negatively correlated with school performance, and (3) positively correlated with hostility. A total of 607 8th- and 9th-grade students from four schools participated. Each prediction was supported. Youth who expose themselves to greater amounts of video game violence see the world as a more hostile place, are more hostile themselves, get into arguments with teachers more frequently, are more likely to be involved in physical fights, and perform more poorly in school. Video game violence exposure is a significant predictor of physical fights even when respondent sex, hostility level, and weekly amount of game play are statistically controlled. It is suggested that video game violence is a risk factor for aggressive behavior. The results also suggest that parental involvement in video game play may act as a protective factor for youth. Results are interpreted within and support the framework of the General Aggression Model. 1 Address correspondence to: Douglas A. Gentile, Ph.D., National Institute on Media and the Family, 606 24th Avenue South, Suite 606, Minneapolis, MN 55454. Phone: 612/672-5437; Fax: 612/672-4113; E-mail: [email protected] The Popularity of Video Games Video games have become one of the favorite activities of children in America (Dewitt, 1994). Sales have grown consistently, with the entire electronic entertainment category taking in between $7 billion and $7.5 billion in 1999, surpassing theatrical box office revenues for the first time (“Come in and Play,” 2000). Worldwide video game sales are now at $20 billion (Cohen, 2000). 
Over 100 million Gameboys and 75 million PlayStations have been sold (Kent, 2000). The average American child between the ages of 2 and 17 plays video games for 7 hours a week (Gentile & Walsh, under review). A study by Buchman and Funk (1996) highlighted the differences between boys and girls, reporting that fourth- through eighth-grade boys played video games for 5 to 10 hours a week while girls played for 3 to 6 hours a week. Using industry polls, Provenzo (1991) studied the most popular Nintendo video games in America and found that 40 of the 47 had violence as their main theme. In another study (Buchman & Funk, 1996) in which video games were split into six categories, human and fantasy violence accounted for about 50% of children’s favorite games, with sports violence contributing another 16–20% for boys and 6–15% for girls. Research On Video Games and Aggression Many observant parents agree that the effects of violent video games are probably deleterious to children; however, they generally believe that their own children will be unaffected. This may just be bias on their part, or they may be correct. Research has shown that not all children are affected in the same way by violent video games (Anderson & Dill, 2000; Lynch, 1994; Lynch, 1999). While the literature connecting video game violence and aggression is growing, much of the research that has been done on video games to date has not taken into consideration the effect of pre-existing hostility or aggression. Several correlational studies (e.g., Anderson & Dill, 2000; Colwell & Payne, 2000; Dominick, 1984; Lin & Lepper, 1987; Fling, Smith, Rodriguez, Thornton, Atkins, & Nixon, 1992) have investigated the effects of video game habits and found a positive correlation between video game habits and an increase in aggressive behavior. However, few studies have differentiated between violent and non-violent video games. 
Fewer still have looked at differences in the subjects' preexisting hostility or aggression. A growing number of experimental studies (e.g., Cooper & Mackie, 1986; Silvern & Williamson, 1987; Schutte, Malouff, Post-Gorden, & Rodasta, 1988; Irwin & Gross, 1995; Anderson & Dill, 2000) have shown support for the hypothesis that violent video games lead to an increase in laboratory aggression. A meta-analytic study (Anderson & Bushman, in press-a) found that, across 54 independent tests of the relation between video game violence and aggression, involving 4,262 participants, the average effect size was both positive and significant. The General Aggression Model The General Aggression Model (GAM) and its relation to violent video games has been described by Anderson and Dill (2000). The GAM seeks to explain aggressive behavior in children after playing violent video games. This model describes a “multi-stage process by which personological (e.g., aggressive personality) and situational (e.g., video game play and provocation) input variables lead to aggressive behavior. They do so by influencing several related internal states and the outcomes of automatic and controlled appraisal (or decision) processes” (Anderson & Dill, 2000, p. 773). The GAM is relevant to the study of violent video games for several reasons. One reason is that it differentiates between short and long term effects of video game violence on the game-player. With regard to the short-term effects of violent video games, the GAM predicts that both kinds of input variables, person and situation, can influence the present internal state of the person. The GAM further describes the internal state of a person with cognitive, affective, and arousal variables. 
Summarizing the GAM’s predictions for the effects of violent video games on children’s behavior, Anderson and Dill drew the following conclusions: “Short-term violent video game increases in aggression are expected by [the model] whenever exposure to violent media primes aggressive thoughts, increases hostile feeling, or increases arousal” (Anderson & Dill, 2000, p. 774). The GAM describes the long term effects of violent video games as a result of the development, over-learning, and reinforcement of aggression-related knowledge structures. These knowledge structures include vigilance for enemies (i.e., hostile attribution bias), aggressive action against others, expectations that others will behave aggressively, positive attitudes towards the use of violence, and the belief that violent solutions are effective and appropriate. Repeated exposure to graphic scenes of violence is also postulated to be desensitizing. Furthermore, it is predicted that long term game-players become more aggressive in outlook, perceptual biases, attitudes, beliefs, and behavior than they were before the repeated exposure. Two studies were conducted to test the efficacy of the GAM in predicting aggression from violent video game play (Anderson & Dill, 2000). In the first study, it was found that real-life video game play was positively related to aggressive behavior and delinquency (long-term effects). The relationship was stronger for individuals who were characteristically aggressive. In addition, amount of video game play was negatively related to school performance. In the second study, laboratory exposure to a graphically violent video game increased aggressive thoughts and behavior (short-term effects), although there was no moderating effect of hostility (i.e., aggressive personality). Both of these studies were consistent with the main hypotheses regarding the GAM and video game violence. 
Lynch’s research on the physiological effects of violent video games (Lynch, 1994; Lynch, 1999) lends further credibility to the GAM. Lynch's results are consistent with a recent meta-analysis of seven independent tests showing that blood pressure and heart rate increase with exposure to violent video games (Anderson & Bushman, in press-a). This research demonstrates that hostility in adolescence is directly related to physiological reactivity to violent video games. It also demonstrates the efficacy of the GAM for predicting arousal measures, one of the three internal states described by the GAM that may lead to aggression. The GAM also predicts that long-term effects of violent video games will appear in a number of other areas, including hostile attribution bias, desensitization, and aggressive behaviors (such as physical fights). Children who tend to interpret ambiguous social cues as being of hostile intent (i.e., have a hostile attribution bias) are hypothesized to be more aggressive. This hypothesized relationship has been confirmed consistently across a wide range of samples ranging from early childhood through adulthood, and across a number of studies (e.g., Crick & Dodge, 1994; Dill, Anderson, Anderson, & Deuser, 1997). Furthermore, there is a robust relationship between hostile attribution bias and children’s social maladjustment, such as depression, negative self-perceptions, and peer rejection (Crick, 1995). Based on the GAM, we predict that long-term exposure to violent video games (or other violent media) may create a predisposition to interpret others’ actions as having malignant intent. Following this logic, if children come to have a greater hostile attribution bias from repeated, extended exposure to violent video games over time, it is also likely that they would become engaged in more aggressive behaviors such as arguments and physical fights. The current research is designed to test four hypotheses. 
First, video game violence exposure is positively correlated with seeing the world as a more hostile place (hostile attribution bias). Second, video game violence exposure is positively correlated with arguments with teachers and physical fights, and is negatively correlated with academic performance. Third, trait hostility will be positively correlated with video game violence exposure. Fourth, limiting the amount of violent video game play, either
On active learning of record matching packages
We consider the problem of learning a record matching package (classifier) in an active learning setting. In active learning, the learning algorithm picks the set of examples to be labeled, unlike the more traditional passive learning setting where a user selects the labeled examples. Active learning is important for record matching since manually identifying a suitable set of labeled examples is difficult. Previous algorithms that use active learning for record matching have serious limitations: the packages that they learn lack quality guarantees, and the algorithms do not scale to large input sizes. We present new algorithms for this problem that overcome these limitations. Our algorithms are fundamentally different from traditional active learning approaches, and are designed from the ground up to exploit problem characteristics specific to record matching. We include a detailed experimental evaluation on real-world data demonstrating the effectiveness of our algorithms.
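The active learning setting the abstract describes, where the algorithm rather than the user picks which example pairs a human labels, can be sketched in a few lines. The sketch below is a toy illustration (uncertainty sampling over a single similarity score, with an assumed labeling oracle), not the algorithm proposed in the paper:

```python
# Toy active learner for record matching: each candidate record pair is
# reduced to one similarity score, and the classifier is a threshold on it.
# The learner queries the human "oracle" only about the most uncertain pair.

def oracle(similarity):
    # Stand-in for a human labeler; assume true matches score >= 0.62.
    return similarity >= 0.62

def learn_threshold(similarities, budget=5):
    lo, hi = 0.0, 1.0          # region where the true threshold may lie
    labeled = {}
    for _ in range(budget):
        mid = (lo + hi) / 2
        # Most uncertain unlabeled pair = the one closest to the midpoint.
        candidates = [s for s in similarities if s not in labeled and lo <= s <= hi]
        if not candidates:
            break
        query = min(candidates, key=lambda s: abs(s - mid))
        labeled[query] = oracle(query)
        if labeled[query]:
            hi = query         # matches lie above the true threshold
        else:
            lo = query
    return (lo + hi) / 2       # learned decision threshold

sims = [0.05, 0.2, 0.35, 0.5, 0.6, 0.65, 0.7, 0.85, 0.95]
threshold = learn_threshold(sims)
```

With only five queries the learner brackets the (assumed) true threshold of 0.62, which is exactly the economy active learning is after: labels are spent only where the classifier is uncertain.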
Protocol Verification in the Context of IoT Based on the E Theorem Prover
We present in this short communication “E Theorem Prover and Verification of Smart Devices & Protocols Based on IoT Environments – A Novel Suggestion on an Informatics Framework of Smart Devices involving Hardware, Firmware & Software Verification.” Keywords: E Theorem Prover / smart devices / hardware / firmware / software / IoT. Introduction & Inspiration: Based on the information contained in the published literature, it is inspiring to use novel methodologies to probe hardware, firmware, and software in innovative ways. The E Theorem Prover is an excellent high-performance tool for designing, developing, implementing, and testing such methodologies. We wish to highlight the importance of the E Theorem Prover in the context of smart devices and their testing. Since the E Theorem Prover is developed in C, it is well suited to testing and verifying smart devices. Refs. [1-9]. “E is a high performance theorem prover for full first-order logic with equality. It is based on the equational superposition calculus and uses a purely equational paradigm. It has been integrated into other theorem provers and it has been among the best-placed systems in several theorem proving competitions. E is developed by Stephan Schulz, originally in the Automated Reasoning Group at TU Munich.” Source: https://en.wikipedia.org/wiki/E_theorem_prover Informatics Framework & Implementation: Figure I. Approximate informatics framework to probe smart devices, protocols, IoT, and algorithms (based on Refs. [1-9] and additional information on the software used). Please note: in Figures I and II, the actual connections and interfacing with the computing environments or testing methodologies concerned will vary to some extent. Readers are advised to check the requirements before using them. This is an attempt to inspire others to use theorem provers in verification. 
For example, we would like to suggest a heuristics-based approach to designing, implementing, and testing verification methodologies involving hardware, firmware, software, and IoT concepts. For more information, please refer to the additional information links and the references mentioned in this paper. Future Direction to Implement Verification Methodologies (extending our testing/verification framework using Python-based environments): Figure II. Approximate informatics framework based on Python and the E Prover to implement a “complex test environment” for large-scale projects. Source of inspiration: https://www.design-reuse.com/articles/15886/a-phyton-based-soc-validationand-test-environment.html
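As a concrete illustration of the kind of check the communication envisions, the sketch below generates a tiny first-order problem in TPTP syntax, the input format E reads. The device names and predicates are purely illustrative placeholders, not drawn from any real device specification:

```python
# A minimal first-order "sanity property" written in TPTP fof() syntax:
# every smart device in the model must have verified firmware, and one
# concrete device (a hypothetical thermostat) is declared smart.
problem = """\
fof(firmware_rule, axiom, ![X]: (smart_device(X) => has_verified_firmware(X))).
fof(thermostat_fact, axiom, smart_device(thermostat_1)).
fof(goal, conjecture, has_verified_firmware(thermostat_1)).
"""

with open("device_check.p", "w") as handle:
    handle.write(problem)

# The file can then be checked from the shell, for example:
#   eprover --auto device_check.p
# which should report that a proof was found for this easy conjecture.
```

Real device models would of course be far larger, but the workflow is the same: encode the device rules as axioms, the property under test as a conjecture, and let E search for a proof.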
" Sam " Neurofeedback in Der Kinder-und Jugendpsychiatrie Göttingen Englisch: Attention-deficit Hyperactivity Disorder (adhd)
Attention-deficit/hyperactivity disorder (ADHD) is one of the most common psychiatric disorders of childhood and adolescence. As essential characteristics (of varying severity), affected children show: a lack of concentration and persistence, and a tendency to switch from one activity to another without finishing anything (inattention); a marked lack of cognitive and emotional impulse control (impulsivity); and/or excessive, generally elevated motor activity (hyperactivity/motor restlessness).
[Figure: layered robot soccer control architecture. A role allocation layer assigns the goalie, attacker, and defender roles; each role's action selection layer chooses between a vector-field action and a shoot action, with the goalie having its own action set.]
The robot soccer system is being used as a test bed to develop the next generation of field robots. In a multi-agent system, action selection is important for cooperation and coordination among agents. There are many techniques for choosing a proper action for an agent. As the environment is dynamic, reinforcement learning is more suitable than supervised learning. Reinforcement learning is based on trial and error through experience, and for this reason it has been applied to many practical problems. However, a straightforward application of a reinforcement learning algorithm may not successfully scale up to more complex multi-agent learning problems. To solve this problem, modular Q-learning is employed for multi-agent learning. A modified uni-vector field is used for robot navigation. This paper discusses the Y2K2 NaroSot (Nano-Robot World Cup Soccer Tournament) robot soccer system.
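The modular Q-learning idea the abstract mentions, one Q-table per sub-problem with a mediator combining them, can be sketched as follows. This is an illustrative toy with made-up states, actions, and rewards, not the NaroSot team's actual controller:

```python
from collections import defaultdict

ACTIONS = ("move_left", "move_right", "shoot")

def q_update(q, s, a, r, s2, alpha=0.5, gamma=0.9):
    """Standard tabular Q-learning update for one module."""
    best_next = max(q[(s2, b)] for b in ACTIONS)
    q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])

def select_action(modules, states):
    """Mediator: pick the action maximizing the summed module Q-values."""
    return max(ACTIONS, key=lambda a: sum(q[(s, a)] for q, s in zip(modules, states)))

# Two toy modules: one rewards shooting when near the goal, one rewards
# dodging a defender on the left. Each learns from its own reward signal.
shoot_q = defaultdict(float)
avoid_q = defaultdict(float)
for _ in range(20):
    for a in ACTIONS:
        q_update(shoot_q, "near_goal", a, 2.0 if a == "shoot" else 0.0, "near_goal")
        q_update(avoid_q, "defender_left", a, 1.0 if a == "move_right" else 0.0, "defender_left")

chosen = select_action([shoot_q, avoid_q], ["near_goal", "defender_left"])
```

Splitting the problem this way is what lets the method scale: each module's state space covers only its own sub-problem, instead of one monolithic table over the joint state of every agent on the field.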
Predictive Effects of Novelty Measured by Temporal Embeddings on the Growth of Scientific Literature
Novel scientific knowledge is constantly produced by the scientific community. Understanding the level of novelty characterized by scientific literature is key for modeling scientific dynamics and analyzing the growth mechanisms of scientific knowledge. Metrics derived from bibliometrics and citation analysis were effectively used to characterize the novelty in scientific development. However, time is required before we can observe links between documents such as citation links or patterns derived from the links, which makes these techniques more effective for retrospective analysis than predictive analysis. In this study, we present a new approach to measuring the novelty of a research topic in a scientific community over a specific period by tracking semantic changes of the terms and characterizing the research topic in their usage context. The semantic changes are derived from the text data of scientific literature by temporal embedding learning techniques. We validated the effects of the proposed novelty metric on predicting the future growth of scientific publications and investigated the relations between novelty and growth by panel data analysis applied in a large-scale publication dataset (MEDLINE/PubMed). Key findings based on the statistical investigation indicate that the novelty metric has significant predictive effects on the growth of scientific literature and the predictive effects may last for more than ten years. We demonstrated the effectiveness and practical implications of the novelty metric in three case studies. ∗[email protected], [email protected]. Department of Information Science, Drexel University. arXiv:1801.09121v1 [cs.DL] 27 Jan 2018
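The core measurement the abstract describes, tracking how far a term's usage context drifts between time periods, can be illustrated with a simple cosine-distance sketch. The vectors below are made up; in the paper, they come from temporally aligned embeddings learned from MEDLINE text:

```python
import math

# Illustrative novelty measure: the cosine distance between a term's
# embedding vector at two time periods.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def novelty(vec_then, vec_now):
    # 0.0 = usage context unchanged; larger values = larger semantic shift.
    return 1.0 - cosine(vec_then, vec_now)

stable_term = novelty([1.0, 0.0, 0.5], [0.9, 0.1, 0.5])   # usage barely moved
shifted_term = novelty([1.0, 0.0, 0.5], [0.0, 1.0, 0.2])  # usage moved a lot
```

A term whose embedding drifts sharply between periods scores high on this measure, which is the signal the study then relates to the future growth of the corresponding literature.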
Study on the feasibility of LoRaWAN for smart city applications
Recent research on Low Power Wide Area Network (LPWAN) technologies, which provide the capability of serving massive numbers of low-power devices simultaneously, has attracted considerable attention. The LoRaWAN standard is one of the most successful developments, and commercial pilots are seen in many countries around the world. However, the feasibility of large-scale deployments, for example for smart city applications, needs to be further investigated. This paper provides a comprehensive case study of LoRaWAN to show the feasibility, scalability, and reliability of LoRaWAN in realistic simulated scenarios, from both technical and economic perspectives. We develop a Matlab-based LoRaWAN simulator to offer a software approach to performance evaluation. A practical LoRaWAN network covering the Greater London area is implemented, and its performance is evaluated on two typical city monitoring applications. We further present an economic analysis and develop business models for such networks, in order to provide a guideline for commercial network operators, IoT vendors, and city planners investigating future deployments of LoRaWAN for smart city applications.
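One building block any such LoRaWAN simulator needs is the time-on-air of a packet, since it determines both channel occupancy and duty-cycle limits. The sketch below implements the standard Semtech LoRa time-on-air formula; the parameter defaults (SF7, 125 kHz, coding rate 4/5, 8 preamble symbols) are common European settings, not values taken from this paper:

```python
from math import ceil

def lora_time_on_air(payload_bytes, sf=7, bw=125_000, cr=1,
                     preamble_symbols=8, crc=True, implicit_header=False,
                     low_data_rate=False):
    """Packet time-on-air in seconds, per the Semtech LoRa modem formula.

    cr=1 means coding rate 4/5; low_data_rate enables the DE optimization
    normally required at SF11/SF12 with 125 kHz bandwidth.
    """
    t_sym = (2 ** sf) / bw
    de = 1 if low_data_rate else 0
    ih = 1 if implicit_header else 0
    bits = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(ceil(bits / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_symbols + 4.25 + n_payload) * t_sym

# A 10-byte uplink at SF7/125 kHz takes about 41.2 ms on air.
toa_sf7 = lora_time_on_air(10)
```

The same payload at SF12 occupies the channel well over twenty times longer, which is why spreading-factor allocation dominates scalability analyses of city-wide deployments.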
Reduced lung-cancer mortality with low-dose computed tomographic screening.
BACKGROUND The aggressive and heterogeneous nature of lung cancer has thwarted efforts to reduce mortality from this cancer through the use of screening. The advent of low-dose helical computed tomography (CT) altered the landscape of lung-cancer screening, with studies indicating that low-dose CT detects many tumors at early stages. The National Lung Screening Trial (NLST) was conducted to determine whether screening with low-dose CT could reduce mortality from lung cancer. METHODS From August 2002 through April 2004, we enrolled 53,454 persons at high risk for lung cancer at 33 U.S. medical centers. Participants were randomly assigned to undergo three annual screenings with either low-dose CT (26,722 participants) or single-view posteroanterior chest radiography (26,732). Data were collected on cases of lung cancer and deaths from lung cancer that occurred through December 31, 2009. RESULTS The rate of adherence to screening was more than 90%. The rate of positive screening tests was 24.2% with low-dose CT and 6.9% with radiography over all three rounds. A total of 96.4% of the positive screening results in the low-dose CT group and 94.5% in the radiography group were false positive results. The incidence of lung cancer was 645 cases per 100,000 person-years (1060 cancers) in the low-dose CT group, as compared with 572 cases per 100,000 person-years (941 cancers) in the radiography group (rate ratio, 1.13; 95% confidence interval [CI], 1.03 to 1.23). There were 247 deaths from lung cancer per 100,000 person-years in the low-dose CT group and 309 deaths per 100,000 person-years in the radiography group, representing a relative reduction in mortality from lung cancer with low-dose CT screening of 20.0% (95% CI, 6.8 to 26.7; P=0.004). The rate of death from any cause was reduced in the low-dose CT group, as compared with the radiography group, by 6.7% (95% CI, 1.2 to 13.6; P=0.02). CONCLUSIONS Screening with the use of low-dose CT reduces mortality from lung cancer. 
(Funded by the National Cancer Institute; National Lung Screening Trial ClinicalTrials.gov number, NCT00047385.).
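The headline numbers in the abstract can be checked with a line of arithmetic each. The snippet below reproduces the reported incidence rate ratio and approximates the 20.0% relative mortality reduction from the rounded per-100,000 rates (the trial computed it from exact person-year counts, so the last digit can differ slightly):

```python
# Arithmetic behind the abstract's headline numbers. All rates are per
# 100,000 person-years, rounded as reported in the abstract.
incidence_ct, incidence_xray = 645, 572
deaths_ct, deaths_xray = 247, 309

rate_ratio = incidence_ct / incidence_xray                    # reported as 1.13
relative_reduction = (deaths_xray - deaths_ct) / deaths_xray  # ~20% reported
```

Note the direction of the two results: low-dose CT *finds more* cancers (rate ratio above 1, partly from earlier detection) while *fewer people die* of lung cancer, which is exactly the pattern one expects from effective early screening.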
Using massively multiplayer online role playing games (MMORPGs) to support second language learning: Action research in the real and virtual world
Massively Multiplayer Online Role Playing Games (MMORPGs) create large virtual communities. Online gaming shows potential not just for entertainment, but also for education. The aim of this research project is to investigate the use of commercial MMORPGs to support second language teaching. MMORPGs offer a digital safe space in which students can communicate by using their target language with global players. This qualitative research, based on ethnography and action research, investigates the students’ experiences of language learning and performing while they play in the MMORPGs. Research was conducted in both the ‘real’ and ‘virtual’ worlds. In the real world, the researcher observes the students’ interaction with the MMORPGs through actual discussion and screen video captures while they are playing. In the virtual world, the researcher takes on the role of a character in the MMORPG, enabling the researcher to get an inside point of view of the students and their own MMORPG characters. This latter approach also uses action research to allow the researcher to provide anonymous/private support to the students, including in-game instruction, confidence building, and some support with language issues in a safe and friendly way. Using action research with MMORPGs in the real world facilitates a number of learning and teaching opportunities, including opportunities for the students to practice language and to experience, individually and in groups, communicating with other native and second-language speakers. The researcher can also develop tutorial exercises and discussion for teaching plans based on the students’ experiences with the MMORPGs. The results from this research study demonstrate that MMORPGs offer a safe, fun, informal and effective learning space for supporting language teaching. 
Furthermore, the use of MMORPGs helps build the students’ confidence in using their second language and provides additional benefits such as a better understanding of the culture and the use of language in different contexts.
Automotive Mechatronic Systems: A Curriculum Outline
Abstract In this paper, a review of mechatronics is presented with emphasis on the desired qualities of university graduates as viewed by industry. A detailed description of a sequence of two new courses on automotive mechatronic systems and a specialized laboratory are discussed. The new coursework has been added to the curriculum towards the degree of systems engineer at Oakland University and will be first offered in Fall '97. The development of the courses and the laboratory has been supported by Ford Motor Company.