From Big Data to Big Service
Big service, the convergence and collaboration of big data systems, presents a solid solution to the challenges brought by big data.
Fraud Deterrence in Dynamic Mirrleesian Economies
Social and private insurance schemes rely on legal action to deter fraud and tax evasion. This observation guides the authors to introduce a random state verification technology in a dynamic economy with private information. With some probability, an agent's skill level becomes known to the planner, who prescribes a punishment if the agent is caught misreporting. The authors show how deferring consumption can ease the provision of incentives. As a result, the marginal benefit may be below the marginal cost of investment in the constrained-efficient allocation, suggesting a subsidy on savings. They characterize conditions such that the intertemporal wedge is negative in finite horizon economies. In an infinite horizon economy, the authors find that the constrained-efficient allocation converges to a high level of consumption, full insurance, and no labor distortions for any probability of state verification.
Compiling path queries in software-defined networks
Monitoring the flow of traffic along network paths is essential for SDN programming and troubleshooting. For example, traffic engineering requires measuring the ingress-egress traffic matrix; debugging a congested link requires determining the set of sources sending traffic through that link; and locating a faulty device might involve detecting how far along a path the traffic makes progress. Past path-based monitoring systems operate by diverting packets to collectors that perform "after-the-fact" analysis, at the expense of large data-collection overhead. In this paper, we show how to do more efficient "during-the-fact" analysis. We introduce a query language that allows each SDN application to specify queries independently of the forwarding state or the queries of other applications. The queries use a regular-expression-based path language that includes SQL-like "groupby" constructs for count aggregation. We track the packet trajectory directly on the data plane by converting the regular expressions into an automaton, and tagging the automaton state (i.e., the path prefix) in each packet as it progresses through the network. The SDN policies that implement the path queries can be combined with arbitrary packet-forwarding policies supplied by other elements of the SDN platform. A preliminary evaluation of our prototype shows that our "during-the-fact" strategy reduces data-collection overhead over "after-the-fact" strategies.
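To make the tagging idea concrete, the following is a minimal, illustrative Python sketch (not the paper's compiler or runtime): a hand-built automaton over switch identifiers stands in for a compiled path regular expression, and an integer tag carried in the packet plays the role of the automaton state recording the matched path prefix. The switch names and the query are invented for the example.

```python
# Illustrative only: a tiny automaton stands in for the compiled path query
# "S1 .* S3" (packets that enter at switch S1 and later reach switch S3).
DFA = {
    (0, "S1"): 1,   # start -> saw S1
    (1, "S3"): 2,   # saw S1 -> accepted
}
ACCEPTING = {2}

def process_hop(tag, switch_id):
    """Update the packet's tag at one switch; undefined transitions keep the tag unchanged."""
    return DFA.get((tag, switch_id), tag)

def traverse(path):
    tag = 0                      # tag written into the packet at ingress
    for switch_id in path:
        tag = process_hop(tag, switch_id)
    return tag in ACCEPTING      # at egress, matching packets are counted per the query

print(traverse(["S1", "S2", "S3"]))  # True: path matches S1 .* S3
print(traverse(["S2", "S3"]))        # False
```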
Influence of tobacco use in dental caries development.
This review article describes different forms of tobacco usage and their direct relationship with the prevalence of dental caries. Smoking, along with co-existing factors such as old age, poor oral hygiene habits, food habits, limited preventive dental visits and overall health standards, can be associated with a high caries incidence. However, a direct etiological relationship is lacking. Environmental tobacco smoke (ETS) has been associated with dental caries in children, but no studies have been reported in adults. Existing findings are not sufficient and conclusive enough to confirm that ETS causes dental caries. Oral use of smokeless tobacco (ST), predominantly tobacco chewing, is presumably a positive contributing factor to a higher incidence of dental caries. Unfortunately, published studies are not converging towards one single factor through which tobacco usage can have a direct relationship to dental caries.
Smart Electronic Wheelchair Using Arduino and Bluetooth Module
This paper describes the design of a smart, motorized, voice-controlled wheelchair using an embedded system. The proposed design supports a voice activation system for physically differently abled persons and also incorporates manual operation. The paper presents a "voice-controlled wheelchair" for physically differently abled persons, in which voice commands control the movements of the wheelchair. The voice command is given through a cellular device with Bluetooth; the command is converted to a string by the BT Voice Control for Arduino app and transferred to the Bluetooth Module SR-04 connected to the Arduino board, which controls the wheelchair. For example, when the user says 'Go', the chair moves in the forward direction; when the user says 'Back', the chair moves backward; 'Left' and 'Right' rotate it in the left and right directions, respectively; and 'Stop' makes it stop. This system was designed and developed to save the cost, time and energy of the patient. An ultrasonic sensor is also part of the design; it helps to detect obstacles lying ahead in the path of the wheelchair that could hinder its passage.
A 1 W 104 dB SNR Filter-Less Fully-Digital Open-Loop Class D Audio Amplifier With EMI Reduction
This paper presents the design and implementation of a high-performance fully-digital PWM DAC and switching output stage which can drive a speaker in portable devices, including cellular phones. Thanks to the quaternary pulse-width modulation scheme, a filter-less implementation is possible. A pre-modulation DSP algorithm eliminates the harmonic distortion inherent to the employed modulation process, and an oversampling noise shaper reduces the modulator clock speed to facilitate the hardware implementation while keeping high-fidelity quality. Radiated electromagnetic field emission of the class D amplifier is reduced thanks to a clock spreading technique with only a minor impact on audio performance characteristics. Clock jitter effects on the audio amplifier performance are presented, showing very low degradation for jitter values up to a few nanoseconds. The digital section works with a 1.2 V power supply voltage, while the output switching stage and its driver are supplied from a high-efficiency DC-DC converter either at 3.6 V or 5 V. An output power of 0.5 W at 3.6 V and 1 W at 5 V over an 8 Ω load with efficiency (digital section included) of about 79% and 81%, respectively, has been achieved. The total harmonic distortion (THD) at maximum output level is about 0.2%, while the dynamic range is 104 dB A-weighted. The active area is about 0.94 mm² in a 0.13 μm single-poly, five-metal, N-well digital CMOS technology with double-oxide option (0.5 μm minimum length).
MaterialSim : An Agent-Based Simulation Toolkit for Learning Materials Science
Computer modeling and simulation are progressively penetrating the engineering workplace across a wide variety of contexts. However, their use in undergraduate engineering education lags far behind. Although there are many commercial simulation packages available, most keep in the background and "black-box" the actual mathematical and physical models used in the simulation engine, assigning a passive role to students, who do not have access to the underlying dynamics. This paper reports on a user study of MaterialSim, a Materials Science agent-based set of microworlds built in the NetLogo modeling-and-simulation environment, for investigating crystallization, solidification, metallic grain growth and annealing. Six undergraduate students enrolled in an introductory Materials Science course participated in the study, in which they could run experiments and build models. This design-based research builds on previous studies that have suggested the benefits of multi-agent simulation for understanding how a variety of complex behaviors in science derive from simple, local rules. Whereas professional simulation tools in engineering are targeted at "modeling-for-doing", emphasizing aggregate-level simulations to predict macroscopic variables, MaterialSim was built within a "modeling-for-understanding" framework, which focuses on agent-level elementary behaviors that bring about emergent macroscopic behaviors. The rationale for the design is that the agent-based perspective may foster deeper understanding of the relevant scientific phenomena. A core feature of this design is that students can apply a small number of rules to capture fundamental causality structures underlying behaviors in a range of apparently disparate phenomena within a domain. We present evidence, in the form of excerpts and samples of students' work, that the experience with the tool enabled them to identify and understand some of the unifying principles across phenomena and to build sophisticated new models based on them. Index Terms: Agent-based modeling, Constructivism, Engineering education, Materials Science.
A Cognitive Model for the Representation and Acquisition of Verb Selectional Preferences
We present a cognitive model of inducing verb selectional preferences from individual verb usages. The selectional preferences for each verb argument are represented as a probability distribution over the set of semantic properties that the argument can possess, a semantic profile. The semantic profiles yield verb-specific conceptualizations of the arguments associated with a syntactic position. The proposed model can learn appropriate verb profiles from a small set of noisy training data, and can use them in simulating human plausibility judgments and analyzing implicit object alternation.
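As a rough illustration of the representation (not the authors' model), the sketch below builds a semantic profile for one argument position as a smoothed probability distribution over semantic properties observed in usages; the property names, the example verb, and the smoothing constant are assumptions.

```python
from collections import Counter

def semantic_profile(usages, all_properties, alpha=1.0):
    """Toy semantic profile: a smoothed probability distribution over semantic
    properties observed for one argument position of one verb.
    `usages` is a list of property sets, one per observed argument filler."""
    counts = Counter(p for props in usages for p in props)
    total = sum(counts.values()) + alpha * len(all_properties)
    return {p: (counts[p] + alpha) / total for p in all_properties}

# Hypothetical data: direct objects of "eat", annotated with semantic properties.
usages = [{"food", "solid"}, {"food", "fruit"}, {"food", "solid"}]
profile = semantic_profile(usages, {"food", "solid", "fruit", "animate"})
print(profile)  # mass concentrates on "food", reflecting the verb's object preference
```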
Relative Importance of Coral Cover, Habitat Complexity and Diversity in Determining the Structure of Reef Fish Communities
The structure of coral reef habitat has a pronounced influence on the diversity, composition and abundance of reef-associated fishes. However, the particular features of the habitat that are most critical are not always known. Coral habitats can vary in many characteristics, notably live coral cover, topographic complexity and coral diversity, but the relative effects of these habitat characteristics are often not distinguished. Here, we investigate the strength of the relationships between these habitat features and local fish diversity, abundance and community structure in the lagoon of Lizard Island, Great Barrier Reef. In a spatial comparison using sixty-six 2-m² quadrats, fish species richness, total abundance and community structure were examined in relation to a wide range of habitat variables, including topographic complexity, habitat diversity, coral diversity, coral species richness, hard coral cover, branching coral cover and the cover of corymbose corals. Fish species richness and total abundance were strongly associated with coral species richness and cover, but only weakly associated with topographic complexity. Regression tree analysis showed that coral species richness accounted for most of the variation in fish species richness (63.6%), while hard coral cover explained more variation in total fish abundance (17.4%) than any other variable. In contrast, topographic complexity accounted for little spatial variation in reef fish assemblages. In degrading coral reef environments, the potential effects of loss of coral cover and topographic complexity are often emphasized, but these findings suggest that reduced coral biodiversity may ultimately have an equal, or greater, impact on reef-associated fish communities.
Video Google: Efficient Visual Search of Videos
We describe an approach to object retrieval which searches for and localizes all the occurrences of an object in a video, given a query image of the object. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject those that are unstable. Efficient retrieval is achieved by employing methods from statistical text retrieval, including inverted file systems, and text and document frequency weightings. This requires a visual analogy of a word which is provided here by vector quantizing the region descriptors. The final ranking also depends on the spatial layout of the regions. The result is that retrieval is immediate, returning a ranked list of shots in the manner of Google. We report results for object retrieval on the full length feature films ‘Groundhog Day’ and ‘Casablanca’.
Neural correlates of superior intelligence: Stronger recruitment of posterior parietal cortex
General intelligence (g) is a common factor in diverse cognitive abilities and a major influence on life outcomes. Neuroimaging studies in adults suggest that the lateral prefrontal and parietal cortices play a crucial role in related cognitive activities including fluid reasoning, the control of attention, and working memory. Here, we investigated the neural bases for intellectual giftedness (superior-g) in adolescents, using fMRI. The participants consisted of a superior-g group (n = 18, mean RAPM = 33.9 +/- 0.8, >99%) from the national academy for gifted adolescents and the control group (n = 18, mean RAPM = 22.8 +/- 1.6, 60%) from local high schools in Korea (mean age = 16.5 +/- 0.8). fMRI data were acquired while they performed two reasoning tasks with high and low g-loadings. In both groups, the high g-loaded tasks specifically increased regional activity in the bilateral fronto-parietal network including the lateral prefrontal, anterior cingulate, and posterior parietal cortices. However, the regional activations of the superior-g group were significantly stronger than those of the control group, especially in the posterior parietal cortex. Moreover, regression analysis revealed that activity of the superior and intraparietal cortices (BA 7/40) strongly covaried with individual differences in g (r = 0.71 to 0.81). A correlated vectors analysis implicated bilateral posterior parietal areas in g. These results suggest that superior-g may not be due to the recruitment of additional brain regions but to the functional facilitation of the fronto-parietal network particularly driven by the posterior parietal activation.
The Role of Isotretinoin Therapy for Cushing's Disease: Results of a Prospective Study.
Objective. This prospective open trial aimed to evaluate the efficacy and safety of isotretinoin (13-cis-retinoic acid) in patients with Cushing's disease (CD). Methods. Sixteen patients with CD and persistent or recurrent hypercortisolism after transsphenoidal surgery were given isotretinoin orally for 6-12 months. The drug was started at 20 mg daily and the dosage was increased up to 80 mg daily if needed and tolerated. Clinical, biochemical, and hormonal parameters were evaluated at baseline and monthly for 6-12 months. Results. Of the 16 subjects, 4 (25%) maintained normal urinary free cortisol (UFC) levels at the end of the study. UFC reductions of up to 52.1% were found in the rest. Only patients with UFC levels below 2.5-fold of the upper limit of normal achieved sustained UFC normalization. Improvements of clinical and biochemical parameters were also noted, mostly in responsive patients. Typical isotretinoin side-effects were experienced by 7 patients (43.7%), though they were mild and mostly transient. We also observed that the combination of isotretinoin with cabergoline, in relatively low doses, may occasionally be more effective than either drug alone. Conclusions. Isotretinoin may be an effective and safe therapy for some CD patients, particularly those with mild hypercortisolism.
Categories and particulars: prototype effects in estimating spatial location.
A model of category effects on reports from memory is presented. The model holds that stimuli are represented at 2 levels of detail: a fine-grain value and a category. When memory is inexact but people must report an exact value, they use estimation processes that combine the remembered stimulus value with category information. The proposed estimation processes include truncation at category boundaries and weighting with a central (prototypic) category value. These processes introduce bias in reporting even when memory is unbiased, but nevertheless may improve overall accuracy (by decreasing the variability of reports). Four experiments are presented in which people report the location of a dot in a circle. Subjects spontaneously impose horizontal and vertical boundaries that divide the circle into quadrants. They misplace dots toward a central (prototypic) location in each quadrant, as predicted by the model. The proposed model has broad implications; notably, it has the potential to explain biases of the sort described in psychophysics (contraction bias and the bias captured by Weber's law) as well as symmetries in similarity judgments, without positing distorted representations of physical scales.
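A minimal numerical sketch of the estimation idea, under assumed parameter values rather than the paper's fitted ones: the reported location blends the fine-grain memory with the quadrant prototype and is truncated at the category boundaries.

```python
def report_angle(memory_angle, quadrant_prototype, bounds, weight=0.7):
    """Toy version of the estimation model: blend an inexact fine-grain memory
    with the category prototype, then truncate at the category (quadrant) boundaries.
    `weight` is a hypothetical reliance on fine-grain memory."""
    estimate = weight * memory_angle + (1 - weight) * quadrant_prototype
    lo, hi = bounds
    return min(max(estimate, lo), hi)

# A dot remembered at 10 degrees inside the upper-right quadrant (0-90 degrees,
# prototype near 45 degrees) is reported pulled toward the quadrant's center.
print(report_angle(10.0, 45.0, (0.0, 90.0)))   # 20.5, biased toward the prototype
```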
SCA-CNN: Spatial and Channel-Wise Attention in Convolutional Networks for Image Captioning
Visual attention has been successfully applied in structural prediction tasks such as visual captioning and question answering. Existing visual attention models are generally spatial, i.e., the attention is modeled as spatial probabilities that re-weight the last conv-layer feature map of a CNN encoding an input image. However, we argue that such spatial attention does not necessarily conform to the attention mechanism — a dynamic feature extractor that combines contextual fixations over time, as CNN features are naturally spatial, channel-wise and multi-layer. In this paper, we introduce a novel convolutional neural network dubbed SCA-CNN that incorporates Spatial and Channel-wise Attentions in a CNN. In the task of image captioning, SCA-CNN dynamically modulates the sentence generation context in multi-layer feature maps, encoding where (i.e., attentive spatial locations at multiple layers) and what (i.e., attentive channels) the visual attention is. We evaluate the proposed SCA-CNN architecture on three benchmark image captioning datasets: Flickr8K, Flickr30K, and MSCOCO. It is consistently observed that SCA-CNN significantly outperforms state-of-the-art visual attention-based image captioning methods.
Behavioral control of unmanned aerial vehicle manipulator systems
In this paper a behavioral control framework is developed to control an unmanned aerial vehicle-manipulator (UAVM) system, composed of a multirotor aerial vehicle equipped with a robotic arm. The goal is to ensure vehicle-arm coordination and manage complex multi-task missions, where different behaviors must be encompassed in a clear and meaningful way. In detail, a control scheme, based on the null-space-based behavioral paradigm, is proposed to handle the coordination between the arm and vehicle motion. To this aim, a set of basic functionalities (elementary behaviors) are designed and combined in a given priority order, in order to attain more complex tasks (compound behaviors). A supervisor is in charge of switching between the compound behaviors according to the mission needs and the sensory feedback. The method is validated on a real testbed, consisting of a multirotor aircraft with an attached 6-degree-of-freedom manipulator, developed within the EU-funded project ARCAS (Aerial Robotics Cooperative Assembly System). To the best of the authors' knowledge, this is the first time that a UAVM system has been experimentally tested in the execution of complex multi-task missions. The results show that, by properly designing a set of compound behaviors and a supervisor, vehicle-arm coordination in complex missions can be effectively managed.
The relationship of disordered eating habits and attitudes to clinical outcomes in young adult females with type 1 diabetes.
OBJECTIVE To describe the clinical outcomes of adolescent and young adult female subjects with type 1 diabetes in relation to the disturbance of eating habits and attitudes over 8-12 years. RESEARCH DESIGN AND METHODS Patients were recruited from the registers of pediatric and young adult diabetes clinics (including nonattenders) and interviewed in the community. A total of 87 patients were assessed at baseline (aged 11-25 years), and 63 (72%) were reinterviewed after 8-12 years (aged 20-38 years). Eating habits and attitudes were assessed by a semistructured research diagnostic interview (Eating Disorder Examination). RESULTS Clinical eating disorders ascertained from the interview and/or case note review at baseline or follow-up were found in 13 subjects (14.9% [95% CI 8.2-24.2]), and an additional 7 subjects had evidence of binging or purging, bringing the total affected to 26%. Insulin misuse for weight control was reported by 31 (35.6% [25.7-46.6]) subjects. Overall outcome was poor; serious microvascular complications were common and mortality was high. There were significant relationships between disordered eating habits, insulin misuse, and microvascular complications. CONCLUSIONS Although the cross-sectional prevalence of clinical eating disorders in young women with diabetes is modest, the cumulative incidence of eating problems continues to increase after young adulthood, and this is strongly associated with poor physical health outcomes. The combination of an eating disorder and diabetes puts patients at high risk of mortality and morbidity. Better methods of detection and management are needed.
Mercury distribution in two Sierran forest and one desert sagebrush steppe ecosystems and the effects of fire.
Mercury (Hg) concentration, reservoir mass, and Hg reservoir size were determined for vegetation components, litter, and mineral soil for two Sierran forest sites and one desert sagebrush steppe site. Mercury was found to be held primarily in the mineral soil (maximum depth of 60 to 100 cm), which contained more than 90% of the total ecosystem reservoir. However, Hg in foliage, bark, and litter plays a more dominant role in Hg cycling than the mineral soil. Mercury partitioning into ecosystem components at the Sierran forest sites was similar to that observed for other US forest sites. Vegetation and litter Hg reservoirs were significantly smaller in the sagebrush steppe system because of lower biomass. Data collected from these ecosystems after wildfire and prescribed burns showed a significant decrease in the Hg pool from certain reservoirs. No loss from mineral soil was observed for the study areas but data from fire severity points suggested that Hg in the upper few millimeters of surface soil may be volatilized due to exposure to elevated temperatures. Comparison of data from burned and unburned plots suggested that the only significant source of atmospheric Hg from the prescribed burn was combustion of litter. Differences in unburned versus burned Hg reservoirs at the forest wildfire site demonstrated that drastic reduction in the litter and above ground live biomass Hg reservoirs after burning had occurred. Sagebrush and litter were absent in the burned plots after a wildfire suggesting that both reservoirs were released during the fire. Mercury emissions due to fire from the forest prescribed burn, forest wildfire, and sagebrush steppe wildfire sites were roughly estimated at 2.0 to 5.1, 2.2 to 4.9, and 0.36+/-0.13 g ha(-1), respectively, with litter and vegetation being the most important sources.
Adaptive Fuzzy Output Feedback Control of MIMO Nonlinear Systems With Unknown Dead-Zone Inputs
This paper is concerned with the problem of adaptive fuzzy tracking control for a class of multi-input and multi-output (MIMO) strict-feedback nonlinear systems with both unknown nonsymmetric dead-zone inputs and immeasurable states. In this research, fuzzy logic systems are utilized to approximate the unknown nonlinear functions, and a fuzzy adaptive state observer is established to estimate the unmeasured states. Based on the information about the bounds of the dead-zone slopes, and by treating the time-varying input coefficients as a system uncertainty, a new adaptive fuzzy output feedback control approach is developed via the backstepping recursive design technique. It is shown that the proposed control approach can assure that all the signals of the resulting closed-loop system are semiglobally uniformly ultimately bounded. It is also shown that the observer and tracking errors converge to a small neighborhood of the origin by selecting appropriate design parameters. Simulation examples are also provided to illustrate the effectiveness of the proposed approach.
Improving evidence-based primary care for chronic kidney disease: study protocol for a cluster randomized control trial for translating evidence into practice (TRANSLATE CKD)
BACKGROUND Chronic kidney disease (CKD) and end stage renal disease (ESRD) are steadily increasing in prevalence in the United States. While there is reasonable evidence that specific activities can be implemented by primary care physicians (PCPs) to delay CKD progression and reduce mortality, CKD is under-recognized and undertreated in primary care offices, and PCPs are generally not familiar with treatment guidelines. The current study addresses the question of whether the facilitated TRANSLATE model compared to computer decision support (CDS) alone will lead to improved evidence-based care for CKD in primary care offices. METHODS/DESIGN This protocol consists of a cluster randomized controlled trial (CRCT) followed by a process and cost analysis. Only practices providing ambulatory primary care as their principal function, located in non-hospital settings, employing at least one primary care physician, with a minimum of 2,000 patients seen in the prior year, are eligible. The intervention will occur at the cluster level and consists of providing CKD-specific CDS versus CKD-specific CDS plus practice facilitation for all elements of the TRANSLATE model. Patient-level data will be collected from each participating practice to examine adherence to guideline-concordant care, progression of CKD and all-cause mortality. Patients are considered to meet stage three CKD criteria if at least two consecutive estimated glomerular filtration rate (eGFR) measurements at least three months apart fall below 60 ml/min. The process evaluation (cluster level) will determine through qualitative methods the fidelity of the facilitated TRANSLATE program and identify the challenges and enablers of the implementation process. The cost-effectiveness analysis will compare the benefit of the intervention of CDS alone against the intervention of CDS plus TRANSLATE (practice facilitation) in relation to overall cost per quality-adjusted life year. DISCUSSION This study has three major innovations. First, this study adapts the TRANSLATE method, proven effective in diabetes care, to CKD. Second, we are creating a generalizable CDS specific to the Kidney Disease Outcome Quality Initiative (KDOQI) guidelines for CKD. Additionally, this study will evaluate the effects of CDS versus CDS with facilitation and answer key questions regarding the cost-effectiveness of a facilitated model for improving CKD outcomes. The study tests virtual facilitation and academic detailing, making the findings generalizable to any area of the country. TRIAL REGISTRATION Registered as NCT01767883 on clinicaltrials.gov
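For illustration only, a small Python sketch of how the stated stage-three inclusion rule could be checked against a patient's eGFR history; the 90-day reading of "three months", the function name, and the example values are assumptions, not the trial's actual software.

```python
from datetime import date

def meets_stage3_criteria(egfr_readings):
    """Check the protocol's stage-three CKD rule on a (date, eGFR) history:
    two consecutive eGFR values below 60 ml/min at least three months (~90 days) apart.
    Illustrative reading of the rule only."""
    readings = sorted(egfr_readings)
    for (d1, v1), (d2, v2) in zip(readings, readings[1:]):
        if v1 < 60 and v2 < 60 and (d2 - d1).days >= 90:
            return True
    return False

history = [(date(2012, 1, 10), 55.0), (date(2012, 5, 2), 52.0)]   # invented patient data
print(meets_stage3_criteria(history))   # True: both readings < 60, more than 90 days apart
```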
A bound on the label complexity of agnostic active learning
We study the label complexity of pool-based active learning in the agnostic PAC model. Specifically, we derive general bounds on the number of label requests made by the A2 algorithm proposed by Balcan, Beygelzimer & Langford (Balcan et al., 2006). This represents the first nontrivial general-purpose upper bound on label complexity in the agnostic PAC model.
MVR: An Architecture for Computation Offloading in Mobile Edge Computing
As communication and sensing capabilities of mobile devices increase, mobile applications are becoming increasingly complex. Computation offloading, one of the main features of mobile edge computing, gains relevance as a technique to improve the battery lifetime of mobile devices and increase the performance of applications. In this paper, we describe the offloading system model and present an innovative architecture, called "MVR", contributing to computation offloading in mobile edge computing.
Minimally-diluted blood cardioplegia supplemented with potassium and magnesium for combination of 'initial, continuous and intermittent bolus' administration.
BACKGROUND The present study was designed to examine the hypothesis that minimally-diluted blood cardioplegia (BCP) supplemented with potassium and magnesium provides superior myocardial protection in comparison with the standard-diluted BCP for a combination of 'initial, continuous, and intermittent bolus' BCP administration. METHODS AND RESULTS Seventy patients undergoing elective coronary revascularization between 1997 and 2001 (M : F =55:15, mean age 67.6+/-7.5 years) were randomly divided into 2 groups: Group C (n=35) was given the standard 4:1-diluted blood-crystalloid BCP, and Group M (n=35) was given minimally-diluted BCP supplemented with potassium-chloride and magnesium-sulfate. The BCP temperature was maintained at 30 degrees C. Cardioplegic arrest was induced with 2 min of initial antegrade BCP infusion, followed by continuous retrograde BCP infusion. Intermittent antegrade BCP was infused every 30 min for 2 min. The time required for achieving cardioplegic arrest was significantly shorter in Group M (47.5+/-16.3 vs 62.5+/-17.6 s, p<0.0001). The number of patients showing spontaneous heart beat recovery after reperfusion was significantly larger in Group M (28 vs 15, p=0.0029), and the number of patients suffering from atrial fibrillation during the postoperative period was significantly smaller in Group M (n=3 vs 11, p=0.034). Both the postoperative maximum dopamine dose (3.57+/-2.46 vs 5.44+/-2.23 microg/kg per min, p=0.0014) and peak creatine kinase-MB (19.5+/-8.5 vs 25.8+/-11.9 IU/L, p=0.0128) were significantly less in Group M. The number of patients showing paradoxical movement of the ventricular septum in the early postoperative echocardiography was significantly smaller in Group M (9 vs 24, p=0.0007). CONCLUSIONS These results suggest that 'initial, continuous and intermittent bolus' administration of minimally-diluted BCP supplemented with potassium and magnesium is a reliable and effective technique for intraoperative myocardial protection.
The edge-disjoint paths problem is NP-complete for series-parallel graphs
Many combinatorial problems are NP-complete for general graphs. However, when restricted to series–parallel graphs or partial k-trees, many of these problems can be solved in polynomial time, mostly in linear time. On the other hand, very few problems are known to be NP-complete for series–parallel graphs or partial k-trees. These include the subgraph isomorphism problem and the bandwidth problem. However, these problems are NP-complete even for trees. In this paper, we show that the edge-disjoint paths problem is NP-complete for series–parallel graphs and for partial 2-trees although the problem is trivial for trees and can be solved for outerplanar graphs in polynomial time.
SMILES2Vec: An Interpretable General-Purpose Deep Neural Network for Predicting Chemical Properties
Chemical databases store information in text representations, and the SMILES format is a universal standard used in many cheminformatics software packages. Encoded in each SMILES string is structural information that can be used to predict complex chemical properties. In this work, we develop SMILES2vec, a deep RNN that automatically learns features from SMILES to predict chemical properties, without the need for additional explicit feature engineering. Using Bayesian optimization methods to tune the network architecture, we show that an optimized SMILES2vec model can serve as a general-purpose neural network for predicting distinct chemical properties including toxicity, activity, solubility and solvation energy, while also outperforming contemporary MLP neural networks that use engineered features. Furthermore, we demonstrate proof-of-concept of interpretability by developing an explanation mask that localizes on the most important characters used in making a prediction. When tested on the solubility dataset, it identified specific parts of a chemical that are consistent with established first-principles knowledge with an accuracy of 88%. Our work demonstrates that neural networks can learn technically accurate chemical concepts and provide state-of-the-art accuracy, making interpretable deep neural networks a useful tool of relevance to the chemical industry.
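A minimal PyTorch sketch of the general idea (a character-level RNN over SMILES that regresses a property); the layer types and sizes here are illustrative guesses, since the paper tunes its architecture with Bayesian optimization.

```python
import torch
import torch.nn as nn

class SmilesRNN(nn.Module):
    """Character-level RNN over SMILES strings that regresses a scalar property.
    Sizes are illustrative, not the paper's tuned configuration."""
    def __init__(self, vocab_size, emb_dim=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, token_ids):            # token_ids: (batch, seq_len) of character indices
        emb = self.embed(token_ids)
        _, h = self.rnn(emb)                 # h: (1, batch, hidden), final hidden state
        return self.head(h[-1]).squeeze(-1)  # one predicted property value per molecule

# Hypothetical usage: map each SMILES character to an index, pad to equal length, then train
# with a standard regression loss (e.g., nn.MSELoss) against solubility or toxicity labels.
```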
Effect of quinidine on maintaining sinus rhythm after conversion of atrial fibrillation or flutter. A multicentre study from Stockholm.
In a controlled study comprising 176 patients, quinidine in the form of Kinidin Durules was found to significantly reduce the recurrence of atrial fibrillation during a 1-year follow-up period after successful electric shock conversion. After one year, 51 per cent (52/101) of the patients in the quinidine group, and 28 per cent (21/75) in the control group, remained in sinus rhythm (P < 0.001). No less than 43 per cent of the patients converted to sinus rhythm during treatment with maintenance doses of quinidine sulphate before intended DC conversion. Gastrointestinal side-effects were not uncommon, and caused interruption of quinidine treatment in some cases.
Securing cyberspace: Identifying key actors in hacker communities
As the computer becomes more ubiquitous throughout society, the security of networks and information technologies is a growing concern. Recent research has found hackers making use of social media platforms to form communities where sharing of knowledge and tools that enable cybercriminal activity is common. However, past studies often report only generalized community behaviors and do not scrutinize individual members; in particular, current research has yet to explore the mechanisms in which some hackers become key actors within their communities. Here we explore two major hacker communities from the United States and China in order to identify potential cues for determining key actors. The relationships between various hacker posting behaviors and reputation are observed through the use of ordinary least squares regression. Results suggest that the hackers who contribute to the cognitive advance of their community are generally considered the most reputable and trustworthy among their peers. Conversely, the tenure of hackers and their discussion quality were not significantly correlated with reputation. Results are consistent across both forums, indicating the presence of a common hacker culture that spans multiple geopolitical regions.
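To show the kind of analysis described, here is a tiny NumPy ordinary-least-squares sketch regressing reputation on posting-behavior features; the feature names and numbers are invented and bear no relation to the study's data.

```python
import numpy as np

# Toy OLS in the spirit of the study: regress member reputation on posting behaviors.
X = np.array([
    # [knowledge_or_tool_posts, tenure_years, avg_discussion_quality]  (invented)
    [12, 2.0, 3.5],
    [40, 1.0, 4.0],
    [ 5, 5.0, 2.5],
    [25, 3.0, 3.0],
])
y = np.array([30.0, 85.0, 10.0, 55.0])           # reputation scores (invented)

X1 = np.column_stack([np.ones(len(X)), X])       # add an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)    # OLS coefficient estimates
print(dict(zip(["intercept", "contributions", "tenure", "quality"], coef.round(2))))
```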
Automatic Modulation Classification of Overlapped Sources Using Multi-Gene Genetic Programming With Structural Risk Minimization Principle
As the spectrum environment becomes increasingly crowded and complicated, primary users may be interfered by secondary users and other illegal users. Automatic modulation classification (AMC) of a single source cannot recognize the overlapped sources. Consequently, the AMC of overlapped sources attracts much attention. In this paper, we propose a genetic programming-based modulation classification method for overlapped sources (GPOS). The proposed GPOS consists of two stages, the training stage, and the classification stage. In the training stage, multi-gene genetic programming (MGP)-based feature engineering transforms sample estimates of cumulants into highly discriminative MGP-features iteratively, until optimal MGP-features (OMGP-features) are obtained, where the structural risk minimization principle (SRMP) is employed to evaluate the classification performance of MGP-features and train the classifier. Moreover, a self-adaptive genetic operation is designed to accelerate the feature engineering process. In the classification stage, the classification decision is made by the trained classifier using the OMGP-features. Through simulation results, we demonstrate that the proposed scheme outperforms other existing methods in terms of classification performance and robustness in case of varying power ratios and fading channel.
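As a hedged illustration of the inputs, the sketch below computes standard sample estimates of higher-order cumulants of a complex baseband signal, the kind of estimates the paper feeds into multi-gene genetic programming; the formulas are the usual textbook estimators, not taken from the paper, and the QPSK example is invented.

```python
import numpy as np

def cumulant_features(x):
    """Sample estimates of common higher-order cumulants of a complex baseband signal x,
    using standard textbook formulas."""
    x = np.asarray(x, dtype=complex)
    m20 = np.mean(x**2)
    m21 = np.mean(np.abs(x)**2)
    m40 = np.mean(x**4)
    m42 = np.mean(np.abs(x)**4)
    c40 = m40 - 3 * m20**2
    c42 = m42 - np.abs(m20)**2 - 2 * m21**2
    return {"C20": m20, "C21": m21, "C40": c40, "C42": c42}

# Example: cumulants of a noisy QPSK burst (illustrative only).
rng = np.random.default_rng(0)
symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=4096) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
print(cumulant_features(symbols + noise))
```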
Ensemble with neural networks for bankruptcy prediction
Topic-based Evaluation for Conversational Bots
Dialog evaluation is a challenging problem, especially for non task-oriented dialogs where conversational success is not well-defined. We propose to evaluate dialog quality using topic-based metrics that describe the ability of a conversational bot to sustain coherent and engaging conversations on a topic, and the diversity of topics that a bot can handle. To detect conversation topics per utterance, we adopt Deep Average Networks (DAN) and train a topic classifier on a variety of question and query data categorized into multiple topics. We propose a novel extension to DAN by adding a topic-word attention table that allows the system to jointly capture topic keywords in an utterance and perform topic classification. We compare our proposed topic based metrics with the ratings provided by users and show that our metrics both correlate with and complement human judgment. Our analysis is performed on tens of thousands of real human-bot dialogs from the Alexa Prize competition and highlights user expectations for conversational bots.
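The following is a bare-bones NumPy sketch of a Deep Average Network classifier (average the word embeddings of an utterance, one hidden layer, softmax over topics); the weights are random stand-ins, the sizes are assumed, and the paper's topic-word attention table is deliberately omitted.

```python
import numpy as np

def dan_topic_probs(token_ids, emb, W1, b1, W2, b2):
    """Bare-bones Deep Average Network: average word embeddings, one hidden layer,
    softmax over topic classes. The paper's topic-word attention table is not modeled."""
    avg = emb[token_ids].mean(axis=0)              # average embedding of the utterance
    hidden = np.tanh(avg @ W1 + b1)
    logits = hidden @ W2 + b2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

rng = np.random.default_rng(0)
V, D, H, T = 1000, 50, 64, 10                      # vocab, embedding, hidden, topic counts (assumed)
emb = rng.standard_normal((V, D))
probs = dan_topic_probs(np.array([3, 17, 256]), emb,
                        rng.standard_normal((D, H)), np.zeros(H),
                        rng.standard_normal((H, T)), np.zeros(T))
print(probs.argmax(), probs.sum())                 # predicted topic index; probabilities sum to 1
```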
Can machine learning explain human learning?
Learning Analytics (LA) has a major interest in exploring and understanding the learning process of humans and, for this purpose, benefits from both Cognitive Science, which studies how humans learn, and Machine Learning, which studies how algorithms learn from data. Usually, Machine Learning is exploited as a tool for analyzing data coming from experimental studies, but it has been recently applied to humans as if they were algorithms that learn from data. One example is the application of Rademacher Complexity, which measures the capacity of a learning machine, to human learning, which led to the formulation of Human Rademacher Complexity (HRC). In this line of research, we propose here a more powerful measure of complexity, the Human Algorithmic Stability (HAS), as a tool to better understand the learning process of humans. The experimental results from three different empirical studies, on more than 600 engineering students from the University of Genoa, showed that HAS (i) can be measured without the assumptions required by HRC, (ii) depends not only on the knowledge domain, as HRC, but also on the complexity of the problem, and (iii) can be exploited for better understanding of the human learning process.
Sentiment analysis using sentence minimization with natural language generation (NLG)
Sentiment analysis is used to determine the attitude of a writer with respect to a subject, or the overall polarity of a document. The proposed work provides a platform for visualizing the comparative analysis of feedback for a particular product. Instead of relying only on basic factual information, the analysis is carried out on comments and reviews gathered from various sources. In this approach, document-level sentiment analysis is performed, taking all aspects into account in the same way using natural language processing techniques. The present unsupervised method for sentence minimization relies on a Stanford-type dependency representation for extracting information elements, and compressed sentences are generated via a natural language generation (NLG) engine. An automatic evaluation is performed, and an F-score of about 87.51 is achieved.
RIDI: Robust IMU Double Integration
This paper proposes a novel data-driven approach for inertial navigation, which learns to estimate trajectories of natural human motions just from the inertial measurement unit (IMU) found in every smartphone. The key observation is that human motions are repetitive and consist of a few major modes (e.g., standing, walking, or turning). Our algorithm regresses a velocity vector from the history of linear accelerations and angular velocities, then corrects low-frequency bias in the linear accelerations, which are integrated twice to estimate positions. We have acquired training data with ground truth motion trajectories across multiple human subjects and multiple phone placements (e.g., in a bag or a hand). Qualitative and quantitative evaluations demonstrate that our simple algorithm outperforms existing heuristic-based approaches and is, to our surprise, even comparable to full visual-inertial navigation. As far as we know, this paper is the first to introduce supervised training for inertial navigation, potentially opening up a new line of research in the domain of data-driven inertial navigation. We will publicly share our code and data to facilitate further research.
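An illustrative one-dimensional take on the correct-then-integrate idea, assuming a constant acceleration bias and a given regressed velocity signal; this is a toy reading of the approach, not the authors' algorithm, and all numbers are invented.

```python
import numpy as np

def ridi_like_positions(acc, regressed_vel, dt):
    """Estimate a constant low-frequency bias in the accelerations so that the integrated
    velocity matches the velocity regressed from IMU history, then integrate twice."""
    t = np.arange(1, len(acc) + 1) * dt
    raw_vel = np.cumsum(acc) * dt                   # velocity from uncorrected accelerations
    # Least-squares constant bias b such that raw_vel - b*t approximates regressed_vel
    bias = np.sum((raw_vel - regressed_vel) * t) / np.sum(t**2)
    vel = np.cumsum(acc - bias) * dt
    pos = np.cumsum(vel) * dt
    return pos

dt = 0.01
acc = np.full(500, 0.05)                            # biased accelerometer readings (1-D toy case)
regressed_vel = np.zeros(500)                       # regressor says the phone is actually static
print(ridi_like_positions(acc, regressed_vel, dt)[-1])   # drift is largely removed
```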
A Pan-Cancer Proteogenomic Atlas of PI3K/AKT/mTOR Pathway Alterations.
Molecular alterations involving the PI3K/AKT/mTOR pathway (including mutation, copy number, protein, or RNA) were examined across 11,219 human cancers representing 32 major types. Within specific mutated genes, frequency, mutation hotspot residues, in silico predictions, and functional assays were all informative in distinguishing the subset of genetic variants more likely to have functional relevance. Multiple oncogenic pathways including PI3K/AKT/mTOR converged on similar sets of downstream transcriptional targets. In addition to mutation, structural variations and partial copy losses involving PTEN and STK11 showed evidence for having functional relevance. A substantial fraction of cancers showed high mTOR pathway activity without an associated canonical genetic or genomic alteration, including cancers harboring IDH1 or VHL mutations, suggesting multiple mechanisms for pathway activation.
Towards Clustering Validation in Big Data Context
Clustering is an essential task in many areas such as machine learning, data mining and computer vision, among others. Cluster validation aims to assess the quality of partitions obtained by clustering algorithms. Several indexes have been developed for cluster validation purposes. They can be external or internal, depending on the availability of a ground truth clustering. This paper deals with the issue of cluster validation for large data sets. Indeed, in the era of big data this task becomes even more difficult to handle and requires parallel and distributed approaches. In this work, we are interested in external validation indexes. More specifically, this paper proposes a model for purity-based cluster validation in a parallel and distributed manner using the Map-Reduce paradigm, in order to be able to scale with increasing dataset sizes. The experimental results show that our proposed model is valid and properly achieves cluster validation of large datasets.
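As a sketch of the idea (single-machine, not an actual Map-Reduce job), purity can be computed in a map-reduce style: each partition counts its (cluster, label) pairs locally, and a reduce step merges the counts and sums the per-cluster majority counts. The partition contents below are invented.

```python
from collections import Counter

def purity_mapreduce(partitions):
    """Purity in a map-reduce style: partitions hold (cluster_id, true_label) pairs;
    the map step counts pairs locally, the reduce step merges counts and sums the
    majority-class count of every cluster."""
    local_counts = [Counter(part) for part in partitions]      # map: local counts per partition
    merged = sum(local_counts, Counter())                      # reduce: merge all counts
    per_cluster = {}
    for (cluster, label), n in merged.items():
        per_cluster.setdefault(cluster, Counter())[label] += n
    total = sum(merged.values())
    return sum(c.most_common(1)[0][1] for c in per_cluster.values()) / total

parts = [[("c1", "A"), ("c1", "A"), ("c2", "B")],      # partition on worker 1 (invented)
         [("c1", "B"), ("c2", "B"), ("c2", "A")]]      # partition on worker 2 (invented)
print(purity_mapreduce(parts))                          # 4/6 ~= 0.67
```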
Compiling Pattern Matching
Pattern matching is a very powerful and useful device in programming. In functional languages it emerged in SASL [Turn76] and Hope [Burs80], and has also found its way into SML [Miln84]. The pattern matching described here is that of LML, which is a lazy ([Frie76] and [Henri76]) variant of ML. The pattern matching in LML evolved independently of that in SML so they are not (yet) the same, although very similar. The compilation of pattern matching in SML has been addressed in [Card84]. The LML compiler project began as an attempt to produce efficient code for a typed functional language with lazy evaluation. Since we regard pattern matching as an important language feature, it should also yield efficient code. Only pattern matching in case expressions is described here, since we regard this as the basic pattern matching facility in the language. All other types of pattern matching used in LML can easily be translated into case expressions; see [Augu84] for details. The compilation (of pattern matching) proceeds in several steps: transform all pattern matching to case expressions; transform complex case expressions into expressions that are easy to generate code for; generate G-code for the case expressions, and from that machine code for the target machine.
Mining Trackman Golf Data
Recently, innovative technology like Trackman has made it possible to generate data describing golf swings. In this application paper, we analyze Trackman data from 275 golfers using descriptive statistics and machine learning techniques. The overall goal is to find non-trivial and general patterns in the data that can be used to identify and explain what separates skilled golfers from poor. Experimental results show that random forest models, generated from Trackman data, were able to predict the handicap of a golfer, with a performance comparable to human experts. Based on interpretable predictive models, descriptive statistics and correlation analysis, the most distinguishing property of better golfers is their consistency. In addition, the analysis shows that better players have superior control of the club head at impact and generally hit the ball straighter. A very interesting finding is that better players also tend to swing flatter. Finally, an outright comparison between data describing the club head movement and ball flight data, indicates that a majority of golfers do not hit the ball solid enough for the basic golf theory to apply.
RisQ: recognizing smoking gestures with inertial sensors on a wristband
Smoking-induced diseases are known to be the leading cause of death in the United States. In this work, we design RisQ, a mobile solution that leverages a wristband containing a 9-axis inertial measurement unit to capture changes in the orientation of a person's arm, and a machine learning pipeline that processes this data to accurately detect smoking gestures and sessions in real-time. Our key innovations are four-fold: a) an arm trajectory-based method that extracts candidate hand-to-mouth gestures, b) a set of trajectory-based features to distinguish smoking gestures from confounding gestures including eating and drinking, c) a probabilistic model that analyzes sequences of hand-to-mouth gestures and infers which gestures are part of individual smoking sessions, and d) a method that leverages multiple IMUs placed on a person's body together with 3D animation of a person's arm to reduce burden of self-reports for labeled data collection. Our experiments show that our gesture recognition algorithm can detect smoking gestures with high accuracy (95.7%), precision (91%) and recall (81%). We also report a user study that demonstrates that we can accurately detect the number of smoking sessions with very few false positives over the period of a day, and that we can reliably extract the beginning and end of smoking session periods.
Iris Image Classification Based on Hierarchical Visual Codebook
Iris recognition as a reliable method for personal identification has been well-studied with the objective to assign the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT), and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantages of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection.
Detection of QRS Complexes in 12-lead ECG using Adaptive Quantized Threshold
The QRS complex is the most prominent wave component within the electrocardiogram. It reflects the electrical activity of the heart during ventricular contraction and the time of its occurrence. Its morphology provides information about the current state of the heart. The identification of QRS complexes forms the basis for almost all automated ECG analysis algorithms. The presented algorithm employs a modified definition of the slope of the ECG signal as the feature for detection of QRS. A sequence of transformations of the filtered and baseline-drift-corrected ECG signal is used for extraction of a new modified slope feature. Two feature components are combined to derive the final QRS-feature signal. Multiple quantized amplitude thresholds are employed for distinguishing QRS complexes from non-QRS regions of the ECG waveform. An adequate amplitude threshold is automatically selected by the presented algorithm and is utilized for delineating the QRS complexes. A QRS detection rate of 98.56%, with false positive and false negative percentages of 0.82% and 1.44%, respectively, has been reported.
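A heavily simplified slope-based QRS detector is sketched below for orientation; it uses a single signal-relative threshold and a refractory period, and does not reproduce the paper's modified slope definition or its quantized multi-threshold selection. The parameter values and the synthetic signal are assumptions.

```python
import numpy as np

def detect_qrs(ecg, fs, threshold_factor=0.4, refractory_s=0.2):
    """Very simplified slope-based QRS detection: square and smooth the slope,
    then keep local maxima above a signal-relative threshold, honoring a refractory period."""
    slope = np.gradient(ecg)
    win = max(int(0.08 * fs), 1)
    feature = np.convolve(slope**2, np.ones(win) / win, mode="same")
    threshold = threshold_factor * feature.max()
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(feature) - 1):
        is_peak = feature[i] > threshold and feature[i] >= feature[i-1] and feature[i] >= feature[i+1]
        if is_peak and i - last >= refractory:
            peaks.append(i)
            last = i
    return peaks  # sample indices of detected QRS complexes

# Crude synthetic example: one spike per second at fs = 250 Hz.
fs = 250
ecg = np.zeros(10 * fs)
ecg[::fs] = 1.0
print(detect_qrs(ecg, fs))
```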
Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model
We present a novel training framework for neural sequence models, particularly for grounded dialog generation. The standard training paradigm for these models is maximum likelihood estimation (MLE), or minimizing the cross-entropy of the human responses. Across a variety of domains, a recurring problem with MLE trained generative neural dialog models (G) is that they tend to produce ‘safe’ and generic responses (‘I don’t know’, ‘I can’t tell’). In contrast, discriminative dialog models (D) that are trained to rank a list of candidate human responses outperform their generative counterparts; in terms of automatic metrics, diversity, and informativeness of the responses. However, D is not useful in practice since it can not be deployed to have real conversations with users. Our work aims to achieve the best of both worlds – the practical usefulness of G and the strong performance of D – via knowledge transfer from D to G. Our primary contribution is an end-to-end trainable generative visual dialog model, where G receives gradients from D as a perceptual (not adversarial) loss of the sequence sampled from G. We leverage the recently proposed Gumbel-Softmax (GS) approximation to the discrete distribution – specifically, a RNN augmented with a sequence of GS samplers, coupled with the straight-through gradient estimator to enable end-to-end differentiability. We also introduce a stronger encoder for visual dialog, and employ a self-attention mechanism for answer encoding along with a metric learning loss to aid D in better capturing semantic similarities in answer responses. Overall, our proposed model outperforms state-of-the-art on the VisDial dataset by a significant margin (2.67% on recall@10). The source code can be downloaded from https://github.com/jiasenlu/visDial.pytorch
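For reference, a small NumPy sketch of the Gumbel-Softmax relaxation that the model relies on to keep word sampling differentiable; the straight-through estimator and the dialog model itself are not shown, and the toy logits are invented.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Gumbel-Softmax relaxation of categorical sampling: add Gumbel noise to the logits,
    then apply a temperature-controlled softmax. In the paper this (plus a straight-through
    estimator) lets gradients from the discriminator D flow into the generator G."""
    rng = rng or np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + gumbel) / tau
    y = y - y.max()
    expy = np.exp(y)
    return expy / expy.sum()

logits = np.array([2.0, 0.5, 0.1])             # unnormalized scores over a toy 3-word vocabulary
print(gumbel_softmax_sample(logits, tau=0.5))  # near one-hot for small tau, yet differentiable
```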
Investigation of Parameters Affecting Surface Roughness in CNC Routing Operation on Wooden EGP
The aim of this study was to evaluate the effect of CNC routing, using different parameters with the Taguchi experimental design, on the surface quality of various wooden (pine, spruce, and beech) edge-glued panels (EGP). The study evaluated five processing parameters: cutting direction, cutting depth, cutting width, feed rate, and spindle rotation speed, and their effects on surface roughness on pine, spruce, and beech EGP. Based on the results of statistical analysis of the burr surface roughness values, the mentioned parameters affected panels at varying levels. It was seen that the parameters were only responsible for ~34% (Rz) of the roughness on the surface of pine EGP, ~49% (Rz) of spruce EGP, and ~27% (Rq) of beech EGP. Statistically important parameters were as follows: cutting direction for pine, cutting depth (tip diameter) and feed rate for spruce, and cutting direction and feed rate for beech.
Efficient batchwise dropout training using submatrices
Dropout is a popular technique for regularizing artificial neural networks. Dropout networks are generally trained by minibatch gradient descent with a dropout mask turning off some of the units—a different pattern of dropout is applied to every sample in the minibatch. We explore a very simple alternative to the dropout mask. Instead of masking dropped out units by setting them to zero, we perform matrix multiplication using a submatrix of the weight matrix—unneeded hidden units are never calculated. Performing dropout batchwise, so that one pattern of dropout is used for each sample in a minibatch, we can substantially reduce training times. Batchwise dropout can be used with fully-connected and convolutional neural networks. 1 Independent versus batchwise dropout: Dropout is a technique to regularize artificial neural networks—it prevents overfitting [8]. A fully connected network with two hidden layers of 80 units each can learn to classify the MNIST training set perfectly in about 20 training epochs—unfortunately the test error is quite high, about 2%. Increasing the number of hidden units by a factor of 10 and using dropout results in a lower test error, about 1.1%. The dropout network takes longer to train in two senses: each training epoch takes several times longer, and the number of training epochs needed increases too. We consider a technique for speeding up training with dropout—it can substantially reduce the time needed per epoch. Consider a very simple $\ell$-layer fully connected neural network with dropout. To train it with a minibatch of b samples, the forward pass is described by the equations: $x_{k+1} = [x_k \cdot d_k] \times W_k$, $k = 0, \ldots, \ell - 1$.
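A minimal NumPy sketch contrasting the two forward passes for one layer, matching the equation above: per-sample masking versus one batchwise dropout pattern realized as a submatrix multiplication. The sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
b, n_in, n_out, keep = 32, 80, 80, 0.5
x = rng.standard_normal((b, n_in))
W = rng.standard_normal((n_in, n_out))

# Independent dropout: a different mask per sample, applied by zeroing units.
mask = rng.random((b, n_in)) < keep
y_masked = (x * mask) @ W

# Batchwise dropout: one dropout pattern for the whole minibatch, realized by
# multiplying with a submatrix of W, so dropped units are never computed at all.
kept = np.flatnonzero(rng.random(n_in) < keep)
y_batchwise = x[:, kept] @ W[kept, :]

# Both realize x_{k+1} = [x_k . d_k] x W_k; the submatrix form simply skips the zeroed units.
print(y_masked.shape, y_batchwise.shape)     # (32, 80) (32, 80)
```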
Punk Rock, Thatcher, and the Elsewhere of Northern Ireland: Rethinking the Politics of Popular Music
This essay focuses on the political efficacy of popular music to critique dominant ideologies concerning nationhood and personal identity. During Margaret Thatcher’s tenure as prime minister of Britain, punk rock and post-punk music flourished as a mode of expression to challenge the ways in which Thatcher’s conservative agenda affected British social life. This study examines the theoretical possibility that punk rock can serve as a mode of critique and addresses how punk rock music in Northern Ireland attempted to disrupt the Thatcherite position on the conflict in Northern Ireland (also known as The Troubles).
Ricochet Robots: A Transverse ASP Benchmark
A distinguishing feature of Answer Set Programming is its versatility. In addition to satisfiability testing, it offers various forms of model enumeration, intersection or unioning, as well as optimization. Moreover, there is an increasing interest in incremental and reactive solving due to their applicability to dynamic domains. However, so far no comparative studies have been conducted, contrasting the respective modeling capacities and their computational impact. To assess the variety of different forms of ASP solving, we propose Alex Randolph’s board game Ricochet Robots as a transverse benchmark problem that allows us to compare various approaches in a uniform setting. To begin with, we consider alternative ways of encoding ASP planning problems and discuss the underlying modeling techniques. In turn, we conduct an empirical analysis contrasting traditional solving, optimization, incremental, and reactive approaches. In addition, we study the impact of some boosting techniques in the realm of our case study.
Deep Predictive Coding Networks for Video Prediction and Unsupervised Learning
While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning - leveraging unlabeled examples to learn about the structure of a domain - remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network ("PredNet") architecture that is inspired by the concept of "predictive coding" from the neuroscience literature. These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. These results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure.
Single axis solar tracker actuator location analysis
Current trends in energy power generation are leading efforts related to the development of more reliable, sustainable sources and technologies for energy harvesting. Solar energy is one of these renewable energy resources, widely available in nature. Most of the solar panels used today to convert solar energy into electrical energy are stationary. Energy efficiency studies have shown that more electrical energy can be retrieved from solar panels if they are organized in arrays and placed on a solar tracker that follows the sun as it moves from east to west during the day, and from north to south during the year as the seasons change. Adding more solar panels to solar tracker structures improves their yield, but it also adds challenges in managing the overall weight of such structures and ensuring their strength and reliability under different weather conditions, such as wind, changes in temperature, and other atmospheric conditions. Hence, careful structural design and simulation are needed to establish optimal parameters so that solar trackers can withstand all environmental conditions and function with high reliability over long periods of time.
BeWell: Sensing Sleep, Physical Activities and Social Interactions to Promote Wellbeing
Smartphone sensing and persuasive feedback design are enabling a new generation of wellbeing apps capable of automatically monitoring multiple aspects of physical and mental health. In this article, we present BeWell+, the next generation of the BeWell smartphone wellbeing app, which monitors user behavior along three health dimensions, namely sleep, physical activity, and social interaction. BeWell promotes improved behavioral patterns via feedback rendered as an ambient display on the smartphone's wallpaper. With BeWell+, we introduce new mechanisms to address key limitations of the original BeWell app; specifically, (1) community adaptive wellbeing feedback, which generalizes to diverse user communities (e.g., elderly, children) by promoting better behavior that remains realistic for the user's lifestyle; and (2) wellbeing adaptive energy allocation, which prioritizes monitoring fidelity and feedback responsiveness on specific health dimensions (e.g., sleep) where the user needs additional help. We evaluate BeWell+ with a 27-person, 19-day field trial. Our findings show that not only can BeWell+ operate successfully on consumer smartphones, but also that users understand the feedback and respond by taking steps towards leading healthier lifestyles.
Novel four-drug salvage treatment regimens after failure of a human immunodeficiency virus type 1 protease inhibitor-containing regimen: antiviral activity and correlation of baseline phenotypic drug susceptibility with virologic outcome.
Twenty human immunodeficiency virus-infected patients experiencing virologic failure of an indinavir- or ritonavir-containing treatment regimen were evaluated in a prospective, open-label study. Subjects received nelfinavir, saquinavir, abacavir, and either another nucleoside analog (n=10) or nevirapine (n=10). Patients treated with the nevirapine-containing regimen experienced significantly greater virologic suppression at week 24 than those not treated with nevirapine (P=.04). Baseline phenotypic drug susceptibility was strongly correlated with outcome in both treatment arms. Subjects with baseline virus phenotypically sensitive to 2 or 3 drugs in the salvage regimen experienced significantly greater virus load suppression than those with baseline virus sensitive to 0 or 1 drug (median week-24 change=-2.24 log and -0.35 log, respectively; P=.01). In conclusion, non-nucleoside reverse transcriptase inhibitors may represent a potent drug in salvage therapy regimens after failure of an indinavir or ritonavir regimen. Phenotypic resistance testing may provide a useful tool for selecting more effective salvage regimens.
Infectious Disease and the Demography of the Atlantic Peoples
This essay offers a new version of the history of the peoples of the Atlantic basin, based on a new reading of the documents and data of their story during the last half millennium. I am not the first to propose such a version (being a member of the second generation to have taken it up), but it is new to most people outside the ranks of Americanists, anthropologists, and a few varieties of historians. This version has only begun to appear in college textbooks, and it is still absent from high school textbooks as far as I know. One of the chief reasons for the slowness of its advance into popular perception is that it focuses not on politics and war, which many still think of as real history, but on demography and epidemiology, which many would prefer not to think about at all. It focuses on deadly disease, and on how most of us who are now living in the Americas are doing so because our ancestors were either attracted or dragged across the Atlantic to fill vacancies opened up by disease. This is not a particularly ennobling story, and a lot of people believe history should ennoble or be forgotten.
Collection statistics for fast duplicate document detection
We present a new algorithm for duplicate document detection that uses collection statistics. We compare our approach with the state-of-the-art approach using multiple collections. These collections include a 30 MB, 18,577-document web collection developed by Excite@Home and three NIST collections. The first NIST collection consists of 100 MB of 18,232 LA-Times documents, which is roughly similar in the number of documents to the Excite@Home collection. The other two collections are both 2 GB: a 247,491-document web collection and the TREC disks 4 and 5 collection of 528,023 documents. We show that our approach, called I-Match, scales in terms of the number of documents and works well for documents of all sizes. We compared our solution to the state of the art and found that, in addition to improved accuracy of detection, our approach executed in roughly one-fifth the time.
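As a rough illustration of the collection-statistics idea, the sketch below (not the authors' implementation) filters a document's terms by their collection-level IDF and hashes the surviving terms into a single fingerprint; duplicate documents then share a fingerprint. The IDF thresholds, tokenization, and the choice of SHA-1 are assumptions made for the example.

```python
import hashlib
import math

def idf(term, doc_freq, n_docs):
    # Inverse document frequency computed from collection statistics.
    return math.log(n_docs / (1 + doc_freq.get(term, 0)))

def imatch_signature(doc_tokens, doc_freq, n_docs, low=1.0, high=6.0):
    # Keep only terms whose IDF falls in a mid range (drop very common and
    # very rare terms), then hash the sorted set of survivors.
    kept = sorted({t for t in doc_tokens if low <= idf(t, doc_freq, n_docs) <= high})
    return hashlib.sha1(" ".join(kept).encode("utf-8")).hexdigest()

doc_freq = {"the": 9500, "quick": 120, "brown": 260, "fox": 45}   # hypothetical counts
print(imatch_signature(["the", "quick", "brown", "fox"], doc_freq, n_docs=10000))
```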
Developing Web-based assessment strategies for facilitating junior high school students to perform self-regulated learning in an e-Learning environment
This research refers to the self-regulated learning strategies proposed by Pintrich (1999) in developing a multiple-choice Web-based assessment system, the Peer-Driven Assessment Module of the Web-based Assessment and Test Analysis system (PDA-WATA). The major purpose of PDA-WATA is to facilitate learner use of self-regulatory learning behaviors to perform self-regulated learning and in turn improve e-Learning effectiveness. PDA-WATA includes five main strategies: 'Adding Answer Notes,' 'Stating Confidence,' 'Reading Peer Answer Notes,' 'Recommending Peer Answer Notes' and 'Querying Peers' Recommendation on Personal Answer Notes'. Using these strategies, examinees are allowed to add answer notes to explain why they chose a certain option as the correct answer and to state their confidence in their own answer and answer notes, for peers' reference. In addition to reading peer answer notes, examinees can also recommend peer answer notes as valuable references, and the recommendation information can be queried by all examinees. A quasi-experimental design was adopted to examine the effectiveness of PDA-WATA in facilitating learner use of self-regulatory learning behaviors to perform self-regulated learning and in improving learner e-Learning effectiveness. Participants were 123 seventh-grade junior high school students from four classes. These four classes were randomly divided into the PDA-WATA group (n = 63) and the N-WBT group (n = 60). Before e-Learning instruction, all students took the pre-test of the Learning Process Inventory (LPI), used to understand how often learners use self-regulatory learning behaviors in the learning process, and the pre-test of the summative assessment. After two weeks of e-Learning instruction, all students took the post-test of the LPI and the summative assessment. Results indicate that students in the PDA-WATA group appear to be more willing to take the Web-based formative assessment than students in the N-WBT group. In addition, PDA-WATA appears to be significantly more effective than N-WBT in facilitating learner use of self-regulatory learning behaviors to perform self-regulated learning and in improving their e-Learning effectiveness. Moreover, this research also finds that in the PDA-WATA group there is no significant difference between the learning effectiveness of students with a low level of self-regulated learning and students with a high level of self-regulated learning, whereas a similar result was not found in the N-WBT group.
Liquid crystal polymer (LCP) for microwave/millimeter wave multilayer packaging
This paper presents characterization and analysis of liquid crystal polymer materials for microwave and millimeter multi-layer packaging. Processing techniques have been developed to fabricate interconnects on this new LCP material. The experimental results demonstrate that an interconnect on LCP achieves a measured insertion loss of less than 0.1 dB/mm up to 50 GHz. This material is highly suitable for microwave and millimeter wave packaging.
Economic Aspects of Peer Support Groups for Psychosis
Peer support groups are rarely available for patients with psychosis, despite potential clinical and economic advantages of such groups. In this study, 106 patients with psychosis were randomly allocated to minimally guided peer support in addition to care as usual (CAU), or CAU only. No relevant differences between mean total costs of both groups were found, nor were there significant differences in WHOQoL-Bref outcomes. Intervention adherence had a substantial impact on the results. It was concluded that minimally guided peer support groups for psychosis do not seem to affect overall healthcare expenses. Positive results of additional outcomes, including a significant increase in social contacts and esteem support, favour the wider implementation of such groups.
Oral topiramate reduces the consequences of drinking and improves the quality of life of alcohol-dependent individuals: a randomized controlled trial.
BACKGROUND Topiramate, a fructopyranose derivative, was superior to placebo at improving the drinking outcomes of alcohol-dependent individuals. OBJECTIVES To determine whether topiramate, compared with placebo, improves psychosocial functioning in alcohol-dependent individuals and to discover how this improvement is related to heavy drinking behavior. DESIGN Double-blind, randomized, controlled, 12-week clinical trial comparing topiramate vs placebo for treating alcohol dependence (1998-2001). PARTICIPANTS One hundred fifty alcohol-dependent individuals, diagnosed using the DSM-IV. INTERVENTIONS Seventy-five participants received topiramate (escalating dose of 25 mg/d to 300 mg/d), and 75 had placebo and weekly standardized medication compliance management. MAIN OUTCOME MEASURES Three elements of psychosocial functioning were measured: clinical ratings of overall well-being and alcohol-dependence severity, quality of life, and harmful drinking consequences. Overall well-being and dependence severity and quality of life were analyzed as binary responses with a generalized estimating equation approach; harmful drinking consequences were analyzed as a continuous response using a mixed-effects, repeated-measures model. RESULTS Averaged over the course of double-blind treatment, topiramate, compared with placebo, improved the odds of overall well-being (odds ratio [OR] = 2.17; 95% confidence interval [CI], 1.16-2.60; P =.01); reported abstinence and not seeking alcohol (OR = 2.63; 95% CI, 1.52-4.53; P =.001); overall life satisfaction (OR = 2.28; 95% CI, 1.21-4.29; P =.01); and reduced harmful drinking consequences (OR = -0.07; 95% CI, -0.12 to -0.02, P =.01). There was a significant shift from higher to lower drinking quartiles on percentage of heavy drinking days, which was associated with improvements on all measures of psychosocial functioning. CONCLUSIONS As an adjunct to medication compliance enhancement treatment, topiramate (up to 300 mg/d) was superior to placebo at not only improving drinking outcomes but increasing overall well-being and quality of life and lessening dependence severity and its harmful consequences.
Receptive fields of single neurones in the cat's striate cortex.
In the central nervous system the visual pathway from retina to striate cortex provides an opportunity to observe and compare single unit responses at several distinct levels. Patterns of light stimuli most effective in influencing units at one level may no longer be the most effective at the next. From differences in responses at successive stages in the pathway one may hope to gain some understanding of the part each stage plays in visual perception. By shining small spots of light on the light-adapted cat retina Kuffler (1953) showed that ganglion cells have concentric receptive fields, with an 'on' centre and an 'off' periphery, or vice versa. The 'on' and 'off' areas within a receptive field were found to be mutually antagonistic, and a spot restricted to the centre of the field was more effective than one covering the whole receptive field (Barlow, FitzHugh & Kuffler, 1957). In the freely moving light-adapted cat it was found that the great majority of cortical cells studied gave little or no response to light stimuli covering most of the animal's visual field, whereas small spots shone in a restricted retinal region often evoked brisk responses (Hubel, 1959). A moving spot of light often produced stronger responses than a stationary one, and sometimes a moving spot gave more activation for one direction than for the opposite. The present investigation, made in acute preparations, includes a study of receptive fields of cells in the cat's striate cortex. Receptive fields of the cells considered in this paper were divided into separate excitatory and inhibitory ('on' and 'off') areas. In this respect they resembled retinal ganglion-cell receptive fields. However, the shape and arrangement of excitatory and inhibitory areas differed strikingly from the concentric pattern found in retinal ganglion cells. An attempt was made to correlate responses to moving stimuli with receptive-field arrangements.
Cognitive Modeling: Knowledge, Reasoning and Planning for Intelligent Characters
Recent work in behavioral animation has taken impressive steps toward autonomous, self-animating characters for use in production animation and interactive games. It remains difficult, however, to direct autonomous characters to perform specific tasks. This paper addresses the challenge by introducing cognitive modeling. Cognitive models go beyond behavioral models in that they govern what a character knows, how that knowledge is acquired, and how it can be used to plan actions. To help build cognitive models, we develop the cognitive modeling language CML. Using CML, we can imbue a character with domain knowledge, elegantly specified in terms of actions, their preconditions and their effects, and then direct the character's behavior in terms of goals. Our approach allows behaviors to be specified more naturally and intuitively, more succinctly and at a much higher level of abstraction than would otherwise be possible. With cognitively empowered characters, the animator need only specify a behavior outline or "sketch plan" and, through reasoning, the character will automatically work out a detailed sequence of actions satisfying the specification. We exploit interval methods to integrate sensing into our underlying theoretical framework, thus enabling our autonomous characters to generate action plans even in highly complex, dynamic virtual worlds. We demonstrate cognitive modeling applications in advanced character animation and automated cinematography.
Phase 2 study of a high dose of 186Re-HEDP for bone pain palliation in patients with widespread skeletal metastases.
UNLABELLED (186)Re-1-hydroxyethylidene-1,1-diphosphonate (HEDP) is an attractive radiopharmaceutical for the treatment of bone pain arising from skeletal metastatic lesions. Currently, (186)Re-HEDP is most commonly used in European countries. The aim of this study was to investigate the palliative efficacy and adverse effects of (186)Re-HEDP in patients with different types of cancers and skeletal bone pain. METHODS Nineteen (8 male, 11 female) patients with various cancers (breast, prostate, renal cell carcinoma, colon, and neuroendocrine tumors) and painful bone metastases were included in the study. A dose of 1,480-3,330 MBq (40-90 mCi) of (186)Re-HEDP was administered intravenously. The patients' level of pain relief was assessed by the Visual Analog Scale for 8 wk after treatment and by a weekly blood cell count to evaluate for hematologic toxicity. RESULTS The overall response rate was 89.5%, and the mean pain score assessed by the Visual Analog Scale was reduced from 9.1 to 5.3 after 1 wk (P = 0.003). No adverse effects were reported by patients during intravenous administration or for up to 24 h after administration. A flare reaction was seen in 63.2% of patients, mainly during days 1-3, and lasted for 2-4 d. There was no significant correlation between the response to therapy and the flare reactions (P > 0.05). The nadir of platelet reduction occurred at the fourth or fifth week and led to platelet infusion in only 4 patients with a low baseline platelet count and diffuse skeletal metastases. Bone marrow suppression occurred in patients receiving higher doses, but no clinical problems were seen except in 2 patients who required packed cell transfusion similar to their prior transfusions. CONCLUSION (186)Re-HEDP is an effective radiopharmaceutical for the palliative treatment of metastatic bone pain and has minimal adverse effects.
Building temperature control with adaptive feedforward
A common approach to the modeling of temperature evolution in a multi-zone building is to use thermal resistance and capacitance to model zone and wall dynamics. The resulting thermal network may be represented as an undirected graph. The thermal capacitances are the nodes in the graph, connected by thermal resistances as links. The temperature measurements and temperature control elements (heating and cooling) in this lumped model are collocated. As a result, the input/output system is strictly passive and any passive output feedback controller may be used to improve the transient and steady state performance without affecting the closed loop stability. The storage functions associated with passive systems may be used to construct a Lyapunov function, to demonstrate closed loop stability and motivate the construction of an adaptive feedforward control to compensate for the variation of the ambient temperature and zone heat loads (due to changing occupancy). The approach lends itself naturally to an inner-outer loop control architecture where the inner loop is designed for stability, while the outer loop balances between temperature specification and power consumption. Energy efficiency consideration may be added by adjusting the target zone temperature based on user preference and energy usage. The initial analysis uses zone heating/cooling as input, but the approach may be extended to more general model where the zonal mass flow rate is the control variable. A four-room example with realistic ambient temperature variation is included to illustrate the performance of the proposed passivity based control strategy.
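A toy single-zone version of such a lumped RC model, with proportional output feedback plus a feedforward term that cancels the estimated ambient-driven heat flow, can be sketched as follows; the model structure follows the description above, but the parameter values, occupancy load, and ambient profile are invented for illustration.

```python
import numpy as np

def simulate_zone(hours=24, dt=60.0, C=1.0e6, R=0.01, T_set=21.0, kp=400.0):
    # One-zone lumped model: C dT/dt = (T_amb - T)/R + q_hvac + q_load, with
    # q_hvac = feedforward compensation of the ambient-driven heat flow
    #        + proportional feedback (a passive output feedback law).
    T, temps = 18.0, []
    for k in range(int(hours * 3600 / dt)):
        t_h = k * dt / 3600.0
        T_amb = 10.0 + 8.0 * np.sin(2 * np.pi * (t_h - 9) / 24.0)   # daily cycle
        q_load = 200.0 if 8 <= t_h % 24 <= 18 else 0.0               # occupancy load
        q_ff = -(T_amb - T_set) / R                                   # feedforward term
        q_fb = kp * (T_set - T)                                       # proportional feedback
        T += dt / C * ((T_amb - T) / R + q_ff + q_fb + q_load)
        temps.append(T)
    return np.array(temps)

print(simulate_zone()[-5:])   # zone temperature settles close to the 21 C setpoint
```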
Powerful and consistent analysis of Likert-type rating scales
Likert-type scales are used extensively during usability evaluations, and more generally evaluations of interactive experiences, to obtain quantified data regarding attitudes, behaviors, and judgments of participants. Very often this data is analyzed using parametric statistics like the Student t-test or ANOVAs. These methods are chosen to ensure higher statistical power of the test (which is necessary in this field of research and practice where sample sizes are often small), or because of the lack of software to handle multi-factorial designs nonparametrically. With this paper we present to the HCI audience new developments from the field of medical statistics that enable analyzing multiple factor designs nonparametrically. We demonstrate the necessity of this approach by showing the errors in the parametric treatment of nonparametric data in experiments of the size typically reported in HCI research. We also provide a practical resource for researchers and practitioners who wish to use these new methods.
A new grouping genetic algorithm approach to the multiple traveling salesperson problem
The multiple traveling salesperson problem (MTSP) is an extension of the well-known traveling salesperson problem (TSP). Given m > 1 salespersons and n > m cities to visit, the MTSP seeks a partition of the cities into m groups as well as an ordering among the cities in each group, such that each group of cities is visited by exactly one salesperson in the specified order, each city is visited exactly once, and the sum of the total distances traveled by all the salespersons is minimized. Apart from the objective of minimizing the total distance traveled by all the salespersons, we also consider an alternate objective of minimizing the maximum distance traveled by any one salesperson, which relates to balancing the workload among salespersons. In this paper, we propose a new grouping genetic algorithm based approach for the MTSP and compare our results with other approaches available in the literature. Our approach outperformed the other approaches on both objectives.
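The two objectives can be written down directly; the sketch below evaluates a candidate partition-plus-ordering under both of them, assuming a common depot (not stated in the abstract) and Euclidean distances.

```python
import math

def tour_length(depot, route, dist):
    # Length of one salesperson's tour: depot -> route cities in order -> depot.
    pts = [depot] + list(route) + [depot]
    return sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def mtsp_objectives(depot, groups, dist):
    # groups: a partition of the cities into m ordered routes, one per salesperson.
    lengths = [tour_length(depot, g, dist) for g in groups]
    return sum(lengths), max(lengths)   # (total distance, longest single tour)

depot = (0.0, 0.0)
groups = [[(1.0, 0.0), (2.0, 1.0)], [(0.0, 2.0), (-1.0, 1.0)]]
print(mtsp_objectives(depot, groups, math.dist))
```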
Improving relevance judgment of web search results with image excerpts
Current web search engines return result pages containing mostly text summaries, even though the matched web pages may contain informative pictures. A text excerpt (i.e., a snippet) is generated by selecting keywords around the matched query terms for each returned page to provide context for the user's relevance judgment. However, in many scenarios, we found that the pictures in web pages, if selected properly, could be added to search result pages and provide richer contextual description, because a picture is worth a thousand words. We name this new kind of summary an image excerpt. Through a well-designed user study, we demonstrate that image excerpts can help users make much quicker relevance judgments of search results for a wide range of query types. To implement this idea, we propose a practicable approach to automatically generate image excerpts in the result pages by considering the dominance of each picture in each web page and the relevance of the picture to the query. We also outline an efficient way to incorporate image excerpts in web search engines. Web search engines can adopt our approach by slightly modifying their index and inserting a few low-cost operations in their workflow. Our experiments on a large web dataset indicate that the performance of the proposed approach is very promising.
Task-Based Robot Grasp Planning Using Probabilistic Inference
Grasping and manipulating everyday objects in a goal-directed manner is an important ability of a service robot. The robot needs to reason about task requirements and ground these in the sensorimotor information. Grasping and interaction with objects are challenging in real-world scenarios, where sensorimotor uncertainty is prevalent. This paper presents a probabilistic framework for the representation and modeling of robot-grasping tasks. The framework consists of Gaussian mixture models for generic data discretization, and discrete Bayesian networks for encoding the probabilistic relations among various task-relevant variables, including object and action features as well as task constraints. We evaluate the framework using a grasp database generated in a simulated environment including a human and two robot hand models. The generative modeling approach allows the prediction of grasping tasks given uncertain sensory data, as well as object and grasp selection in a task-oriented manner. Furthermore, the graphical model framework provides insights into dependencies between variables and features relevant for object grasping.
A summary-statistic representation in peripheral vision explains visual crowding.
Peripheral vision provides a less faithful representation of the visual input than foveal vision. Nonetheless, we can gain a lot of information about the world from our peripheral vision, for example in order to plan eye movements. The phenomenon of crowding shows that the reduction of information available in the periphery is not merely the result of reduced resolution. Crowding refers to visual phenomena in which identification of a target stimulus is significantly impaired by the presence of nearby stimuli, or flankers. What information is available in the periphery? We propose that the visual system locally represents peripheral stimuli by the joint statistics of responses of cells sensitive to different position, phase, orientation, and scale. This "textural" representation by summary statistics predicts the subjective "jumble" of features often associated with crowding. We show that the difficulty of performing an identification task within a single pooling region using this representation of the stimuli is correlated with peripheral identification performance under conditions of crowding. Furthermore, for a simple stimulus with no flankers, this representation can be adequate to specify the stimulus with some position invariance. This provides evidence that a unified neuronal mechanism may underlie peripheral vision, ordinary pattern recognition in central vision, and texture perception. A key component of our methodology involves creating visualizations of the information available in the summary statistics of a stimulus. We call these visualizations "mongrels" and show that they are highly useful in examining how the early visual system represents the visual input. Mongrels enable one to study the "equivalence classes" of our model, i.e., the sets of stimuli that map to the same representation according to the model.
Synchronization and Frequency Regulation of DFIG-Based Wind Turbine Generators With Synchronized Control
Synchronized control (SYNC) is widely adopted for doubly fed induction generator (DFIG)-based wind turbine generators (WTGs) in microgrids and weak grids, which applies P-f droop control to achieve grid synchronization instead of phase-locked loop. The DFIG-based WTG with SYNC will reach a new equilibrium of rotor speed under frequency deviation, resulting in the WTG's acceleration or deceleration. The acceleration/deceleration process can utilize the kinetic energy stored in the rotating mass of WTG to provide active power support for the power grid, but the WTG may lose synchronous stability simultaneously. This stability problem occurs when the equilibrium of rotor speed is lost and the rotor speed exceeds the admissible range during the frequency deviations, which will be particularly analyzed in this paper. It is demonstrated that the synchronous stability can be improved by increasing the P-f droop coefficient. However, increasing the P-f droop coefficient will deteriorate the system's small signal stability. To address this contradiction, a modified synchronized control strategy is proposed. Simulation results verify the effectiveness of the analysis and the proposed control strategy.
Is physics-based liveness detection truly possible with a single image?
Face recognition is an increasingly popular method for user authentication. However, face recognition is susceptible to playback attacks. Therefore, a reliable way to detect malicious attacks is crucial to the robustness of the system. We propose and validate a novel physics-based method to detect images recaptured from printed material using only a single image. Micro-textures present in printed paper manifest themselves in the specular component of the image. Features extracted from this component allow a linear SVM classifier to achieve a 2.2% False Acceptance Rate and a 13% False Rejection Rate (6.7% Equal Error Rate). We also show that the classifier generalizes to contrast-enhanced recaptured images and LCD screen recaptured images without re-training, demonstrating the robustness of our approach.
Accurate positioning in ultra-wideband systems
Accurate positioning systems can be realized via ultra-wideband signals due to their high time resolution. In this article, position estimation is studied for UWB systems. After a brief introduction to UWB signals and their positioning applications, two-step positioning systems are investigated from a UWB perspective. It is observed that time-based positioning is well suited for UWB systems. Then time-based UWB ranging is studied in detail, and the main challenges, theoretical limits, and range estimation algorithms are presented. Performance of some practical time-based ranging algorithms is investigated and compared against the maximum likelihood estimator and the theoretical limits. The trade-off between complexity and accuracy is observed.
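The core of time-based ranging is the simple relation between propagation delay and distance; the snippet below illustrates one-way and two-way variants with hypothetical timing values (the high time resolution of UWB pulses is what makes such delay estimates accurate enough in practice).

```python
C = 299_792_458.0  # speed of light in m/s

def one_way_range(time_of_arrival_s, time_of_emission_s):
    # Time-of-arrival ranging: distance equals propagation delay times c,
    # assuming the transmitter and receiver clocks are synchronized.
    return C * (time_of_arrival_s - time_of_emission_s)

def two_way_range(round_trip_s, reply_delay_s):
    # Two-way ranging avoids clock synchronization: subtract the responder's
    # known reply delay and halve the remaining round-trip time.
    return C * (round_trip_s - reply_delay_s) / 2.0

print(one_way_range(10e-9, 0.0))          # 10 ns of delay is roughly 3 m
print(two_way_range(1.02e-6, 1.0e-6))     # 20 ns of round trip is roughly 3 m
```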
Preventing Shoulder-Surfing Attack with the Concept of Concealing the Password Objects' Information
Traditionally, picture-based password systems employ password objects (pictures/icons/symbols) as input during an authentication session, thus making them vulnerable to "shoulder-surfing" attack because the visual interface by function is easily observed by others. Recent software-based approaches attempt to minimize this threat by requiring users to enter their passwords indirectly by performing certain mental tasks to derive the indirect password, thus concealing the user's actual password. However, weaknesses in the positioning of distracter and password objects introduce usability and security issues. In this paper, a new method, which conceals information about the password objects as much as possible, is proposed. Besides concealing the password objects and the number of password objects, the proposed method allows both password and distracter objects to be used as the challenge set's input. The correctly entered password appears to be random and can only be derived with the knowledge of the full set of password objects. Therefore, it would be difficult for a shoulder-surfing adversary to identify the user's actual password. Simulation results indicate that the correct input object and its location are random for each challenge set, thus preventing frequency of occurrence analysis attack. User study results show that the proposed method is able to prevent shoulder-surfing attack.
Integrated ultra-high impedance front-end for non-contact biopotential sensing
Non-contact ECG/EEG electrodes, which operate primarily through capacitive coupling, have been extensively studied for unobtrusive physiological monitoring. Previous implementations using discrete off-the-shelf amplifiers have been encumbered by the need for manually tuned input capacitance neutralization networks and complex DC-biasing schemes. We have designed and fabricated a custom integrated high input impedance (60 fF∥5TΩ), low noise (0.05 fA/√Hz) non-contact sensor front-end specifically to bypass these limitations. The amplifier fully bootstraps both the internal and external parasitic impedances to achieve an input capacitance of 60 fF without neutralization. To ensure DC stability and eliminate the need for external large valued resistances, a low-leakage, on-chip biasing network is included. Stable frequency response is demonstrated below 0.05 Hz even with coupling capacitances as low as 0.5 pF.
Widely Used but also Highly Valued? Acceptance Factors and Their Perceptions in Water-Scrum-Fall Projects
Agile methodologies like Scrum propose drastic changes with respect to team hierarchies, organizational structures, planning or controlling processes. To mitigate the level of change and retain some established processes, many organizations prefer to introduce hybrid agile-traditional methodologies that combine agile with traditional development practices. Despite their importance in practice, only a few studies have examined the acceptance of such methodologies, however. In this paper, we present the results of a qualitative study that was conducted at a Swiss bank. It uses Water-Scrum-Fall, which combines Scrum with traditional practices. Based on the Diffusion of Innovations theory, we discuss several acceptance factors and investigate how they are perceived. The results indicate that, compared to traditional development methodologies, some aspects of Water-Scrum-Fall bring relative advantages and are more compatible to the way developers prefer to work. Yet, there also exist potential acceptance barriers such as a restricted individual autonomy and increased process complexity.
Online Social Networking and Learning: What are the Interesting Research Questions?
This article introduces a youth-initiated practice, online social networking, that is transforming our society in important ways and has vast implications for research concerning online behavior, the social and psychological aspects of online learning, and the institution of education. In this paper, the author introduces the socio-technical features that characterize social networking systems and outlines results from preliminary research that suggests the informal social and intellectual practices in which participants naturally engage, and how these relate to the competencies increasingly valued in formal education. The paper outlines four overlapping categories for research: (1) activities and outcomes; (2) tool, place, and medium; (3) identity and communication; and (4) network analytics and methods. Within these categories the author outlines interesting research questions to pursue in documenting and interpreting the complexity of 'learning' within these spaces. The goals are to catalyze inquiry that bridges informal and formal learning and to stimulate interdisciplinary conversation about where such agendas fit within and advance learning research.
Towards Music Imagery Information Retrieval: Introducing the OpenMIIR Dataset of EEG Recordings from Music Perception and Imagination
Music imagery information retrieval (MIIR) systems may one day be able to recognize a song from only our thoughts. As a step towards such technology, we are presenting a public domain dataset of electroencephalography (EEG) recordings taken during music perception and imagination. We acquired this data during an ongoing study that so far comprises 10 subjects listening to and imagining 12 short music fragments – each 7–16s long – taken from well-known pieces. These stimuli were selected from different genres and systematically vary along musical dimensions such as meter, tempo and the presence of lyrics. This way, various retrieval scenarios can be addressed and the success of classifying based on specific dimensions can be tested. The dataset is aimed to enable music information retrieval researchers interested in these new MIIR challenges to easily test and adapt their existing approaches for music analysis like fingerprinting, beat tracking, or tempo estimation on EEG data.
The Firm ’ s Management of Social Interactions
Consumer choice is influenced in a direct and meaningful way by the actions taken by others. These “actions” range from face-to-face recommendations from a friend to the passive observation of what a stranger is wearing. We refer to the set of such contexts as “social interactions” (SI). We believe that at least some of the SI effects are partially within the firm’s control and that this represents an exciting research opportunity. We present an agenda that identifies a list of unanswered questions of potential interest to both researchers and managers. In order to appreciate the firm’s choices with respect to its management of SI, it is important to first evaluate where we are in terms of understanding the phenomena themselves. We highlight five questions in this regard: (1) What are the antecedents of word of mouth (WOM)? (2) How does the transmission of positive WOM differ from that of negative WOM? (3) How does online WOM differ from offline WOM? (4) What is the impact of WOM? (5) How can we measure WOM? Finally, we identify and discuss four principal, non-mutually exclusive, roles that the firm might play: (1) observer, (2) moderator, (3) mediator, and (4) participant.
Longitudinal screening algorithm that incorporates change over time in CA125 levels identifies ovarian cancer earlier than a single-threshold rule.
PURPOSE Longitudinal algorithms incorporate change over time in biomarker levels to individualize screening decision rules. Compared with a single-threshold (ST) rule, smaller deviations from baseline biomarker levels are required to signal disease. We demonstrated improvement in ovarian cancer early detection by using a longitudinal algorithm to monitor annual CA125 levels. PATIENTS AND METHODS We retrospectively evaluated serial preclinical serum CA125 values measured annually in 44 incident ovarian cancer cases identified from participants in the PLCO (Prostate Lung Colorectal and Ovarian) Cancer Screening Trial to determine how frequently and to what extent the parametric empirical Bayes (PEB) longitudinal screening algorithm identifies ovarian cancer earlier than an ST rule. RESULTS The PEB algorithm detected ovarian cancer earlier than an ST rule in a substantial proportion of cases. At 99% specificity, which corresponded to the ST-rule CA125 cutoff ≥ 35 U/mL that was used in the PLCO trial, 20% of cases were identified earlier by using the PEB algorithm. Among these cases, the PEB signaled abnormal CA125 values, on average, 10 months earlier and at a CA125 concentration 42% lower (20 U/mL) than the ST-rule cutoff. The proportion of cases detected earlier by the PEB algorithm and the earliness of detection increased as the specificity of the screening rule was reduced. CONCLUSION The PEB longitudinal algorithm identifies ovarian cancer earlier and at lower biomarker concentrations than an ST screening algorithm adjusted to the same specificity. Longitudinal biomarker assessment by using the PEB algorithm may have application for screening other solid tumors in which biomarkers are available.
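A highly simplified, hypothetical version of a longitudinal empirical Bayes screening rule is sketched below: the subject's expected marker level is her own history shrunk toward the population mean, and a new value is flagged when it exceeds that personalized expectation by a chosen number of standard deviations. The normal-normal form, the variance parameters, and the example values are illustrative assumptions, not the published PEB specification.

```python
import numpy as np

def peb_flag(history, current, pop_mean, pop_var, within_var, z=2.33):
    # Shrink the subject's historical mean toward the population mean in
    # proportion to how informative the history is (empirical Bayes), then
    # flag the current value if it exceeds the personalized threshold.
    history = np.asarray(history, dtype=float)
    n = len(history)
    shrink = pop_var / (pop_var + within_var / n) if n else 0.0
    post_mean = shrink * history.mean() + (1 - shrink) * pop_mean if n else pop_mean
    post_sd = np.sqrt(within_var + (1 - shrink) * pop_var)
    return current > post_mean + z * post_sd

# A subject with a stable low baseline is flagged at a value well below 35 U/mL.
print(peb_flag(history=[8, 9, 8, 10], current=22, pop_mean=14.0, pop_var=40.0, within_var=9.0))
```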
Practical Visual Localization for Autonomous Driving: Why Not Filter?
A major focus of current research on place recognition is visual localization for autonomous driving. However, while many visual localization algorithms for autonomous driving have achieved impressive results, it seems not all previous works have been set in a realistic setting for the problem, namely using training and testing videos that were collected in a distributed manner from multiple vehicles, all traversing through a road network in an urban area under different environmental conditions (weather, lighting, etc.). More importantly, in this setting, we show that exploiting temporal continuity in the testing sequence significantly improves visual localization qualitatively and quantitatively. Although intuitive, this idea has not been fully explored in recent works. Our main contribution is a novel particle filtering technique that works in conjunction with a visual localization method to achieve accurate city-scale localization that is robust against environmental variations. We provide convincing results on synthetic and real datasets.
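A generic particle-filter step of the kind alluded to (predict with odometry, reweight by agreement with the per-frame visual localization fix, resample when the weights degenerate) might look like the sketch below; the 2-D state, noise levels, and resampling rule are illustrative choices, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, odometry, vis_fix,
                         meas_sigma=5.0, motion_sigma=1.0):
    # Predict: propagate particles with odometry plus noise (temporal continuity).
    particles = particles + odometry + rng.normal(0.0, motion_sigma, particles.shape)
    # Update: reweight by agreement with the per-frame visual localization fix.
    d2 = np.sum((particles - vis_fix) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2.0 * meas_sigma ** 2))
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles = rng.normal(0.0, 10.0, (500, 2))            # 2-D positions
weights = np.full(500, 1.0 / 500)
particles, weights = particle_filter_step(particles, weights,
                                           odometry=np.array([1.0, 0.0]),
                                           vis_fix=np.array([12.0, 3.0]))
print((weights[:, None] * particles).sum(axis=0))       # weighted position estimate
```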
Automatic Semantic Face Recognition
Recent expansion in surveillance systems has motivated research in soft biometrics that enable the unconstrained recognition of human faces. Comparative soft biometrics offer superior recognition performance compared with categorical soft biometrics and have been the focus of several studies which have highlighted their ability for recognition and retrieval in constrained and unconstrained environments. These studies, however, only addressed face recognition for retrieval using human-generated attributes, posing a question about the feasibility of automatically generating comparative labels from facial images. In this paper, we propose an approach for the automatic comparative labelling of facial soft biometrics. Furthermore, we investigate unconstrained human face recognition using these comparative soft biometrics in a human-labelled gallery (and vice versa). Using a subset of the LFW dataset, our experiments show the efficacy of the automatic generation of comparative facial labels, highlighting the potential extensibility of the approach to other face recognition scenarios and larger ranges of attributes.
Session-based item recommendation in e-commerce: on short-term intents, reminders, trends and discounts
Many e-commerce sites present additional item recommendations to their visitors while they navigate the site, and ample evidence exists that such recommendations are valuable for both customers and providers. Academic research often focuses on the capability of recommender systems to help users discover items they presumably do not know yet and which match their long-term preference profiles. In reality, however, recommendations can be helpful for customers also for other reasons, for example, when they remind them of items they were recently interested in or when they point site visitors to items that are currently discounted. In this work, we first adopt a systematic statistical approach to analyze what makes recommendations effective in practice and then propose ways of operationalizing these insights into novel recommendation algorithms. Our data analysis is based on log data of a large e-commerce site. It shows that various factors should be considered in parallel when selecting items for recommendation, including their match with the customer’s shopping interests in the previous sessions, the general popularity of the items in the last few days, as well as information about discounts. Based on these analyses we propose a novel algorithm that combines a neighborhood-based scheme with a deep neural network to predict the relevance of items for a given shopping session.
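A toy sketch of blending a session-based neighborhood score with a learned relevance score and a discount signal, echoing the factors named above, is shown below; the scoring functions and weights are illustrative assumptions rather than the proposed algorithm.

```python
import numpy as np

def session_knn_scores(session_items, past_sessions, item_count):
    # Score items by co-occurrence with the current session in past sessions
    # (a simple session-based nearest-neighbour flavour of recommendation).
    scores = np.zeros(item_count)
    cur = set(session_items)
    for past in past_sessions:
        overlap = len(cur & set(past))
        if overlap:
            for item in past:
                if item not in cur:
                    scores[item] += overlap / len(past)
    return scores

def combine(knn_scores, dnn_scores, discount_flags, w=(0.5, 0.4, 0.1)):
    # Blend neighborhood relevance, learned relevance, and a discount signal.
    return w[0] * knn_scores + w[1] * dnn_scores + w[2] * discount_flags

past_sessions = [[0, 1, 2], [1, 2, 3], [2, 4]]
knn = session_knn_scores([1, 2], past_sessions, item_count=5)
dnn = np.array([0.1, 0.3, 0.2, 0.9, 0.4])        # hypothetical network outputs
discounts = np.array([0.0, 0.0, 0.0, 1.0, 0.0])  # item 3 is currently discounted
print(combine(knn, dnn, discounts).argsort()[::-1][:3])   # top-3 recommendations
```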
Evaluation of the performance of a novel system for continuous glucose monitoring.
BACKGROUND The performance of a continuous glucose monitoring (CGM) system in the early stage of development was assessed in an inpatient setting that simulates daily life conditions of people with diabetes. Performance was evaluated at low glycemic, euglycemic, and high glycemic ranges as well as during phases with rapid glucose excursions. METHODS Each of the 30 participants with type 1 diabetes (15 female, age 47 ± 12 years, hemoglobin A1c 7.7% ± 1.3%) wore two sensors of the prototype system in parallel for 7 days. Capillary blood samples were measured at least 16 times per day (at least 15 times per daytime and at least once per night). On two subsequent study days, glucose excursions were induced. For performance evaluation, the mean absolute relative difference (MARD) between CGM readings and paired capillary blood glucose readings and precision absolute relative difference (PARD), i.e., differences between paired CGM readings were calculated. RESULTS Overall aggregated MARD was 9.2% and overall aggregated PARD was 7.5%. During induced glucose excursions, MARD was 10.9% and PARD was 7.8%. Lowest MARD (8.5%) and lowest PARD (6.4%) were observed in the high glycemic range (euglycemic range, MARD 9.1% and PARD 7.4%; low glycemic range, MARD 12.3% and PARD 12.4%). CONCLUSIONS The performance of this prototype CGM system was, particularly in the hypoglycemic range and during phases with rapid glucose fluctuations, better than performance data reported for other commercially available systems. In addition, performance of this prototype sensor was noticeably constant over the whole study period. This prototype system is not yet approved, and performance of this CGM system needs to be further assessed in clinical studies.
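MARD and PARD are simple aggregate error measures; the snippet below shows one way to compute them. Note that the abstract does not state the PARD denominator; the mean of each paired sensor reading is assumed here.

```python
import numpy as np

def mard(cgm, reference):
    # Mean absolute relative difference between CGM readings and paired
    # capillary blood glucose values, in percent.
    cgm, reference = np.asarray(cgm, float), np.asarray(reference, float)
    return 100.0 * np.mean(np.abs(cgm - reference) / reference)

def pard(sensor_a, sensor_b):
    # Precision absolute relative difference between the two sensors worn in
    # parallel, relative to the mean of each paired reading (assumed).
    a, b = np.asarray(sensor_a, float), np.asarray(sensor_b, float)
    return 100.0 * np.mean(np.abs(a - b) / ((a + b) / 2.0))

print(mard([110, 145, 78], [120, 140, 85]))     # roughly 6.7%
print(pard([110, 145, 78], [112, 150, 80]))     # roughly 2.6%
```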
Dynamic question generation system for web-based testing using particle swarm optimization
One aim of testing is to identify weaknesses in students' knowledge. Computerized tests are now one of the most important ways to judge learning, and selecting tailored questions for each learner is a significant part of such tests. Therefore, one current trend is that computerized adaptive tests (CATs) not only assist teachers in estimating the learning performance of students, but also facilitate understanding of problems in their learning process. These tests must effectively and efficiently select questions from a large-scale item bank, and to cope with this problem we propose a dynamic question generation system for Web-based tests using the novel approach of particle swarm optimization (PSO). The dynamic question generation system is built to select tailored questions for each learner from the item bank to satisfy multiple assessment requirements. Furthermore, the proposed approach is able to efficiently generate near-optimal questions that satisfy multiple assessment criteria. With a series of experiments, we compare the efficiency and efficacy of the PSO approach with other approaches. The experimental results show that the PSO approach is suitable for the selection of near-optimal questions from large-scale item banks.
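A minimal binary-PSO sketch for item selection is shown below; the fitness function encodes just two illustrative assessment criteria (target mean difficulty and desired test length), and all constants are placeholders rather than the system's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(mask, difficulty, target_diff, n_wanted):
    # Penalize deviation from the target mean difficulty and from the
    # desired number of selected items (two example assessment criteria).
    sel = mask.astype(bool)
    if not sel.any():
        return 1e9
    return abs(difficulty[sel].mean() - target_diff) + 0.1 * abs(sel.sum() - n_wanted)

def pso_select(difficulty, target_diff=0.5, n_wanted=20, n_particles=30, iters=100):
    n = len(difficulty)
    x = (rng.random((n_particles, n)) > 0.5).astype(float)   # binary positions
    v = rng.normal(0.0, 1.0, (n_particles, n))               # velocities
    pbest = x.copy()
    pbest_f = np.array([fitness(p, difficulty, target_diff, n_wanted) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = (rng.random((n_particles, n)) < 1.0 / (1.0 + np.exp(-v))).astype(float)
        f = np.array([fitness(p, difficulty, target_diff, n_wanted) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest.astype(bool)

item_difficulties = rng.random(200)   # hypothetical item bank difficulties in [0, 1]
print(pso_select(item_difficulties).sum(), "questions selected")
```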
Metabolic signals and innate immune activation in obesity and exercise.
The combination of a sedentary lifestyle and excess energy intake has led to an increased prevalence of obesity which constitutes a major risk factor for several co-morbidities including type 2 diabetes and cardiovascular diseases. Intensive research during the last two decades has revealed that a characteristic feature of obesity linking it to insulin resistance is the presence of chronic low-grade inflammation being indicative of activation of the innate immune system. Recent evidence suggests that activation of the innate immune system in the course of obesity is mediated by metabolic signals, such as free fatty acids (FFAs), being elevated in many obese subjects, through activation of pattern recognition receptors thereby leading to stimulation of critical inflammatory signaling cascades, like IκBα kinase/nuclear factor-κB (IKK/NF- κB), endoplasmic reticulum (ER) stress-induced unfolded protein response (UPR) and NOD-like receptor P3 (NLRP3) inflammasome pathway, that interfere with insulin signaling. Exercise is one of the main prescribed interventions in obesity management improving insulin sensitivity and reducing obesity- induced chronic inflammation. This review summarizes current knowledge of the cellular recognition mechanisms for FFAs, the inflammatory signaling pathways triggered by excess FFAs in obesity and the counteractive effects of both acute and chronic exercise on obesity-induced activation of inflammatory signaling pathways. A deeper understanding of the effects of exercise on inflammatory signaling pathways in obesity is useful to optimize preventive and therapeutic strategies to combat the increasing incidence of obesity and its comorbidities.
Competing with Humans at Fantasy Football: Team Formation in Large Partially-Observable Domains
We present the first real-world benchmark for sequentially optimal team formation, working within the framework of a class of online football prediction games known as Fantasy Football. We model the problem as a Bayesian reinforcement learning one, where the action space is exponential in the number of players and where the decision maker's beliefs are over multiple characteristics of each footballer. We then exploit domain knowledge to construct computationally tractable solution techniques in order to build a competitive automated Fantasy Football manager. Thus, we are able to establish the baseline performance in this domain, even without complete information on footballers' performances (accessible to human managers), showing that our agent is able to rank at around the top percentile when pitched against 2.5M
Understanding Personality through Social Media
In this paper, we study the relationship between language use on Twitter and personality traits. Specifically, we want to know how various linguistic features correlate with each personality trait and to what extent we can predict personality traits from language. We gather personality data from the Myers-Briggs Type Indicator (MBTI) personality test, which covers thinking, feeling, sensation, intuition, introversion, extroversion, judging and perceiving. Using the 90K users in our dataset, we collect their most recent tweets and design three categories of features, namely bag of n-grams, Twitter POS tags, and word vectors, to explore the linguistic features most related to the different personality traits. Analysis of these features provides insights into language use for different personalities. For instance, extroverts tend to use hashtags and phrases like "so proud", "so excited", and "can't wait". People who like to use emoticons are more likely to be of the Sensing and Feeling personality types. Moreover, we investigate the predictive power of individual features and combined features in our analysis. With the concatenation of all the features we extracted, we can predict the personality traits with an average AUC of 0.661.
Application of Higher Order Spectra for the Identification of Diabetes Retinopathy Stages
Diabetic retinopathy (DR) is a condition where the retina is damaged due to fluid leaking from the blood vessels into the retina. In extreme cases, the patient will become blind. Therefore, early detection of diabetic retinopathy is crucial to prevent blindness. Various image processing techniques have been used to identify the different stages of diabetic retinopathy. The application of non-linear features of the higher-order spectra (HOS) was found to be efficient as it is more suitable for the detection of shapes. The aim of this work is to automatically identify normal, mild DR, moderate DR, severe DR and proliferative DR classes. The parameters are extracted from the raw images using HOS techniques and fed to a support vector machine (SVM) classifier. This paper presents the classification of these five eye classes using the SVM classifier. Our protocol uses 300 subjects covering the five different eye disease conditions. We demonstrate a sensitivity of 82% for the classifier with a specificity of 88%.
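The classification stage can be sketched as a standard SVM pipeline; the HOS feature extraction itself is non-trivial and is replaced here by placeholder feature vectors, so the snippet only illustrates the classifier setup and cross-validation, not the paper's feature computation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# X: one row per fundus image, columns would be HOS-derived features
# (placeholder random values here); y: 0..4 for the five DR classes.
X = rng.normal(size=(300, 8))
y = rng.integers(0, 5, size=300)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print(cross_val_score(clf, X, y, cv=5).mean())   # chance-level on random features
```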
Outlooks and Insights on Group Decision and Negotiation
Community governance needs small group discussions among community members to identify their concerns, to share them with each other, and to generate better alternatives for solving problems. A planner should manage the discussion to achieve these objectives. This study analyzed small group discussions in community disaster risk management by using text mining. Correspondence analysis was applied to the text data of the discussions. The analytical results revealed the characteristics and effects of small group discussion.
Fine-Grained Sentiment Analysis on Financial Microblogs and News Headlines
Sentiment analysis in the financial domain is quickly becoming a prominent research topic as it provides a powerful method to predict market dynamics. In this work, we leverage advances in Semantic Web area to develop a fine-grained approach to predict real-valued sentiment scores. We compare several classifiers trained on two different datasets. The first dataset consists of microblog messages focusing on stock market events, while the second one consists of financially relevant news headlines crawled from different sources on the Internet. We test our approach using several feature sets including lexical features, semantic features and a combination of lexical and semantic features. Experimental results show that the proposed approach allows achieving an accuracy level of more than 72%.
AGEs/sRAGE, a novel risk factor in the pathogenesis of end-stage renal disease
Interaction of advanced glycation end products (AGEs) with their cell-bound receptor (RAGE) results in cell dysfunction through activation of nuclear factor kappa-B, increased expression and release of inflammatory cytokines, and generation of oxygen radicals. Circulating soluble receptors, the soluble receptor (sRAGE), endogenous secretory receptor (esRAGE) and cleaved receptor (cRAGE), act as decoys for RAGE ligands and thus have cytoprotective effects. Low levels of sRAGE and esRAGE have been proposed as biomarkers for many diseases. However, sRAGE and esRAGE levels are elevated in diabetes and chronic renal diseases and tissue injury still occurs. It is possible that increases in levels of AGEs are greater than increases in the levels of soluble receptors in these two diseases. New parameters are needed that could serve as universal biomarkers for cell dysfunction. It is hypothesized that increases in serum levels of AGEs are greater than the increases in the soluble receptors, that the levels of AGEs are correlated with the soluble receptors, and that the ratios of AGEs/sRAGE, AGEs/esRAGE and AGEs/cRAGE are elevated in patients with end-stage renal disease (ESRD) and would serve as universal risk markers for ESRD. The study population comprised 88 patients with ESRD and 20 healthy controls. AGEs, sRAGE and esRAGE were measured using commercially available enzyme-linked immunoassay kits. cRAGE was calculated by subtracting esRAGE from sRAGE. The data show that the serum levels of AGEs, sRAGE and cRAGE are elevated and that the elevation of AGEs was greater than those of the soluble receptors. The ratios of AGEs/sRAGE, AGEs/esRAGE and AGEs/cRAGE were elevated, and the elevation was similar for AGEs/sRAGE and AGEs/cRAGE but greater than for AGEs/esRAGE. The sensitivity, specificity, accuracy, and positive and negative predictive values of AGEs/sRAGE and AGEs/cRAGE were 86.36 and 84.88%, 86.36 and 80.95%, 0.98 and 0.905, 96.2 and 94.8%, and 61.29 and 56.67%, respectively. There was a positive correlation of sRAGE with esRAGE and cRAGE, and of AGEs with esRAGE, and a negative correlation between sRAGE and AGEs/sRAGE, esRAGE and AGEs/esRAGE, and cRAGE and AGEs/cRAGE. In conclusion, AGEs/sRAGE, AGEs/cRAGE and AGEs/esRAGE may serve as universal risk biomarkers for ESRD, and AGEs/sRAGE and AGEs/cRAGE are better risk biomarkers than AGEs/esRAGE.
IT Governance Practices For Improving Strategic And Operational Business-IT Alignment
The literature suggests that business-IT alignment is an important antecedent of IS success, business process performance, and competitive advantage. Additionally, IT governance practices are highlighted as being instrumental to fostering business-IT alignment. In this paper, we derive various IT governance practices (in terms of structures, processes, relational mechanisms, and enterprise architecture characteristics) from literature and expert interviews. While prior investigations only considered the effect of such practices on strategic business-IT alignment, we also incorporate alignment at operational level. Using results from a case study in the IT services division of a large multi-national, multi-divisional company acting in diverse industries we highlight the effect of various IT governance practices and offer new insights by showing which mechanisms are effective in facilitating strategic or operational business-IT alignment. Our results indicate the most important practices for both strategic and operational alignment.
The effects of locomotor training with a robotic-gait orthosis (Lokomat) on neuromuscular properties in persons with chronic SCI
We studied the effects of robotic-assisted locomotor (LOKOMAT) training on the neuromuscular abnormality associated with spasticity in persons with incomplete Spinal Cord Injury (SCI). LOKOMAT training was performed 3 days/week for 4 weeks, with up to 45 minutes of training per session. Subjects were evaluated before and after 1, 2, and 4 weeks of training, and the effects of training on the intrinsic (muscular) and reflexive components of the neuromuscular properties were quantified over the ankle range of motion. A linear (slope and intercept) regression was fit to the stiffness-angle curve. "Growth mixture" modeling was used to identify recovery classes for these parameters over the training period. Two distinct classes were observed. Class 1 subjects initially had higher reflex stiffness parameters (i.e., intercept and slope vs. ankle position), which reduced significantly over the training period. Class 2 subjects initially had lower reflex stiffness parameters and experienced non-significant reductions. Similar results were observed for the intrinsic stiffness intercept; however, the intrinsic slope showed no significant improvement over training for either class. These findings demonstrate that LOKOMAT training is effective in reducing reflex and intrinsic stiffness (which abnormally increase in SCI) and improving the abnormal modulation of reflexes over the ankle range of motion.
Forward and inverse kinematics model for robotic welding process using KR-16KS KUKA robot
This paper models the forward and inverse kinematics of a KUKA KR-16KS robotic arm for a simple welding application. A welding task, joining a block onto a metal sheet, is carried out to investigate the forward and inverse kinematics models of the KR-16KS. A movement flow plan is designed and implemented in the KR-16KS program. Eleven movement points are studied for the forward kinematics modeling, and a summary of the calculations is presented. A general Denavit-Hartenberg (D-H) representation of the forward and inverse transformation matrices is obtained, which can be used for each movement of the welding operation on the KUKA KR-16KS robotic arm. Both the forward and the inverse kinematics of the KUKA KR-16KS are thus successfully modeled for a simple welding task.
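As a rough illustration of the D-H modeling the abstract refers to, the sketch below composes standard Denavit-Hartenberg transforms into a forward-kinematics pose; the parameter table and joint angles are hypothetical placeholders, not the actual KR-16KS values.

```python
# Sketch: forward kinematics via standard Denavit-Hartenberg (D-H) transforms.
# The D-H parameters below are illustrative placeholders, not KR-16KS values.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint using the standard D-H convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Compose the per-joint transforms to get the end-effector pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical 6-DOF parameter table: (d, a, alpha) per joint, in metres/radians.
dh_table = [(0.675, 0.260, -np.pi/2), (0.0, 0.680, 0.0), (0.0, -0.035, np.pi/2),
            (0.670, 0.0, -np.pi/2), (0.0, 0.0, np.pi/2), (0.158, 0.0, 0.0)]
pose = forward_kinematics([0.0, -np.pi/2, np.pi/2, 0.0, np.pi/6, 0.0], dh_table)
print(pose[:3, 3])  # end-effector (welding tool tip) position
```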
Designing content-centric multi-hop networking over Wi-Fi Direct on smartphones
Peer-to-peer and ad-hoc networking allow content dissemination and sharing among mobile devices without depending on any infrastructure. However, mobile devices still encounter content-sharing issues in highly dense networks. We therefore develop a new architecture that creates a self-organizing ad-hoc network with content-centric networking over Wi-Fi Direct on smartphones. It enables nodes to build multiple groups using different channels, providing Wi-Fi Direct based multi-hop communication. The proposed architecture achieves not only power efficiency for energy-constrained devices but also content-sharing efficiency derived from content-centric networking properties. Our simulation study using NS-3 with CCNx shows significantly improved performance for the proposed solution.
Multi-electrode array technologies for neuroscience and cardiology.
At present, the prime methodology for studying neuronal circuit connectivity, physiology and pathology under in vitro or in vivo conditions is the use of substrate-integrated microelectrode arrays. Although this methodology permits simultaneous, cell-non-invasive, long-term recordings of extracellular field potentials generated by action potentials, it is 'blind' to the subthreshold synaptic potentials generated by single cells. On the other hand, intracellular recordings of the full electrophysiological repertoire (subthreshold synaptic potentials, membrane oscillations and action potentials) are, at present, obtained only with sharp or patch microelectrodes, which are limited to single cells at a time and to short durations. Recently, a number of laboratories have begun to merge the advantages of extracellular microelectrode arrays and intracellular microelectrodes. This Review describes these novel approaches and identifies their strengths and limitations from the point of view of the end users, with the intention of helping to steer bioengineering efforts towards the needs of brain-circuit research.
A Combinatorial Noise Model for Quantum Computer Simulation
Quantum computers (QCs) have many potential hardware implementations ranging from solid-state silicon-based structures to electron-spin qubits on liquid helium. However, all QCs must contend with gate infidelity and qubit state decoherence over time. Quantum error correcting codes (QECCs) have been developed to protect program qubit states from such noise. Previously, Monte Carlo noise simulators have been developed to model the effectiveness of QECCs in combating decoherence. The downside to this random sampling approach is that it may take days or weeks to produce enough samples for an accurate measurement. We present an alternative noise modeling approach that performs combinatorial analysis rather than random sampling. This model tracks the progression of the most likely error states of the quantum program through its course of execution. This approach has the potential for enormous speedups versus the previous Monte Carlo methodology. We have found speedups with the combinatorial model on the order of 100X-1,000X over the Monte Carlo approach when analyzing applications utilizing the [[7,1,3]] QECC. The combinatorial noise model has significant memory requirements, and we analyze its scaling properties relative to the size of the quantum program. Due to its speedup, this noise model is a valuable alternative to traditional Monte Carlo simulation.
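To make the contrast with Monte Carlo sampling concrete, the toy sketch below propagates only the most probable error states of a gate sequence and truncates the rest; the single-parameter error model and the truncation size are illustrative assumptions, not the paper's exact model.

```python
# Sketch: combinatorial (non-sampling) error tracking for a gate sequence.
# Instead of Monte Carlo sampling, keep only the K most probable error states and
# update their probabilities gate by gate. The error model is a toy one-parameter
# per-gate failure model, not the paper's exact noise model.

def propagate_errors(circuit, p_error, num_qubits, keep=8):
    """circuit: list of target-qubit indices, one entry per single-qubit gate."""
    states = {tuple([0] * num_qubits): 1.0}   # error flags per qubit, prob of each state
    for qubit in circuit:
        new_states = {}
        for state, prob in states.items():
            ok = state                                            # gate succeeds
            bad = tuple(f ^ (1 if i == qubit else 0)              # gate flips error flag
                        for i, f in enumerate(state))
            new_states[ok] = new_states.get(ok, 0.0) + prob * (1 - p_error)
            new_states[bad] = new_states.get(bad, 0.0) + prob * p_error
        # combinatorial truncation: keep only the most likely error states
        states = dict(sorted(new_states.items(), key=lambda kv: -kv[1])[:keep])
    return states

print(propagate_errors(circuit=[0, 1, 0, 1, 0], p_error=0.01, num_qubits=2, keep=8))
```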
Singing speaker clustering based on subspace learning in the GMM mean supervector space
In this study, we propose algorithms based on subspace learning in the GMM mean supervector space to improve the performance of speaker clustering with speech from both reading and singing. As a speaking style, singing introduces changes in the time-frequency structure of a speaker's voice. The purpose of this study is to introduce advancements for speech systems such as speech indexing and retrieval that improve robustness to intrinsic variations in speech production. Speaker clustering techniques such as k-means and hierarchical clustering are explored for analysis of acoustic space differences in a corpus consisting of read and sung lyrics from each speaker. Furthermore, a distance based on fuzzy c-means membership degrees is proposed to more accurately measure clustering difficulty or speaker confusability. Two categories of subspace learning methods are studied: unsupervised, based on LPP, and supervised, based on PLDA. Our proposed PLDA-based clustering method is a two-stage algorithm: first, initial clusters are obtained using full-dimension supervectors; next, each cluster is refined in a PLDA subspace, resulting in a more speaker-dependent representation that is less sensitive to speaking style. It is shown that LPP improves average clustering accuracy by 5.1% absolute versus a hierarchical baseline for a mixture of reading and singing, and that PLDA-based clustering increases accuracy by 9.6% absolute versus a k-means baseline. These advancements offer novel techniques to improve model formulation for speech applications including speaker ID, audio search, and audio content analysis.
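A minimal sketch of the two-stage idea (cluster full-dimension supervectors, then refine in a learned subspace) is given below; since PLDA is not available in scikit-learn, PCA is used here purely as a stand-in subspace, and the supervectors are random placeholders rather than the reading/singing corpus.

```python
# Sketch of the two-stage idea: cluster full-dimension GMM mean supervectors first,
# then re-cluster in a lower-dimensional subspace. PCA stands in for PLDA here, and
# the data are random placeholders, not the reading/singing corpus.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
supervectors = rng.normal(size=(200, 512))   # one GMM mean supervector per utterance
num_speakers = 10

# Stage 1: initial clusters on the full-dimension supervectors.
initial = KMeans(n_clusters=num_speakers, n_init=10, random_state=0).fit(supervectors)

# Stage 2: project into a lower-dimensional subspace (PCA as a stand-in for PLDA)
# and re-cluster there, seeding from the projected stage-1 centroids.
pca = PCA(n_components=50).fit(supervectors)
projected = pca.transform(supervectors)
init_centroids = pca.transform(initial.cluster_centers_)
refined = KMeans(n_clusters=num_speakers, init=init_centroids, n_init=1).fit(projected)

print("stage-1 vs stage-2 label agreement:",
      np.mean(initial.labels_ == refined.labels_))
```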
Geographic distribution and origin of CFTR mutations in Germany
The geographic distribution and origin of CFTR mutations in Germany were evaluated in 658 three-generation families with cystic fibrosis (CF). Fifty different mutations were detected on 1305 parental CF chromosomes from 22 European countries and overseas. The major mutation ΔF508 was identified on 71.5% of all CF chromosomes, followed by R553X (1.8%), N1303K (1.3%), G542X (1.1%), G551D (0.8%) and R347P (0.8%). According to the grandparents' birthplace, 74% of CF chromosomes had their origin in Germany; the ΔF508 percentage was 77%, 75%, 70% and 62% in northern, southern, western and eastern Germany, respectively. Ten or more mutant alleles in the investigated CF gene pool originated from Austria, the Czech Republic, Poland, Russia, Turkey and the Ukraine. This widespread geographic origin of CFTR mutations in today's Germany reflects the many demographic changes and migrations in Central Europe during the 20th century.
Early predictors of high school mathematics achievement.
Identifying the types of mathematics content knowledge that are most predictive of students' long-term learning is essential for improving both theories of mathematical development and mathematics education. To identify these types of knowledge, we examined long-term predictors of high school students' knowledge of algebra and overall mathematics achievement. Analyses of large, nationally representative, longitudinal data sets from the United States and the United Kingdom revealed that elementary school students' knowledge of fractions and of division uniquely predicts those students' knowledge of algebra and overall mathematics achievement in high school, 5 or 6 years later, even after statistically controlling for other types of mathematical knowledge, general intellectual ability, working memory, and family income and education. Implications of these findings for understanding and improving mathematics learning are discussed.
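The analytic idea, predicting later achievement from early fraction and division knowledge while statistically controlling for other covariates, can be sketched as an ordinary regression; the simulated data and coefficients below are placeholders, not the national longitudinal data sets.

```python
# Sketch: regressing high-school math achievement on early fraction and division
# knowledge while controlling for other covariates. Data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "fractions": rng.normal(size=n),       # elementary-school fraction knowledge
    "division": rng.normal(size=n),        # elementary-school division knowledge
    "iq": rng.normal(size=n),              # general intellectual ability
    "working_memory": rng.normal(size=n),
    "family_income": rng.normal(size=n),
})
# Simulated outcome with an assumed dependence on the early predictors.
df["hs_algebra"] = (0.4 * df["fractions"] + 0.3 * df["division"]
                    + 0.2 * df["iq"] + rng.normal(scale=0.5, size=n))

model = smf.ols("hs_algebra ~ fractions + division + iq + working_memory + family_income",
                data=df).fit()
print(model.summary().tables[1])   # unique contribution of each predictor
```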
EXTENDING THE UTAUT MODEL IN M-BANKING ADOPTION
Banks are motivated to leverage their m-banking activities given the advancement of mobile technologies and the high mobile penetration rate. Despite the availability of m-banking services, many consumers still hold a 'wait-and-see' attitude, which has resulted in an unsatisfactory adoption rate. Hence, this paper aims to extend the Unified Theory of Acceptance and Use of Technology (UTAUT) model to better understand what drives Malaysian consumers to adopt m-banking services. The conceptual model also provides strategic insights for commercial banks, service developers, mobile manufacturers, and others to yield higher acceptance of m-banking.
A design for an electronically-steerable holographic antenna with polarization control
We present a design for an electronically-steerable holographic antenna with polarization control, composed of a radial array of Ku-band, electronically-steerable, surface-wave waveguide (SWG) artificial-impedance-surface antennas (AISAs). The antenna operates by launching surface waves into each of the SWGs via a central feed network. The surface-wave impedance is electronically controlled with varactor-tuned impedance patches and is adjusted to scan the antenna in elevation, azimuth and polarization. The radial symmetry allows for 360° azimuthal steering. If constructed with previously-demonstrated SWG AISAs, the antenna is capable of scanning in elevation from -75° to 75° with a gain variation of less than 3 dB. The polarization can be switched among V-Pol, H-Pol, LHCP and RHCP at will.
Multilevel SOT-MRAM cell with a novel sensing scheme for high-density memory applications
This paper presents a multilevel spin-orbit torque magnetic random access memory (SOT-MRAM). Conventional SOT-MRAM cells enable a reliable and energy-efficient write operation; however, they require two access transistors per cell, so the efficiency of SOT-MRAM in high-density memory applications can be questioned. To deal with this obstacle, we propose a multilevel cell that stores two bits per memory cell, together with a novel sensing scheme to read out the stored data. Our simulation results show that the proposed cell achieves a 3X more energy-efficient write operation in comparison with conventional STT-MRAMs. In addition, the proposed cell stores two bits without any area penalty in comparison to conventional one-bit SOT-MRAM cells.
A challenge of 45 nm extreme low-k chip using Cu pillar bump as 1st interconnection
In this study, Cu pillar bumps are first built on an FCCSP with a 65 nm low-k chip. Seven DOE cells are designed to evaluate the effects of Cu pillar height, Cu pillar diameter, PI opening size and PI material on package reliability. No obvious failure is found after package assembly and long-term reliability testing; the packages remain in good shape even when the reliability test is extended to 3x the standard test duration. With this experience, Cu pillar bumps are then built on an FCBGA package with a 45 nm extreme low-k (ELK) chip. White bump defects are found after chip bonding via CSAM inspection, and failure analysis shows that the white bump phenomenon is caused by cracks inside the ELK layer. A local heating bond tool (thermal compression bonding) is used to mitigate ELK cracking; test results show that ELK cracks still occur, but the failure rate drops from the original 30%~50% to 5%~20%. Simulation analysis is conducted to study the effect of PI opening size and UBM size on stress concentration in the ELK layer. A small PI opening size can reduce the stress in the ELK layer; conversely, a relatively large PI opening size combined with a large UBM size also shows a positive effect on ELK cracking. Assembly and reliability tests are conducted again to validate the simulation, and the experimental data are consistent with the simulation results.
Intranasal administration of oxytocin in postnatal depression: implications for psychodynamic psychotherapy from a randomized double-blind pilot study
Oxytocin is a neuropeptide that is active in the central nervous system and is generally considered to be involved in prosocial behaviors and feelings. In light of its documented positive effect on maternal behavior, we designed a study to ascertain whether oxytocin exerts any therapeutic effect on depressive symptoms in women affected by maternal postnatal depression. A group of 16 mothers were recruited into a randomized double-blind study: the women agreed to take part in a brief course of psychoanalytic psychotherapy (12 sessions, once a week) while also being administered, during the 12-week period, a daily dose of intranasal oxytocin (or a placebo). The pre-treatment evaluation also included a personality assessment of the major primary-process emotional command systems described by Panksepp, and a semi-quantitative assessment by the therapist of the mother's depressive symptoms and personality. No significant effect on depressive symptomatology was found following the administration of oxytocin (as compared to placebo) during the period of psychotherapy. Nevertheless, a personality-trait evaluation of the mothers, conducted in the overall sample, showed a decrease in the narcissistic trait only within the group who took oxytocin. The depressive (dysphoric) trait was in fact significantly affected by psychotherapy, but only in the placebo group, which may reflect a positive placebo effect enhancing the favorable influence of psychotherapy on depressive symptoms; this effect was absent in the presence of oxytocin. The neuropeptide therefore appears to play some role in modulating the cerebral functions involved in the self-centered (narcissistic) dimension of the suffering that can occur with postnatal depression. These results support our hypothesis that what is generally defined as postnatal depression may include disturbances of narcissistic affective balance, and that oxytocin supplementation can counteract that type of affective disturbance. The resulting improvement in well-being, reflected in better self-centering in postpartum mothers, may in turn facilitate better interpersonal acceptance of (and interaction with) the child and thereby improved recognition of the child's needs.
Low-RCS and Polarization-Reconfigurable Antenna Using Cross-Slot-Based Metasurface
This letter presents a polarization-reconfigurable compact slot antenna with reduced radar cross section (RCS) using an asymmetric cross-shaped metasurface (MS). The proposed MS can reconfigure the polarization of the slot antenna among right-hand circular polarization (RHCP), left-hand circular polarization (LHCP), and linear polarization (LP) by rotating it with respect to the center of the slot antenna. In addition, the MS significantly reduces the RCS of the slot antenna in all polarization states. The cross-slot MS is placed directly over the planar slot antenna without any air gap. A simulated monostatic RCS of -19.5 dBsm is observed at 4.4 GHz for the LHCP and RHCP cases and -17.0 dBsm for the LP mode of operation. The antenna performance in terms of input matching, far-field parameters, monostatic RCS, and axial ratio is measured in its three polarization states, and the results agree with simulations.