Can you fool AI with adversarial examples on a visual Turing test?
Deep learning has achieved impressive results in many areas of Computer Vision and Natural Language Processing. Among others, Visual Question Answering (VQA), also referred to as a visual Turing test, is considered one of the most compelling problems, and recent deep learning models have reported significant progress in vision and language modeling. Although Artificial Intelligence (AI) is getting closer to passing the visual Turing test, the existence of adversarial examples for deep learning systems may hinder the practical application of such systems. In this work, we conduct the first extensive study of adversarial examples for VQA systems. In particular, we focus on generating targeted adversarial examples for a VQA system, where the target is a question-answer pair. Our evaluation shows that the success rate of generating a targeted adversarial example depends mostly on the choice of the target question-answer pair, and less on the choice of the images to which the question refers. We also report a language-prior phenomenon in VQA models, which can explain why targeted adversarial examples are hard to generate for some question-answer targets. We further demonstrate that a compositional VQA architecture is slightly more resilient to adversarial attacks than a non-compositional one. Our study sheds new light on how to build deep vision and language models that are robust against adversarial examples.
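A minimal sketch of how such a targeted attack can be mounted, assuming a differentiable PyTorch-style VQA model `vqa_model(image, question)` that returns answer logits and a tensor `target` of answer-class indices. This is a generic PGD-style illustration under those assumptions, not the paper's exact attack.

```python
import torch
import torch.nn.functional as F

def targeted_attack(vqa_model, image, question, target, eps=8/255, alpha=1/255, steps=100):
    """Perturb `image` within an L_inf ball so the model answers `target` to `question`."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(vqa_model(x_adv, question), target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                            # lower loss => target answer more likely
            x_adv = torch.max(torch.min(x_adv, image + eps), image - eps)  # project back into eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                                  # keep pixels valid
    return x_adv.detach()
```

Because the perturbation stays within an L-infinity ball of radius eps, the adversarial image remains visually close to the original while the model's answer flips to the target.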
Jane Austen and Genre: Mansfield Park, Northanger Abbey, and the Triumph of the Realistic Novel
This paper analyzes Jane Austen’s Mansfield Park and Northanger Abbey in terms of genre. In particular, it examines the theatrical in Mansfield Park and the Gothic in Northanger Abbey. The production of Elizabeth Inchbald’s Lovers’ Vows and Catherine’s Gothic novel reading are key to the analysis of these genres. However, the use of subgenres goes far beyond the Bertrams’ production and Catherine’s books. Rather, the characters themselves adopt theatrical and Gothic characteristics throughout the novels. Furthermore, when these subgenres appear, they are presented in a manner that is harmful to the main characters. In this sense, Austen invokes the theatrical and the Gothic in order to underplay them, and in doing so, she validates the emerging realistic novel.
Formation Mechanism and Effect on Petroleum Accumulation of the Weathering Crust, Top of Jurassic, in the Hinterland of Junggar Basin
The clay layer of the weathering crust at the top of the Jurassic in the hinterland of the Junggar Basin is characterized by enrichment in Al2O3, Fe2O3 and TiO2. The weathering crust can be classified into two types according to the maturity index (SiO2/Al2O3). Type I shows high maturity, with SiO2/Al2O3 between 2.7 and 4.0, and type II shows low maturity, with SiO2/Al2O3 between 4.0 and 5.0. Differences in the maturity of the weathering crust (MWC) are mainly caused by tectonic environment and geological time. In the ridge area of the Che-Mo Paleohigh, MWC is low, as the weathering crust developed progressively down into the underlying strata during uplift of the paleohigh in the Late Jurassic. In the flank area of the paleohigh and in areas unaffected by it, MWC is high, as the weathering crust developed in a relatively stable environment over a longer time. MWC is also low in the area of well Dong 1, where the weathering time was much shorter owing to deposition of the J3q in the Late Jurassic. Truncation-type reservoirs can form at the ridge of the paleohigh, where the weathering crust developed directly on the J1s sandbody. Elsewhere, the weathering crust is an important seal for petroleum migration and accumulation.
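A minimal sketch of the two-type classification described above, using only the SiO2/Al2O3 thresholds quoted in the abstract (type I: 2.7-4.0, type II: 4.0-5.0); the helper name and the handling of out-of-range indices are our own assumptions.

```python
def classify_weathering_crust(sio2: float, al2o3: float) -> str:
    """Return the weathering-crust type for a SiO2/Al2O3 maturity index."""
    index = sio2 / al2o3
    if 2.7 <= index < 4.0:
        return "Type I (high maturity)"
    if 4.0 <= index <= 5.0:
        return "Type II (low maturity)"
    return f"unclassified (index {index:.2f} outside 2.7-5.0)"
```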
An Energy-Efficient and Wide-Range Voltage Level Shifter With Dual Current Mirror
This brief presents an energy-efficient level shifter (LS) to convert a subthreshold input signal to an above-threshold output signal. In order to achieve a wide range of conversion, a dual current mirror (CM) structure consisting of a virtual CM and an auxiliary CM is proposed. The circuit has been implemented and optimized in SMIC 40-nm technology. The postlayout simulation demonstrates that the new LS can achieve voltage conversion from 0.2 to 1.1 V. Moreover, at the target voltage of 0.3 V, the proposed LS exhibits an average propagation delay of 66.48 ns, a total energy per transition of 72.31 fJ, and a static power consumption of 88.4 pW, demonstrating improvements of 6.0×, 13.1×, and 89.0×, respectively, compared with the Wilson CM-based LS.
Segmentation for classification of gastroenterology images
Automatic classification of cancer lesions in tissues observed using gastroenterology imaging is a non-trivial pattern recognition task involving filtering, segmentation, feature extraction and classification. In this paper we measure the impact of a variety of segmentation algorithms (mean shift, normalized cuts, level sets) on the automatic classification performance of gastric tissue into three classes: cancerous, pre-cancerous and normal. Classification uses a combination of color (hue-saturation histograms) and texture (local binary patterns) features, applied to two distinct imaging modalities: chromoendoscopy and narrow-band imaging. Results show that mean shift performs well in both scenarios, producing low classification degradation (6%); that full-image classification is highly inaccurate, reinforcing the importance of segmentation research for gastroenterology; and that the Patch Index is a useful measure of the classification potential of small to medium segmented regions.
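A minimal sketch of the color-plus-texture descriptor named above (a hue-saturation histogram combined with a local binary pattern histogram) for one image region; bin counts and LBP parameters are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from skimage.color import rgb2hsv, rgb2gray
from skimage.feature import local_binary_pattern

def region_features(rgb_patch, hs_bins=8, lbp_points=8, lbp_radius=1):
    """Return a combined hue-saturation + LBP descriptor for an RGB patch."""
    hsv = rgb2hsv(rgb_patch)
    hs_hist, _, _ = np.histogram2d(hsv[..., 0].ravel(), hsv[..., 1].ravel(),
                                   bins=hs_bins, range=[[0, 1], [0, 1]], density=True)
    lbp = local_binary_pattern(rgb2gray(rgb_patch), lbp_points, lbp_radius, method="uniform")
    lbp_hist, _ = np.histogram(lbp.ravel(), bins=lbp_points + 2,
                               range=(0, lbp_points + 2), density=True)
    return np.concatenate([hs_hist.ravel(), lbp_hist])  # color + texture descriptor
```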
Do threatening stimuli draw or hold visual attention in subclinical anxiety?
Biases in information processing undoubtedly play an important role in the maintenance of emotion and emotional disorders. In an attentional cueing paradigm, threat words and angry faces had no advantage over positive or neutral words (or faces) in attracting attention to their own location, even for people who were highly state-anxious. In contrast, the presence of threatening cues (words and faces) had a strong impact on the disengagement of attention. When a threat cue was presented and a target subsequently presented in another location, high state-anxious individuals took longer to detect the target relative to when either a positive or a neutral cue was presented. It is concluded that threat-related stimuli affect attentional dwell time and the disengage component of attention, leaving the question of whether threat stimuli affect the shift component of attention open to debate.
Maestro: A System for Scalable OpenFlow Control
The fundamental feature of an OpenFlow network is that the controller is responsible for the initial establishment of every flow by contacting the related switches. Thus the performance of the controller can be a bottleneck. This paper shows how this fundamental problem can be addressed through parallelism. The state-of-the-art OpenFlow controller, NOX, achieves a simple programming model for control function development by using a single-threaded event loop, but it does not exploit parallelism. We propose Maestro, which keeps the simple programming model for programmers while exploiting parallelism throughout, together with additional throughput optimization techniques. We show experimentally that the throughput of Maestro achieves near-linear scalability on an eight-core server machine. Keywords: OpenFlow, network management, multithreading, performance optimization.
Automatic Assessment of Document Quality in Web Collaborative Digital Libraries
The old dream of a universal repository containing all of human knowledge and culture is becoming possible through the Internet and the Web. Moreover, this is happening with the direct collaborative participation of people. Wikipedia is a great example. It is an enormous repository of information with free access and open edition, created by the community in a collaborative manner. However, this large amount of information, made available democratically and virtually without any control, raises questions about its quality. In this work, we explore a significant number of quality indicators and study their capability to assess the quality of articles from three Web collaborative digital libraries. Furthermore, we explore machine learning techniques to combine these quality indicators into one single assessment. Through experiments, we show that the most important quality indicators are those which are also the easiest to extract, namely, the textual features related to the structure of the article. Moreover, to the best of our knowledge, this work is the first that shows an empirical comparison between Web collaborative digital libraries regarding the task of assessing article quality.
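A minimal sketch of the core idea above: combining per-article quality indicators into a single assessment with a learned model. The indicator names, the synthetic data, and the choice of a random forest are illustrative assumptions; the paper's exact learner and features may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# X: one row per article; columns are quality indicators (e.g., article length,
# section count, citation count, age, number of edits) -- placeholders here.
rng = np.random.default_rng(0)
X = rng.random((500, 5))
y = rng.random(500)                 # placeholder human quality ratings

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
print("CV MSE:", -scores.mean())

model.fit(X, y)
print("indicator importances:", model.feature_importances_)  # which indicators matter most
```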
Cost-effectiveness of transcatheter aortic valve replacement compared with surgical aortic valve replacement in high-risk patients with severe aortic stenosis: results of the PARTNER (Placement of Aortic Transcatheter Valves) trial (Cohort A).
OBJECTIVES The aim of this study was to evaluate the cost-effectiveness of transcatheter aortic valve replacement (TAVR) compared with surgical aortic valve replacement (AVR) for patients with severe aortic stenosis and high surgical risk. BACKGROUND TAVR is an alternative to AVR for patients with severe aortic stenosis and high surgical risk. METHODS We performed a formal economic analysis based on cost, quality of life, and survival data collected in the PARTNER A (Placement of Aortic Transcatheter Valves) trial in which patients with severe aortic stenosis and high surgical risk were randomized to TAVR or AVR. Cumulative 12-month costs (assessed from a U.S. societal perspective) and quality-adjusted life-years (QALYs) were compared separately for the transfemoral (TF) and transapical (TA) cohorts. RESULTS Although 12-month costs and QALYs were similar for TAVR and AVR in the overall population, there were important differences when results were stratified by access site. In the TF cohort, total 12-month costs were slightly lower with TAVR and QALYs were slightly higher such that TF-TAVR was economically dominant compared with AVR in the base case and economically attractive (incremental cost-effectiveness ratio <$50,000/QALY) in 70.9% of bootstrap replicates. In the TA cohort, 12-month costs remained substantially higher with TAVR, whereas QALYs tended to be lower such that TA-TAVR was economically dominated by AVR in the base case and economically attractive in only 7.1% of replicates. CONCLUSIONS In the PARTNER trial, TAVR was an economically attractive strategy compared with AVR for patients suitable for TF access. Future studies are necessary to determine whether improved experience and outcomes with TA-TAVR can improve its cost-effectiveness relative to AVR.
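A minimal sketch of the economic comparison performed above: the incremental cost-effectiveness ratio (ICER) and the bootstrap fraction of replicates in which one arm is "economically attractive" at a $50,000/QALY threshold. All numbers below are placeholders, not PARTNER trial data.

```python
import numpy as np

rng = np.random.default_rng(0)
cost_tavr = rng.normal(80_000, 15_000, 300)   # placeholder per-patient 12-month costs
cost_avr  = rng.normal(82_000, 15_000, 300)
qaly_tavr = rng.normal(0.70, 0.15, 300)       # placeholder 12-month QALYs
qaly_avr  = rng.normal(0.68, 0.15, 300)

def icer(c1, c0, q1, q0):
    """Incremental cost-effectiveness ratio: delta cost / delta QALY."""
    return (c1.mean() - c0.mean()) / (q1.mean() - q0.mean())

print("base-case ICER ($/QALY):", icer(cost_tavr, cost_avr, qaly_tavr, qaly_avr))

# Bootstrap: fraction of replicates in which TAVR is economically attractive.
n, B, attractive = len(cost_tavr), 2000, 0
for _ in range(B):
    i = rng.integers(0, n, n)
    j = rng.integers(0, n, n)
    d_cost = cost_tavr[i].mean() - cost_avr[j].mean()
    d_qaly = qaly_tavr[i].mean() - qaly_avr[j].mean()
    # attractive if dominant (cheaper and more effective) or ICER < $50,000/QALY
    if (d_cost < 0 and d_qaly > 0) or (d_qaly > 0 and d_cost / d_qaly < 50_000):
        attractive += 1
print("fraction of attractive replicates:", attractive / B)
```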
Analyzing the impact of weather variables on monthly electricity demand
The electricity industry is significantly affected by weather conditions both in terms of the operation of the network infrastructure and electricity consumption. Following privatization and deregulation, the electricity industry in the U.K. has become fragmented and central planning has largely disappeared. In order to maximize profits, the margin of supply has decreased and the network is being run closer to capacity in certain areas. Careful planning is required to manage future electricity demand within the framework of this leaner electricity network. There is evidence that the climate in the U.K. is changing with a possible 3 °C average annual temperature increase by 2080. This paper investigates the impact of weather variables on monthly electricity demand in England and Wales. A multiple regression model is developed to forecast monthly electricity demand based on weather variables, gross domestic product, and population growth. The average mean absolute percentage error (MAPE) for the worst model is approximately 2.60% in fitting the monthly electricity demand from 1989 to 1995 and approximately 2.69% in the forecasting over the period 1996 to 2003. This error may reflect the nonlinear dependence of demand on temperature at the hot and cold temperature extremes; however, the inclusion of degree days, enthalpy latent days, and relative humidity in the model improves the demand forecast during the summer months.
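A minimal sketch of such a multiple-regression demand model evaluated with MAPE, in the spirit of the study above. The regressors and synthetic data are placeholders, not the paper's actual series.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
months = 120
X = np.column_stack([
    rng.normal(10, 6, months),       # mean monthly temperature (deg C)
    rng.normal(100, 30, months),     # heating degree days
    np.linspace(1.0, 1.3, months),   # GDP index
    np.linspace(1.0, 1.05, months),  # population index
])
# synthetic demand: cooler months and higher GDP raise consumption
demand = 30_000 - 400 * X[:, 0] + 20 * X[:, 1] + 5_000 * X[:, 2] + rng.normal(0, 500, months)

model = LinearRegression().fit(X[:96], demand[:96])   # fit on the first 8 years
pred = model.predict(X[96:])                          # forecast the last 2 years

mape = np.mean(np.abs((demand[96:] - pred) / demand[96:])) * 100
print(f"MAPE: {mape:.2f}%")
```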
Uncovering Four Strategies to Approach Master Data Management
Much recent Information Systems (IS) research focuses on master data management (MDM), which promises to increase an organization's overall core data quality. MDM initiatives, however, undoubtedly confront organizations with multi-faceted and complex challenges that call for a more strategic approach to MDM. In this paper we introduce a framework for approaching MDM projects that was developed in the course of a design science research study. The framework distinguishes four major strategies for initiating MDM projects, each with its specific assets and drawbacks. The usefulness of our artifact is illustrated in a short case narrative.
Bioprinting Technology: A Current State-of-the-Art Review
Bioprinting is an emerging technology for constructing and fabricating artificial tissue and organ constructs. This technology surpasses the traditional scaffold fabrication approach in tissue engineering (TE). Currently, there is a plethora of research on bioprinting technology and its potential as a future source for implants and full organ transplantation. This review paper surveys the current state of the art in bioprinting technology, describing the broad range of bioprinters and bioinks used in preclinical studies. Distinctions between laser-, extrusion-, and inkjet-based bioprinting technologies, along with appropriate and recommended bioinks, are discussed. In addition, the current state of the art in bioprinter technology is reviewed from a commercial point of view. Current challenges and limitations are highlighted, and future directions for next-generation bioprinting technology are presented.
A staged approach with vincristine, adriamycin, and dexamethasone followed by bortezomib, thalidomide, and dexamethasone before autologous hematopoietic stem cell transplantation in the treatment of newly diagnosed multiple myeloma
Bortezomib-based regimens have significant activity in multiple myeloma (MM). In this study, we tested the efficacy of a total therapy with a staged approach in which newly diagnosed MM patients received vincristine/adriamycin/dexamethasone (VAD). VAD-sensitive patients (≥75% paraprotein reduction) received autologous hematopoietic stem cell transplantation (auto-HSCT), whereas less VAD-sensitive patients (<75% paraprotein reduction) received bortezomib/thalidomide/dexamethasone (VTD) for further cytoreduction prior to auto-HSCT. On an intention-to-treat analysis, a progressive increase in complete remission (CR) rates was observed, with a cumulative CR rate of 48% after HSCT. Seven patients progressed, leading to three fatalities, of which two had central nervous system disease. The 3-year overall survival and event-free survival were 75.1% and 48.3%, respectively. Six patients developed oligoclonal reconstitution with new paraproteins. In the absence of anticoagulant prophylaxis, no patients developed deep vein thrombosis. The staged application of VAD+/–VTD/auto-HSCT resulted in an appreciable response rate and promising survival. Our approach reduced the use of bortezomib without compromising the ultimate CR rate and is of financial significance for less affluent communities.
The importance of clinical experience for mental health nursing - part 1: undergraduate nursing students' attitudes, preparedness and satisfaction.
Government inquiries and workforce data continue to draw attention to the current and impending crisis in mental health nursing. While undergraduate nursing education has been found at least partially responsible for the negative attitudes nursing students tend to hold towards mental health nursing, clinical experience has been identified as a potential strategy for fostering more positive attitudes. However, research to date has not focused on the impact of clinical experience on specific factors such as attitudes to mental health nursing, attitudes to people experiencing mental illness, and perceived preparedness for the mental health field. This quasi-experimental study measured changes in students' responses on these three factors, as well as satisfaction with clinical experience, following a placement in mental health nursing. A questionnaire was administered to undergraduate nursing students on the first and last day of a mental health clinical placement. This, the first of a two-part paper, compares student responses over the two time periods and describes satisfaction with the clinical experience. The findings suggest that clinical experience in mental health nursing can positively influence attitudes, preparedness for practice, and the popularity of mental health nursing. Satisfaction with clinical experience was also high.
What makes things fun to learn? Heuristics for designing instructional computer games
In this paper, I will describe my intuitions about what makes computer games fun. More detailed descriptions of the experiments and the theory on which this paper is based are given by Malone (1980a, 1980b). My primary goal here is to provide a set of heuristics or guidelines for designers of instructional computer games. I have articulated and organized common sense principles to spark the creativity of instructional designers (see Banet, 1979, for an unstructured list of similar principles). To demonstrate the usefulness of these principles, I have included several applications to actual or proposed instructional games. Throughout the paper I emphasize games with educational uses, but I focus on what makes the games fun, not on what makes them educational. Though I will not emphasize the point in this paper, these same ideas can be applied to other educational environments and life situations. In a sense, the categories I will describe constitute a general taxonomy of intrinsic motivation—of what makes an activity fun or rewarding for its own sake rather than for the sake of some external reward (See Lepper and Greene, 1979). I think the essential characteristics of good computer games and other intrinsically enjoyable situations can be organized into three categories: challenge, fantasy, and curiosity.
Convolutional neural network based solar photovoltaic panel detection in satellite photos
The aim of this work is the detection of solar photovoltaic panels in low-quality satellite photos. It is important to obtain the geospatial data (such as country, zip code, street and house number) of installed solar panels, because such panels are connected directly to the local power grid, and these data help estimate power capacity and energy production from satellite photos. For this purpose, a Convolutional Neural Network was used. A dataset of 3347 low-quality Google satellite images was used for training and testing. The experimental results show that the proposed approach achieves high detection accuracy with a low rate of incorrect classifications. The proposed approach has broad applicability and can be improved in future work.
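A minimal sketch of a CNN for classifying satellite image tiles as "contains solar panel" vs. "no panel". The architecture, tile size (64x64 RGB), and layer widths are illustrative assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

class PanelDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, 2)  # two classes: panel / no panel

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PanelDetector()
logits = model(torch.randn(4, 3, 64, 64))  # a batch of four 64x64 RGB tiles
print(logits.shape)                        # torch.Size([4, 2])
```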
GPCAD: a tool for CMOS op-amp synthesis
We present a method for optimizing and automating component and transistor sizing for CMOS operational amplifiers. We observe that a wide variety of performance measures can be formulated as posynomial functions of the design variables. As a result, amplifier design problems can be formulated as a geometric program, a special type of convex optimization problem for which very efficient global optimization methods have recently been developed. The synthesis method is therefore fast, and determines the globally optimal design; in particular the final solution is completely independent of the starting point (which can even be infeasible), and infeasible specifications are unambiguously detected. After briefly introducing the method, which is described in more detail by M. Hershenson et al., we show how the method can be applied to six common op-amp architectures, and give several example designs.
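A minimal sketch of posing a sizing problem as a geometric program, in the spirit of GPCAD, using CVXPY's geometric-programming mode. The "gain" and "bandwidth/area" constraints below are toy monomials and posynomials chosen for illustration, not real op-amp design equations.

```python
import cvxpy as cp

w = cp.Variable(pos=True)   # transistor width (illustrative design variable)
l = cp.Variable(pos=True)   # transistor length
i = cp.Variable(pos=True)   # bias current

power = 3.0 * i                        # monomial objective: minimize power
constraints = [
    2.0 * l / (w * i) <= 1.0,          # toy posynomial "gain" spec
    0.5 / i + 0.1 * w * l <= 1.0,      # toy posynomial "bandwidth/area" spec
    l >= 0.1, w >= 0.2,                # minimum geometry
]
prob = cp.Problem(cp.Minimize(power), constraints)
prob.solve(gp=True)                    # solved as a geometric program
print(w.value, l.value, i.value, prob.value)
```

Because the problem is a geometric program, the solver returns the globally optimal sizing regardless of starting point, which is exactly the property the abstract emphasizes.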
Meningiomas of the velum interpositum: surgical considerations.
Meningiomas of the third ventricle are a rare subtype of pineal region tumor that arise from the posterior portion of the velum interpositum, the double layer of pia mater that forms the roof of the third ventricle. The authors review the literature concerning these meningiomas and present a case in which the lesion was resected via the supracerebellar-infratentorial approach. The relationship of the tumor to the deep venous system and the splenium of the corpus callosum guides the selection of the most advantageous surgical approach. Posterior displacement of the internal cerebral veins demonstrated on preoperative imaging provides a strong rationale for use of the supracerebellar-infratentorial approach.
Development and practical application of a stairclimbing wheelchair in Nagasaki
In the field of providing mobility for the elderly or disabled, the aspect of dealing with stairs remains largely unresolved. This paper presents the continued development of the “Nagasaki Stairclimber”, a dual-section tracked wheelchair capable of negotiating the large number of twisting and irregular stairs typically encountered by residents living on the slopes that surround Nagasaki harbor. Recent developments include an auto guidance system, auto leveling of the chair angle, and active control of the front-rear track angle.
Control of TCF-4 Expression by VDR and Vitamin D in the Mouse Mammary Gland and Colorectal Cancer Cell Lines
BACKGROUND The vitamin D receptor (VDR) pathway is important in the prevention and potentially in the treatment of many cancers. One important mechanism of VDR action is related to its interaction with the Wnt/beta-catenin pathway. Agonist-bound VDR inhibits the oncogenic Wnt/beta-catenin/TCF pathway by interacting directly with beta-catenin and, in some cells, by increasing cadherin expression which, in turn, recruits beta-catenin to the membrane. Here we identify TCF-4, a transcriptional regulator and beta-catenin binding partner, as an indirect target of the VDR pathway. METHODOLOGY/PRINCIPAL FINDINGS In this work, we show that TCF-4 (gene name TCF7L2) is decreased in the mammary gland of the VDR knockout mouse as compared to the wild-type mouse. Furthermore, we show that 1,25(OH)2D3 increases TCF-4 at the RNA and protein levels in several human colorectal cancer cell lines, an effect that is completely dependent on the VDR. In silico analysis of the human and mouse TCF7L2 promoters identified several putative VDR binding elements. Although TCF7L2 promoter reporters responded to exogenous VDR and 1,25(OH)2D3, mutation analysis and chromatin immunoprecipitation assays showed that the increase in TCF7L2 did not require recruitment of the VDR to the identified elements, indicating that the regulation by VDR is indirect. This is further confirmed by the requirement of de novo protein synthesis for this up-regulation. CONCLUSIONS/SIGNIFICANCE Although it is generally assumed that binding of beta-catenin to members of the TCF/LEF family is cancer-promoting, recent studies have indicated that TCF-4 functions instead as a transcriptional repressor that restricts breast and colorectal cancer cell growth. Consequently, we conclude that the 1,25(OH)2D3/VDR-mediated increase in TCF-4 may have a protective role in colon cancer as well as diabetes and Crohn's disease.
Non-Orthogonal Multiple Access Based Integrated Terrestrial-Satellite Networks
In this paper, we investigate the downlink transmission of a non-orthogonal multiple access (NOMA)-based integrated terrestrial-satellite network, in which the NOMA-based terrestrial networks and the satellite cooperatively provide coverage for ground users while reusing the entire bandwidth. Both the terrestrial networks and the satellite network are equipped with multiple antennas and use beamforming techniques to serve multiple users simultaneously. A channel quality-based scheme is proposed to select users for the satellite, and we then formulate the terrestrial user pairing as a max-min problem that maximizes the minimum channel correlation between users in one NOMA group. Since the terrestrial networks and the satellite network interfere with each other, we first investigate the capacity performance of the terrestrial networks and the satellite network separately, which decomposes into the design of beamforming vectors and power allocation schemes. Then, a joint iterative algorithm is proposed to maximize the total system capacity, where we introduce an interference temperature limit for the satellite since the satellite can cause interference to all base station users. Finally, numerical results are provided to evaluate the user pairing scheme as well as the total system performance, in comparison with other proposed and existing algorithms.
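A minimal sketch of channel-correlation-based NOMA user pairing: compute pairwise correlations between normalized channel vectors and pair the most correlated users first. The greedy heuristic below is our simplification of the paper's max-min formulation, for illustration only.

```python
import numpy as np

def pair_users(H):
    """H: (n_users, n_antennas) complex channel matrix. Returns list of index pairs."""
    n = H.shape[0]
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    corr = np.abs(Hn @ Hn.conj().T)          # pairwise channel correlation magnitudes
    np.fill_diagonal(corr, -1.0)             # never pair a user with itself
    unpaired, pairs = set(range(n)), []
    while len(unpaired) > 1:
        i, j = max(((a, b) for a in unpaired for b in unpaired if a < b),
                   key=lambda ab: corr[ab])  # most correlated remaining pair
        pairs.append((i, j))
        unpaired -= {i, j}
    return pairs

H = (np.random.randn(6, 4) + 1j * np.random.randn(6, 4)) / np.sqrt(2)
print(pair_users(H))
```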
Machine Learning Strategies for Time Series Forecasting
The increasing availability of large amounts of historical data and the need to perform accurate forecasting of future behavior in several scientific and applied domains demand the definition of robust and efficient techniques able to infer from observations the stochastic dependency between past and future. The forecasting domain has been influenced, from the 1960s on, by linear statistical methods such as ARIMA models. More recently, machine learning models have drawn attention and have established themselves as serious contenders to classical statistical models in the forecasting community. This chapter presents an overview of machine learning techniques in time series forecasting, focusing on three aspects: the formalization of one-step forecasting problems as supervised learning tasks, the discussion of local learning techniques as an effective tool for dealing with temporal data, and the role of the forecasting strategy when moving from one-step to multiple-step forecasting.
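A minimal sketch of the first aspect above: recasting one-step forecasting as supervised learning, where each example is a window of the last n_lags observations and the target is the next value. The k-nearest-neighbors regressor stands in for the "local learning" techniques the chapter discusses; the synthetic series is a placeholder.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor  # a simple local-learning model

def make_supervised(series, n_lags=3):
    """Turn a 1-D series into (lag-window, next-value) training pairs."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

series = np.sin(np.linspace(0, 20, 200)) + 0.05 * np.random.randn(200)
X, y = make_supervised(series, n_lags=5)

model = KNeighborsRegressor(n_neighbors=5).fit(X[:150], y[:150])
print("one-step forecast:", model.predict(X[150:151])[0], "actual:", y[150])
```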
Thermomechanical properties of mineralized nitrogen-doped carbon nanotube/polymer nanocomposites by molecular dynamics simulations
In this paper, we investigate the thermomechanical characteristics of silica-mineralized nitrogen-doped carbon nanotube (SC-NCNT)-reinforced poly(methyl methacrylate) (PMMA) nanocomposites for the first time by molecular dynamics simulations. An in-situ mineralization algorithm is employed for the mineralization of the silica layer, whose thickness is determined from an atomic stress calculation. Young's modulus, shear modulus, and yield strength of SC-NCNT/PMMA systems are compared with those of nitrogen-doped carbon nanotube (NCNT)/PMMA and carbon nanotube (CNT)/PMMA systems at various filler weight percentages (wt%). Compared with the reinforcing effect of CNTs, SC-NCNT fillers show a superior reinforcing effect in the transverse direction. Additionally, the glass transition temperature (Tg) of the nanocomposites at different filler wt% is studied. Both the SC-NCNT/PMMA and the CNT/PMMA systems show a Tg decrease with increasing filler wt%, where the former decreases by only 6 K and the latter by 31 K at 3 wt%. The enhanced thermomechanical properties of the SC-NCNT/PMMA system are attributed to the improved interfacial interaction between the silica layer and the PMMA matrix. Because the thermomechanical properties of SC-NCNT/PMMA nanocomposites have not been studied since the discovery of the SC-NCNT filler, our simulation study can serve as a guide for experimental research on the thermomechanical behavior of mineralized CNTs.
Story plot generation based on CBR
In this paper we present a system for automatic story generation that reuses existing stories to produce a new story that matches a given user query. The plot structure is obtained by a case-based reasoning (CBR) process over a case base of tales and an ontology of explicitly declared relevant knowledge. The resulting story is generated as a sketch of a plot described in natural language by means of natural language generation (NLG) techniques.
AND/OR Multi-Valued Decision Diagrams (AOMDDs) for Graphical Models
Inspired by recently introduced AND/OR search spaces for graphical models, we propose to augment Multi-Valued Decision Diagrams (MDD) with AND nodes, in order to capture function decomposition structure and to extend these compiled data structures to general weighted graphical models (e.g., probabilistic models). We present the AND/OR Multi-Valued Decision Diagram (AOMDD), which compiles a graphical model into a canonical form that supports polynomial (e.g., solution counting, belief updating) or constant time (e.g., equivalence of graphical models) queries. We provide two algorithms for compiling the AOMDD of a graphical model. The first is search based, and works by applying reduction rules to the trace of the memory intensive AND/OR search algorithm. The second algorithm is based on a Bucket Elimination schedule for assembling the AOMDD of a graphical model starting from the AOMDDs of its functions, and combining them via the APPLY operator. For both algorithms, the compilation time and the size of the AOMDD are, in the worst case, exponential in the treewidth of the graphical model, rather than the pathwidth, as is known for ordered binary decision diagrams (OBDDs). We also introduce the concept of semantic treewidth, which helps explain why the size of a decision diagram is often much smaller than the worst case bound.
Credit Card Customer Segmentation and Target Marketing Based on Data Mining
Based on real data from a Chinese commercial bank's credit card operation, in this paper we segment credit card customers into four clusters using K-means. We then build forecasting models separately with four data mining methods, namely C5.0, neural networks, chi-squared automatic interaction detector, and classification and regression trees, based on the background information of the cardholders. Finally, we extract useful decision tree rules from the best of the four models. This information not only helps the bank understand the characteristics of different customers, but also helps marketing representatives find potential customers and implement target marketing.
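A minimal sketch of the two-stage pipeline described above: K-means to segment customers into four clusters from behavioral data, then a decision tree (standing in for C5.0/CART) to predict a customer's segment from background attributes. The synthetic data and feature names are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# behavioral columns: monthly spend, repayment ratio, cash-advance count (placeholders)
behavior = rng.normal(size=(1000, 3))
# background columns: e.g., age, income, tenure, credit limit (placeholders)
background = rng.normal(size=(1000, 4))

segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(behavior)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(background, segments)
print(export_text(tree))   # human-readable segmentation rules for marketers
```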
Genome-wide association study and admixture mapping identify different asthma-associated loci in Latinos: the Genes-environments & Admixture in Latino Americans study.
BACKGROUND Asthma is a complex disease with both genetic and environmental causes. Genome-wide association studies of asthma have mostly involved European populations, and replication of positive associations has been inconsistent. OBJECTIVE We sought to identify asthma-associated genes in a large Latino population with genome-wide association analysis and admixture mapping. METHODS Latino children with asthma (n = 1893) and healthy control subjects (n = 1881) were recruited from 5 sites in the United States: Puerto Rico, New York, Chicago, Houston, and the San Francisco Bay Area. Subjects were genotyped on an Affymetrix World Array IV chip. We performed genome-wide association and admixture mapping to identify asthma-associated loci. RESULTS We identified a significant association between ancestry and asthma at 6p21 (lowest P value: rs2523924, P < 5 × 10(-6)). This association replicates in a meta-analysis of the EVE Asthma Consortium (P = .01). Fine mapping of the region in this study and the EVE Asthma Consortium suggests an association between PSORS1C1 and asthma. We confirmed the strong allelic association between SNPs in the 17q21 region and asthma in Latinos (IKZF3, lowest P value: rs90792, odds ratio, 0.67; 95% CI, 0.61-0.75; P = 6 × 10(-13)) and replicated associations in several genes that had previously been associated with asthma in genome-wide association studies. CONCLUSIONS Admixture mapping and genome-wide association are complementary techniques that provide evidence for multiple asthma-associated loci in Latinos. Admixture mapping identifies a novel locus on 6p21 that replicates in a meta-analysis of several Latino populations, whereas genome-wide association confirms the previously identified locus on 17q21.
Mobile Assisted Language Learning: A Literature Review
Mobile assisted language learning (MALL) is a subarea of the growing field of mobile learning (mLearning) research that increasingly attracts the attention of scholars. This study provides a systematic review of MALL research within the specific area of second language acquisition during the period 2007-2012 in terms of research approaches, methods, theories and models, as well as results in the form of linguistic knowledge and skills. The findings show that studies of mobile technology use in different aspects of language learning support the hypothesis that mobile technology can enhance learners’ second language acquisition. However, most of the reviewed studies are experimental, small-scale, and conducted within a short period of time. There is also a lack of cumulative research; most theories and concepts are used in only one or a few papers. This raises the issue of the reliability of findings over time, across changing technologies, and in terms of scalability. In terms of gained linguistic knowledge and skills, attention is primarily on learners’ vocabulary acquisition, listening and speaking skills, and language acquisition in more general terms.
THE REDUCTION OF DISULFIDE BONDS IN PROTEINS AT MERCURY ELECTRODES
The adsorption of disulfide-containing proteins on mercury and their subsequent reduction are reviewed. Methods for determining the protein surface excess and the number of electroactive disulfide bonds are discussed. Peaks in up to three potential regions (I, II and III) are observed depending on experimental conditions. Peaks in regions I and III have not been as extensively studied as peak II, which is attributed to the reversible or quasi-reversible reduction of mercury-protein thiolate bonds formed from disulfide bonds located in hydrophobic regions of a protein, as well as from certain sulfhydryl groups and thioethers.
Determinants of users' intention to adopt m-commerce: an empirical analysis
The fast-growing penetration of mobile devices and recent advances in mobile technologies have led to the development of increasingly sophisticated services such as m-shopping for goods or services and m-payment. However, although the number of mobile subscribers is increasing, levels of actual m-commerce activities in many cases remain low. Determining what influences users’ intention to use m-commerce is therefore of growing importance. The purpose of this study was to investigate possible factors. To this aim, we developed a conceptual user adoption model based on technology acceptance model variables and on specific factors such as social influence, personal innovativeness, customization, and individual mobility. The empirical results show that social influence and customization significantly affect perceived usefulness; mobility, customization, and personal innovativeness significantly affect perceived ease of use; and perceived usefulness and perceived ease of use have a direct positive effect on behavioral intention.
Bidirectional A* Search with Additive Approximation Bounds
In this paper, we present new theoretical and experimental results for bidirectional A∗ search. Unlike most previous research on this topic, our results do not require assumptions of either consistent or balanced heuristic functions for the search. Our theoretical work examines new results on the worst-case number of node expansions for inconsistent heuristic functions with bounded estimation errors. Additionally, we consider several alternative termination criteria in order to more quickly terminate the bidirectional search, and we provide worst-case approximation bounds for our suggested criteria. We prove that our approximation bounds are purely additive in nature (a general improvement over previous multiplicative approximations). Experimental evidence on large-scale road networks suggests that the errors introduced are truly quite negligible in practice, while the performance gains are significant.
Ramp-based soft-start circuit with soft-recovery for DC-DC buck converters
A soft-start circuit with a soft-recovery function for DC-DC converters is presented in this paper. The soft-start strategy is based on a linearly ramped-up reference and an error amplifier with a minimum selector implemented with a three-limb differential pair. The soft-recovery strategy is based on a compact clamp circuit: the ramp voltage is clamped once the feedback voltage is detected to be lower than a threshold, which controls the output to recover slowly and linearly. A monolithic DC-DC buck converter with the proposed circuit has been fabricated in a 0.5 μm CMOS process for validation. The measurement results show that the ramp-based soft-start and soft-recovery circuits perform well and agree with the theoretical analysis.
Backward Stochastic Differential Equation , Nonlinear Expectation and Their Applications
We give a survey of the developments in the theory of Backward Stochastic Differential Equations (BSDEs) during the last 20 years, including the existence and uniqueness of solutions, the comparison theorem, the nonlinear Feynman-Kac formula, g-expectation and many other important results in BSDE theory, and their applications to dynamic pricing and hedging in an incomplete financial market. We also present our new framework of nonlinear expectation and its applications to financial risk measures under uncertainty of probability distributions. The generalized law of large numbers and central limit theorem under sublinear expectation show that the limit distribution is a sublinear G-normal distribution. A new type of Brownian motion, G-Brownian motion, is constructed; it is a continuous stochastic process with independent and stationary increments under a sublinear expectation (or a nonlinear expectation). The corresponding robust version of Itô’s calculus turns out to be a basic tool for problems of risk measures in finance and, more generally, for decision theory under uncertainty. We also discuss a type of “fully nonlinear” BSDE under nonlinear expectation. Mathematics Subject Classification (2010): 60H, 60E, 62C, 62D, 35J, 35K.
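For reference, a minimal statement of the central object surveyed above: the standard BSDE with generator g and terminal condition ξ, driven by a Brownian motion W on [0, T], whose solution at time 0 defines the g-expectation.

```latex
% Standard BSDE with terminal condition $\xi$ and generator $g$;
% the $g$-expectation of $\xi$ is defined from its solution $(Y, Z)$.
\[
  Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,\mathrm{d}s - \int_t^T Z_s\,\mathrm{d}W_s,
  \qquad 0 \le t \le T,
\]
\[
  \mathcal{E}_g[\xi] := Y_0 .
\]
```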
Nighttime aircraft noise impairs endothelial function and increases blood pressure in patients with or at high risk for coronary artery disease
Epidemiological studies suggest the existence of a relationship between aircraft noise exposure and increased risk for myocardial infarction and stroke. Patients with established coronary artery disease and endothelial dysfunction are known to have more future cardiovascular events. We therefore tested the effects of nocturnal aircraft noise on endothelial function in patients with or at high risk for coronary artery disease. Sixty patients (50 with 1–3 vessel disease; 10 at high risk, with a Framingham score of 23%) were exposed in random and blinded order to aircraft noise and no-noise conditions. Noise was simulated in the patients’ bedroom and consisted of 60 events during one night. Polygraphy was recorded during study nights; endothelial function (flow-mediated dilation of the brachial artery), questionnaires and blood sampling were assessed on the morning after each study night. The mean sound pressure levels Leq(3) measured were 46.9 ± 2.0 dB(A) in the noise nights and 39.2 ± 3.1 dB(A) in the control nights. Subjective sleep quality was markedly reduced by noise, from 5.8 ± 2.0 to 3.7 ± 2.2 (p < 0.001). FMD was significantly reduced (from 9.6 ± 4.3 to 7.9 ± 3.7%; p < 0.001) and systolic blood pressure was increased (from 129.5 ± 16.5 to 133.6 ± 17.9 mmHg; p = 0.030) by noise. The adverse vascular effects of noise were independent of sleep quality and self-reported noise sensitivity. Nighttime aircraft noise markedly impairs endothelial function in patients with or at risk for cardiovascular disease. These vascular effects appear to be independent of annoyance and attitude towards noise and may explain in part the cardiovascular side effects of nighttime aircraft noise.
Extremely low drift of resistance and threshold voltage in amorphous phase change nanowire devices
Time-dependent drift of resistance and threshold voltage in phase change memory (PCM) devices is of concern as it leads to data loss. Electrical drift in amorphous chalcogenides has been argued to be due to either electronic or stress relaxation mechanisms. Here we show that drift in amorphized Ge2Sb2Te5 nanowires with exposed surfaces is extremely low in comparison to thin-film devices. However, drift in stressed nanowires embedded under dielectric films is comparable to that in thin films. Our results show that drift in PCM is due to stress relaxation and will help in understanding and controlling drift in PCM devices.
CNN-Based Joint Clustering and Representation Learning with Feature Drift Compensation for Large-Scale Image Data
Given a large unlabeled set of images, how to efficiently and effectively group them into clusters based on extracted visual representations remains a challenging problem. To address this problem, we propose a convolutional neural network (CNN) that jointly solves clustering and representation learning in an iterative manner. In the proposed method, given an input image set, we first randomly pick k samples and extract their features as initial cluster centroids using the proposed CNN with an initial model pretrained on the ImageNet dataset. Mini-batch k-means is then performed to assign cluster labels to individual input samples, for mini-batches of images randomly sampled from the input image set, until all images are processed. Subsequently, the parameters of the CNN and the centroids of the image clusters are updated jointly and iteratively based on stochastic gradient descent. We also propose a feature drift compensation scheme to mitigate the drift error caused by feature mismatch in representation learning. Experimental results demonstrate that the proposed method outperforms state-of-the-art clustering schemes in terms of accuracy and storage complexity on large-scale image sets containing millions of images.
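A minimal sketch of alternating clustering and representation learning in the spirit of the method above: extract features, assign pseudo-labels with mini-batch k-means, then update the network on those labels. The tiny fully-connected backbone stands in for the CNN, and the schedule is simplified (no feature-drift compensation is shown).

```python
import torch
import torch.nn as nn
from sklearn.cluster import MiniBatchKMeans

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(),
                         nn.Linear(128, 64))                 # toy feature extractor
head = nn.Linear(64, 10)                                     # k = 10 clusters
opt = torch.optim.SGD(list(backbone.parameters()) + list(head.parameters()), lr=0.01)

images = torch.randn(512, 3, 32, 32)                         # placeholder image set

for round_ in range(3):
    with torch.no_grad():                                    # 1) extract current features
        feats = backbone(images).numpy()
    labels = MiniBatchKMeans(n_clusters=10, n_init=3).fit_predict(feats)  # 2) pseudo-labels
    labels = torch.from_numpy(labels).long()
    for i in range(0, len(images), 64):                      # 3) update network on pseudo-labels
        loss = nn.functional.cross_entropy(head(backbone(images[i:i+64])), labels[i:i+64])
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"round {round_}: loss {loss.item():.3f}")
```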
Unhealthy substance-use behaviors as symptom-related self-care in persons with HIV/AIDS.
Unhealthy substance-use behaviors, including a heavy alcohol intake, illicit drug use, and cigarette smoking, are engaged in by many HIV-positive individuals, often as a way to manage their disease-related symptoms. This study, based on data from a larger randomized controlled trial of an HIV/AIDS symptom management manual, examines the prevalence and characteristics of unhealthy behaviors in relation to HIV/AIDS symptoms. The mean age of the sample (n = 775) was 42.8 years and 38.5% of the sample was female. The mean number of years living with HIV was 9.1 years. The specific self-reported unhealthy substance-use behaviors were the use of marijuana, cigarettes, a large amount of alcohol, and illicit drugs. A subset of individuals who identified high levels of specific symptoms also reported significantly higher substance-use behaviors, including amphetamine and injection drug use, heavy alcohol use, cigarette smoking, and marijuana use. The implications for clinical practice include the assessment of self-care behaviors, screening for substance abuse, and education of persons regarding the self-management of HIV.
Dialogue Act Semantic Representation and Classification Using Recurrent Neural Networks
In this work, we present a model that incorporates Dialogue Act (DA) semantics in the framework of Recurrent Neural Networks (RNNs) for DA classification. Specifically, we propose a novel scheme for automatically encoding DA semantics via the extraction of salient keywords that are representative of the DA tags. The proposed model is applied to the Switchboard corpus and achieves 1.7% (absolute) improvement in classification accuracy with respect to the baseline model. We demonstrate that the addition of discourse-level features enhances the DA classification as well as makes the algorithm more robust: the proposed model does not require the preprocessing of dialogue transcriptions.
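A minimal sketch of extracting salient keywords per dialogue-act tag, in the spirit of the DA-semantics encoding above, using chi-squared feature scoring; the toy utterances and the chi-squared criterion are illustrative assumptions, not the paper's exact extraction scheme.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

utterances = ["yeah right", "uh-huh yeah", "what time is it", "where do you live",
              "i think so", "i guess that is true"]
tags = ["backchannel", "backchannel", "question", "question", "opinion", "opinion"]

vec = CountVectorizer()
X = vec.fit_transform(utterances)
words = np.array(vec.get_feature_names_out())

for tag in set(tags):
    scores, _ = chi2(X, [t == tag for t in tags])    # words most associated with this tag
    top = words[np.argsort(np.nan_to_num(scores))[::-1][:3]]
    print(tag, "->", list(top))
```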
HEER: Heterogeneous graph embedding for emerging relation detection from news
Real-world knowledge is growing rapidly nowadays. New entities arise over time, resulting in large volumes of relations that do not exist in current knowledge graphs (KGs). These relations, containing at least one new entity, are called emerging relations. They often appear in news, and hence the latest information about new entities and relations can be learned from news in a timely manner. In this paper, we focus on the problem of discovering emerging relations from news. There are several challenges for this task: (1) at the beginning, there is little information about an emerging relation, which causes problems for traditional sentence-based models; (2) no negative relations exist in KGs, creating difficulties in utilizing only positive cases for emerging relation detection from news; and (3) new relations emerge rapidly, making it necessary to keep KGs up to date with the latest emerging relations. To address these issues, we start from a global graph perspective and propose a novel Heterogeneous graph Embedding framework for Emerging Relation detection (HEER) that learns a classifier from positive and unlabeled instances by utilizing information from both news and KGs. Furthermore, we implement HEER in an incremental manner to update KGs with the latest detected emerging relations in a timely fashion. Extensive experiments on real-world news datasets demonstrate the effectiveness of the proposed HEER model.
Trigonal injection of botulinum toxin A in patients with refractory bladder pain syndrome/interstitial cystitis.
BACKGROUND Bladder pain syndrome/interstitial cystitis (BPS/IC) is a chronic disease without an effective treatment, characterized by pain during bladder filling. Most nociceptive bladder afferents course in the trigone. OBJECTIVE To evaluate efficacy and tolerability of trigonal injection of botulinum toxin A (BoNTA) in patients with BPS/IC. Urine concentration of nerve growth factor (NGF) and brain-derived neurotrophic factor (BDNF) were also evaluated. DESIGN, SETTING, AND PARTICIPANTS Women with refractory BPS/IC were included in an open, exploratory study. INTERVENTION Under sedation, 100 U of BoNTA (Botox) were injected in 10 trigonal sites (10 U per 1 ml saline). Retreatment was allowed 3 mo after injection. MEASUREMENTS Pain, urinary frequency, O'Leary-Sant score (OSS), quality of life, (QoL), and urodynamic testing at 1 and 3 mo and every 3 mo thereafter. Urine NGF and BDNF were assessed at the same points. Patients who were retreated were evaluated every 3 mo. RESULTS AND LIMITATIONS All patients reported subjective improvement at 1- and 3-mo follow-up. Pain, daytime and nighttime voiding frequency, OSS, and QoL improved significantly. Bladder volume to first pain and maximal cystometric capacity more than doubled. Treatment remained effective in >50% of the patients for 9 mo. Retreatment was also effective in all cases, with similar duration. A significant, transient reduction in urinary NGF and BDNF was observed. No cases of voiding dysfunction occurred. The low number of patients and the lack of a placebo arm are obvious limitations of this study. CONCLUSIONS Trigonal injection of BoNTA is a safe and effective treatment for refractory BPS/IC.
Congestion Attacks to Autonomous Cars Using Vehicular Botnets
The increasing popularity and acceptance of VANETs will make the deployment of autonomous vehicles easier and faster since the VANET will reduce dependence on expensive sensors. However, these benefits are counterbalanced by possible security attacks. We demonstrate a VANET-based botnet attack in an autonomous vehicle scenario that can cause serious congestion by targeting hot spot road segments. We show via simulation that the attack can increase the trip times of the cars in the targeted area by orders of magnitude. After 5 minutes, the targeted road becomes completely unusable. More importantly, the effect of such an attack is not confined to a specific hotspot; the congestion can spread to multiple roads and significantly affect the entire urban grid. We show that current countermeasures are not effective, and point to new possible defenses.
Pathogen-free, plasma-poor platelet lysate and expansion of human mesenchymal stem cells
Supplements to support clinical-grade cultures of mesenchymal stem cells (MSC) are required to promote growth and expansion of these cells. Platelet lysate (PL), a human blood component rich in various growth factors, may replace animal serum in MSC cultures. Here, we describe a plasma-poor, pathogen-free platelet lysate obtained by pooling 12 platelet (PLT) units, producing a standardized and safe supplement for clinical-grade expansion of MSC. PL lots were obtained by combining two 6-unit PLT pools in additive solution (AS), following a transfusion-based procedure including pathogen inactivation (PI) by the Intercept technology and 3 cycles of freezing/thawing, followed by membrane removal. Three PI-PL and 3 control PL lots were produced to compare their ability to sustain bone marrow-derived MSC selection and expansion. Moreover, two further PL lots, subjected or not to PI, were produced starting from the same initial PLT pools to evaluate the impact of PI on growth factor concentration and capacity to sustain cell growth. Additional PI-PL lots were used for comparison with fetal bovine serum (FBS) on MSC expansion. Immunoregulatory properties of PI-PL-generated MSC were documented in vitro by mixed lymphocyte culture (MLC) and peripheral blood mononuclear cell (PBMC) mitogen-induced proliferation. PI-PL and PL control lots had similar concentrations of 4 well-described growth factors endowed with MSC stimulating ability. Initial growth and MSC expansion by PI-PL and PL controls were comparable, both across different MSC populations and in head-to-head experiments. Moreover, PI-PL and PL controls sustained similar growth of frozen/thawed MSC. Multilineage differentiation of PL-derived and PI-PL-derived MSC was maintained in all MSC cultures, as were their immunoregulatory properties. Finally, no direct impact of PI on growth factor concentration and MSC growth support was observed, whereas the capacity of FBS to sustain MSC expansion in basic medium was negligible compared with PL and PI-PL. The replacement of animal additives with human supplements is a basic issue in MSC ex vivo production. PI-PL represents a standardized, plasma-poor human preparation which appears to be a safe and good candidate to stimulate MSC growth in clinical-scale cultures.
Tajik-Farsi Persian Transliteration Using Statistical Machine Translation
Tajik Persian is a dialect of Persian spoken primarily in Tajikistan and written with a modified Cyrillic alphabet. Iranian Persian, or Farsi, as it is natively called, is the lingua franca of Iran and is written with the Persian alphabet, a modified Arabic script. Although the spoken versions of Tajik and Farsi are mutually intelligible to educated speakers of both languages, the difference between the writing systems constitutes a barrier to text compatibility between the two languages. This paper presents a system to transliterate text between these two different Persian dialects that use incompatible writing systems. The system also serves as a mechanism to facilitate sharing of computational linguistic resources between the two languages. This is relevant because of the disparity in resources for Tajik versus Farsi.
A curriculum for teaching computer science through computational textiles
The field of computational textiles has shown promise as a domain for diversifying computer science culture by drawing a population with broad and non-traditional interests and backgrounds into creating technology. In this paper, we present a curriculum that teaches computer science and computer programming through a series of activities that involve building and programming computational textiles. We also describe two new technological tools, Modkit and the LilyPad ProtoSnap board, that support implementation of the curriculum. In 2011-12, we conducted three workshops to evaluate the impact of our curriculum and tools on students' technological self-efficacy. We conclude that our curriculum both draws a diverse population, and increases students' comfort with, enjoyment of, and interest in working with electronics and programming.
E-S-QUAL A Multiple-Item Scale for Assessing Electronic Service Quality
Using the means-end framework as a theoretical foundation, this article conceptualizes, constructs, refines, and tests a multiple-item scale (E-S-QUAL) for measuring the service quality delivered by Web sites on which customers shop online. Two stages of empirical data collection revealed that two different scales were necessary for capturing electronic service quality. The basic E-S-QUAL scale developed in the research is a 22-item scale of four dimensions: efficiency, fulfillment, system availability, and privacy. The second scale, E-RecS-QUAL, is salient only to customers who had nonroutine encounters with the sites and contains 11 items in three dimensions: responsiveness, compensation, and contact. Both scales demonstrate good psychometric properties based on findings from a variety of reliability and validity tests and build on the research already conducted on the topic. Directions for further research on electronic service quality are offered. Managerial implications stemming from the empirical findings about E-S-QUAL are also discussed.
Attachment and the regulation of the right brain
It has been three decades since John Bowlby first presented an over-arching model of early human development in his groundbreaking volume, Attachment. In the present paper I refer back to Bowlby’s original charting of the attachment landscape in order to suggest that current research and clinical models need to return to the integration of the psychological and biological underpinnings of the theory. Towards that end, recent contributions from neuroscience are offered to support Bowlby’s assertions that attachment is instinctive behavior with a biological function, that emotional processes lie at the foundation of a model of instinctive behavior, and that a biological control system in the brain regulates affectively driven instinctive behavior. This control system can now be identified as the orbitofrontal system and its cortical and subcortical connections. This ‘senior executive of the emotional brain’ acts as a regulatory system, and is expanded in the right hemisphere, which is dominant in human infancy and centrally involved in inhibitory control. Attachment theory is essentially a regulatory theory, and attachment can be defined as the interactive regulation of biological synchronicity between organisms. This model suggests that future directions of attachment research should focus upon the early-forming psychoneurobiological mechanisms that mediate both adaptive and maladaptive regulatory processes. Such studies will have direct applications to the creation of more effective preventive and treatment methodologies.
Adsorption of volatile sulphur compounds onto modified activated carbons: effect of oxygen functional groups.
The effect of the physical and chemical properties of activated carbon (AC) on the adsorption of ethyl mercaptan, dimethyl sulphide and dimethyl disulphide was investigated by treating a commercial AC with nitric acid and ozone. The chemical properties of the ACs were characterised by temperature-programmed desorption and X-ray photoelectron spectroscopy. AC treated with nitric acid presented a larger amount of oxygen functional groups than materials oxidised with ozone. This enrichment allowed a significant improvement in adsorption capacities for ethyl mercaptan and dimethyl sulphide but not for dimethyl disulphide. In order to gain deeper knowledge of the effect of the surface chemistry of AC on the adsorption of volatile sulphur compounds, the quantum-chemical COSMO-RS method was used to simulate the interactions between AC surface groups and the studied volatile sulphur compounds. In agreement with experimental data, this model predicted a greater affinity of dimethyl disulphide towards AC, unaffected by the incorporation of oxygen functional groups on the surface. Moreover, the model pointed to an increase of the adsorption capacity of AC by the incorporation of hydroxyl functional groups in the case of ethyl mercaptan and dimethyl sulphide, due to hydrogen-bond interactions.
Factors affecting attitudes and intentions towards knowledge sharing in the Dubai Police Force
This study contributes to the limited research base on knowledge sharing in public sector organisations, specifically police forces, and organisations in the Middle East through a case study investigation into the factors that affect knowledge sharing in the Dubai Police Force. A questionnaire-based survey was conducted with staff in key departments in the Dubai Police Force. Informed by the literature and by interviews conducted in a previous phase, the core of the questionnaire was a bank of Likert-style questions covering the dependent variables, intention to share knowledge and attitude towards knowledge sharing, and the independent variables, trust, organisational structure, leadership, reward, time, and information technology. Data was analysed using structural equation modelling, in order to test the measurement model using confirmatory factor analysis, and to test the structural model.
Led-based optical cochlear implant on highly flexible triple layer polyimide substrates
This paper reports on the design, fabrication, and assembly, as well as the optical, mechanical and thermal characterization, of a novel MEMS-based optical cochlear implant (OCI). Building on advances in optogenetics, it will enable the optical stimulation of neural activity in the auditory pathway at 10 independently controlled spots. The optical stimulation of the spiral ganglion neurons (SGNs) promises a pronounced increase in the number of discernible acoustic frequency channels in comparison with commercial cochlear implants based on electrical stimulation. Ten high-efficiency light-emitting diodes are integrated as a linear array onto a highly flexible, only 12-μm-thick polyimide substrate with three metal and three polyimide layers. The high mechanical flexibility of this novel OCI enables its insertion into a 300 μm wide channel with an outer bending radius of 1 mm. The 2 cm long and only 240 μm wide OCI is electrically passivated with a thin layer of Cytop™.
Behavioral interventions and counseling to prevent child abuse and neglect: a systematic review to update the US Preventive services task force recommendation.
BACKGROUND In 2004, the U.S. Preventive Services Task Force determined that evidence was insufficient to recommend behavioral interventions and counseling to prevent child abuse and neglect. PURPOSE To review new evidence on the effectiveness of behavioral interventions and counseling in health care settings for reducing child abuse and neglect and related health outcomes, as well as adverse effects of interventions. DATA SOURCES MEDLINE and PsycINFO (January 2002 to June 2012), Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews (through the second quarter of 2012), Scopus, and reference lists. STUDY SELECTION English-language trials of the effectiveness of behavioral interventions and counseling and studies of any design about adverse effects. DATA EXTRACTION Investigators extracted data about study populations, designs, and outcomes and rated study quality using established criteria. DATA SYNTHESIS Eleven fair-quality randomized trials of interventions and no studies of adverse effects met inclusion criteria. A trial of risk assessment and interventions for abuse and neglect in pediatric clinics for families with children aged 5 years or younger indicated reduced physical assault, Child Protective Services (CPS) reports, nonadherence to medical care, and immunization delay among screened children. Ten trials of early childhood home visitation reported reduced CPS reports, emergency department visits, hospitalizations, and self-reports of abuse and improved adherence to immunizations and well-child care, although results were inconsistent. LIMITATION Trials were limited by heterogeneity, low adherence, high loss to follow-up, and lack of standardized measures. CONCLUSION Risk assessment and behavioral interventions in pediatric clinics reduced abuse and neglect outcomes for young children. Early childhood home visitation also reduced abuse and neglect, but results were inconsistent. Additional research on interventions to prevent child abuse and neglect is needed. PRIMARY FUNDING SOURCE Agency for Healthcare Research and Quality.
A printed log-periodic dipole antenna with balanced feed structure
A printed log-periodic dipole antenna (PLDPA) with a balanced feed structure is proposed. The traditional feeding structure, which uses a single coaxial cable and feed lines of equal width, leads to an unbalanced current distribution, so the radiation pattern exhibits a frequency-dependent beam deflection. In this paper, we modify the widths of the two feed lines to compensate for the effect of the soldered external shield of the coaxial cable, i.e., to obtain equivalent conductivity on both sides of the feed lines. On one side of the substrate, the external shield of the coaxial feeder is soldered to the narrower feed line, while the inner conductor is connected to the wider one on the other side. The simulated return loss is less than -10 dB from 0.5 GHz to 3 GHz (6 : 1). The simulated gain varies between 7 dB and 7.5 dB. The antenna's behaviour is improved and the radiation pattern becomes stable when the proposed feeding structure is employed.
Predictive factors for the effect of the α1-D/A adrenoceptor antagonist naftopidil on subjective and objective criteria in patients with neurogenic lower urinary tract dysfunction.
OBJECTIVES • To assess the effect of the α1-D/A adrenoceptor antagonist naftopidil on patients with neurogenic lower urinary tract dysfunction (NLUTD) and voiding difficulty. • To explore the effectiveness of naftopidil in these patients by using urodynamic variables, including the pressure flow study (PFS), and to find good and simple parameters (International Prostate Symptom Score (IPSS), post-void residual urine (PVR), and uroflowmetry (UFM) parameters) as substitutes for PFS in predicting the effect of naftopidil. PATIENTS AND METHODS • The main inclusion and exclusion criteria were: IPSS ≥8, voiding symptoms with IPSS ≥5, IPSS-quality of life (QOL) ≥2, PVR ≥50 mL, and no prostatic enlargement ≥20 mL. • After initial assessment, patients were administered the following stepwise regimen over 12 weeks: placebo for 2 weeks, naftopidil 25 mg/day for 2 weeks, naftopidil 50 mg/day for 2 weeks, and naftopidil 75 mg/day for 6 weeks. At the end of both the placebo period and 6 weeks of naftopidil 75 mg/day, IPSS, UFM, PVR, and PFS were assessed. • A total of 82 Japanese patients (40 men, 42 women) with lower urinary tract symptoms complicated by NLUTD, with a mean age of 63.9 years, were included from private or institutional clinics. • The lesions involved the spinal cord in 42 patients and the peripheral nervous system in 40. The spinal cord lesions were all lumbar (injury or lumbar canal stenosis). RESULTS • In all patients, pressure at maximum urinary flow rate (P(det)Q(max)) in PFS significantly decreased (P < 0.05), and maximum urinary flow rate in UFM significantly increased (P < 0.01). Analysis of data for men and for women also showed a significant decrease in PVR, %PVR, and total IPSS score. • The degree of improvement of voided volume, PVR (%), and IPSS in patients with PVR <300 mL was significantly greater than in patients with PVR ≥300 mL. • The degree of improvement of P(det)Q(max) in PFS and IPSS in patients with bladder contractility was significantly greater than in patients without bladder contractility. CONCLUSIONS • The α1-D/A adrenoceptor antagonist naftopidil has a significant effect on both symptoms and urodynamic variables in patients of both genders with NLUTD in Japan. • PVR <300 mL and bladder contractility are predictive factors for the efficacy of naftopidil in patients with NLUTD.
Interpretable and Informative Explanations of Outcomes
In this paper, we solve the following data summarization problem: given a multi-dimensional data set augmented with a binary attribute, how can we construct an interpretable and informative summary of the factors affecting the binary attribute in terms of the combinations of values of the dimension attributes? We refer to such summaries as explanation tables. We show the hardness of constructing optimally-informative explanation tables from data, and we propose effective and efficient heuristics. The proposed heuristics are based on sampling and include optimizations related to computing the information content of a summary from a sample of the data. Using real data sets, we demonstrate the advantages of explanation tables compared to related approaches that can be adapted to solve our problem, and we show significant performance benefits of our optimizations.
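As a rough illustration of what "informative" can mean here (an information-gain-style stand-in, not the authors' exact objective or their sampling heuristics), a candidate pattern over the dimension attributes can be scored by the support-weighted divergence of its outcome rate from the global rate of the binary attribute:

```python
import math

def pattern_gain(rows, pattern, outcome="y"):
    """Score a pattern (attr -> value dict) by how informative it is
    about a binary outcome: support-weighted KL divergence between the
    outcome rate under the pattern and the global outcome rate.
    Illustrative stand-in, not the paper's exact objective."""
    match = [r for r in rows if all(r[a] == v for a, v in pattern.items())]
    if not match:
        return 0.0
    p = sum(r[outcome] for r in match) / len(match)  # rate under pattern
    q = sum(r[outcome] for r in rows) / len(rows)    # global rate

    def kl(p, q):
        # Binary KL divergence; zero-probability terms contribute nothing.
        return sum(a * math.log(a / b)
                   for a, b in ((p, q), (1 - p, 1 - q)) if a > 0)

    return (len(match) / len(rows)) * kl(p, q)

rows = [
    {"os": "ios", "country": "US", "y": 1},
    {"os": "ios", "country": "US", "y": 1},
    {"os": "android", "country": "US", "y": 0},
    {"os": "android", "country": "UK", "y": 0},
]
print(pattern_gain(rows, {"os": "ios"}))  # high: the pattern flips the rate
```

A greedy builder of an explanation table would repeatedly add whichever candidate pattern scores highest under such a criterion, which is where the paper's sampling-based optimizations come in.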
Overexpression and properties of a new thermophilic and thermostable esterase from Bacillus acidocaldarius with sequence similarity to hormone-sensitive lipase subfamily.
We previously purified a new esterase from the thermoacidophilic eubacterium Bacillus acidocaldarius whose N-terminal sequence corresponds to an open reading frame (ORF3) reported to show homology with the mammalian hormone-sensitive lipase (HSL)-like group of the esterase/lipase family. To compare the biochemical properties of this thermophilic enzyme with those of the homologous mesophilic and psychrophilic members of the HSL group, an overexpression system in Escherichia coli was established. The protein, expressed in soluble and active form at 10 mg/l of E. coli culture, was purified to homogeneity and characterized biochemically. The enzyme, a 34 kDa monomeric protein, was demonstrated to be a B'-type carboxylesterase (EC 3.1.1.1) on the basis of substrate specificity and the action of inhibitors. Among the p-nitrophenyl (PNP) esters tested, the best substrate was PNP-hexanoate, with Km and kcat values of 11 ± 2 μM (mean ± S.D., n = 3) and 6610 ± 880 s⁻¹ (mean ± S.D., n = 3), respectively, at 70 °C and pH 7.1. In spite of relatively high sequence identity with the mammalian HSLs, the psychrophilic Moraxella TA144 lipase 2 and the human liver arylacetamide deacetylase, no lipase or amidase activity was detected. A series of substrates were tested for enantioselectivity. Substantial enantioselectivity was observed only in the resolution of (±)-3-bromo-5-(hydroxymethyl)-Δ2-isoxazoline, where the (R)-product was obtained with an 84% enantiomeric excess at 36% conversion. The enzyme was also able to synthesize acetyl esters when tested in vinyl acetate and toluene. Inactivation by diethyl pyrocarbonate, diethyl p-nitrophenyl phosphate, di-isopropyl phosphofluoridate (DFP) and physostigmine, as well as labelling with [3H]DFP, supported our previous suggestion of a catalytic triad made up of Ser-His-Asp. The activity-stability-temperature relationship is discussed in relation to those of the homologous members of the HSL group.
REAL-TIME CONSTRAINED TRAJECTORY GENERATION APPLIED TO A FLIGHT CONTROL EXPERIMENT
A computational approach to generate real-time, optimal trajectories for a flight control experiment is presented. Minimum-time trajectories are computed for hover-to-hover and forward-flight maneuvers. Instantaneous changes in the trajectory constraints that model obstacles and threats are also investigated. Experimental results using the Nonlinear Trajectory Generation software package show good closed-loop performance for all maneuvers. Success of the algorithm demonstrates that high-confidence real-time trajectory generation is achievable in spite of the highly nonlinear and non-convex nature of the problem.
A generic neural acoustic beamforming architecture for robust multi-channel speech processing
Acoustic beamforming can greatly improve the performance of Automatic Speech Recognition (ASR) and speech enhancement systems when multiple channels are available. We recently proposed a way to support the model-based Generalized Eigenvalue beamforming operation with a powerful neural network for spectral mask estimation. The enhancement system has a number of desirable properties. In particular, no assumptions need to be made about the nature of the acoustic transfer function (e.g., that it is anechoic), nor does the array configuration need to be known. While the system was originally developed to enhance speech in noisy environments, we show in this article that it is also effective in suppressing reverberation, thus leading to a generic trainable multi-channel speech enhancement system for robust speech processing. To support this claim, we consider two distinct datasets: the CHiME 3 challenge, which features challenging real-world noise distortions, and the Reverb challenge, which focuses on distortions caused by reverberation. We evaluate the system with respect to both a speech enhancement and a recognition task. For the first task, we propose a new way to cope with the distortions introduced by the Generalized Eigenvalue beamformer by renormalizing the target energy for each frequency bin, and measure its effectiveness in terms of the PESQ score. For the latter, we feed the enhanced signal to a strong DNN back-end and achieve state-of-the-art ASR results on both datasets. We further experiment with different network architectures for spectral mask estimation: one small feed-forward network with only one hidden layer, one Convolutional Neural Network, and one bi-directional Long Short-Term Memory network, showing that even a small network is capable of delivering significant performance improvements.
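For orientation, a minimal numerical sketch of mask-based GEV beamforming as described (assuming the network has already produced per-frequency speech and noise masks; the target-energy renormalization mentioned above is omitted):

```python
import numpy as np
from scipy.linalg import eigh

def gev_beamformer(Y, speech_mask, noise_mask):
    """Y: (F, T, C) complex STFT of the multi-channel mixture.
    masks: (F, T) values in [0, 1] from the mask-estimation network.
    Returns the beamformed STFT of shape (F, T). Minimal sketch of
    mask-based GEV beamforming; no distortion post-filtering applied."""
    F, T, C = Y.shape
    out = np.zeros((F, T), dtype=complex)
    for f in range(F):
        Yf = Y[f]  # (T, C)
        # Mask-weighted spatial covariance estimates (Hermitian by construction).
        Phi_xx = (speech_mask[f, :, None, None] *
                  Yf[:, :, None] * Yf[:, None, :].conj()).sum(0)
        Phi_nn = (noise_mask[f, :, None, None] *
                  Yf[:, :, None] * Yf[:, None, :].conj()).sum(0)
        Phi_nn += 1e-6 * np.eye(C)  # regularize for positive definiteness
        # The principal generalized eigenvector maximizes the output SNR.
        _, vecs = eigh(Phi_xx, Phi_nn)
        w = vecs[:, -1]
        out[f] = Yf @ w.conj()
    return out
```

The design choice worth noting is that all spatial knowledge enters through the two covariance matrices, which is why neither the transfer function nor the array geometry needs to be known in advance.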
The Graph Traversal Pattern
A graph is a structure composed of a set of vertices (i.e. nodes, dots) connected to one another by a set of edges (i.e. links, lines). The concept of a graph has been around since the late 19th century; however, only in recent decades has there been a strong resurgence in both theoretical and applied graph research in mathematics, physics, and computer science. In applied computing, since the late 1960s, the interlinked table structure of the relational database has been the predominant information storage and retrieval model. With the growth of graph/network-based data and the need to efficiently process such data, new data management systems have been developed. In contrast to the index-intensive, set-theoretic operations of relational databases, graph databases make use of index-free, local traversals. This article discusses the graph traversal pattern and its use in computing.
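The contrast the article draws (index-free local hops instead of set-theoretic joins) can be made concrete with a toy adjacency-list graph; the vertex names below are of course illustrative:

```python
# Toy graph as adjacency lists: traversal is a chain of constant-time
# hops from a vertex to its adjacent vertices, with no index lookups.
graph = {
    "alice": ["bob", "carol"],
    "bob":   ["carol"],
    "carol": ["dave"],
    "dave":  [],
}

def friends_of_friends(graph, start):
    """Two-step traversal: neighbors of neighbors, excluding the start."""
    result = set()
    for friend in graph.get(start, []):
        for fof in graph.get(friend, []):
            if fof != start:
                result.add(fof)
    return result

print(friends_of_friends(graph, "alice"))  # {'carol', 'dave'}
```

In a relational schema the same two-step query would typically be a self-join on an edge table, mediated by an index; in the traversal pattern each vertex directly holds references to its neighbors.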
Ionizing radiation risks of cardiac imaging: estimates of the immeasurable.
For many decades, the search for a non-invasive visualization of the coronary arteries seemed to remain an unfulfilled promise to clinical cardiologists. Owing to the rapid refinements successfully implemented in computed tomography (CT) technology over the past few years, non-invasive imaging of coronary arteries is now not only feasible but has become a reality in daily routine. This may—at least in part—have contributed to the fact that the number of CT scans performed in the USA has quadrupled since 1993. Although in a recent US survey CT and nuclear imaging accounted for just 21% of the total number of procedures, they resulted in >75% of the total cumulative effective radiation dose. We have witnessed an impressive six-fold increase in the radiation dose from medical imaging delivered per patient over the last three decades. Interestingly, half of all nuclear medicine procedures worldwide and 25% of all X-ray studies are performed in the USA (constituting 5% of the world's population), doubling and tripling those of other developed countries. In this context, it appears appropriate that the radiation exposure experienced by patients undergoing any medical imaging procedure has recently obtained growing attention and publicity. Although some surveys have investigated the overall amount of radiation exposure (Table 1) from any medical imaging procedure, others have focused specifically on the radiation dose to patients from cardiac imaging. Among these, CT coronary angiography has faced the greatest attention, probably because this modern development was introduced as the last cardiac imaging technique and also because CT is generally perceived as being associated with a high radiation dose to the patient. In fact, in its infancy, radiation doses >20 mSv were reported for CT coronary angiography. Although comparable doses have also been reported in some surveys for purely diagnostic coronary catheterization, which is invasive and achieves only low diagnostic yield in actual daily clinical routine, this has fuelled a vivid discussion on the potential harms arising from non-invasive CT coronary angiography, questioning the justification of its use in large populations and calling for more efficient radiation protection measures for patients undergoing CT angiography. Remarkably, whereas the potential benefits of medical imaging procedures are generally left unmentioned in the radiation safety discussion although they can be scientifically quantified, the risk of cancer from the low radiation doses used in medical imaging can only be roughly estimated by statistical calculations based on assumptions of the linear no-threshold theory. This means that data from Hiroshima are extrapolated down to the lowest doses, although no studies have ever verified the assumptions about cancer associated with the doses used in medical imaging. Instead, even the authors of the largest recent survey on low-dose ionizing radiation from medical imaging procedures have agreed that the data associating low-dose radiation with cancer risk are not definitive. Similarly, the Health Physics Society has concluded in a position statement that although there is substantial and convincing scientific evidence for health risks following high-dose exposures, risks of health effects for doses <50–100 mSv are ‘either too small to be observed or are nonexistent’.
Nevertheless, following the principle of keeping radiation exposure as low as reasonably achievable, several strategies to reduce radiation dose in CT coronary angiography have been explored, such as automated exposure control, electrocardiographically controlled tube modulation, and reduced tube voltage (from 120 to 100 kV) in non-obese patients. A prospective controlled multicentre trial has confirmed that introduction of a collaborative radiation dose-reduction programme was associated with a 53% reduction in radiation dose from 21 to 10 mSv in patients undergoing CT coronary angiography. A recent milestone in
Multi-stream Processing for Noise Robust Speech Recognition
In this thesis, the framework of multi-stream combination has been explored to improve the noise robustness of automatic speech recognition (ASR) systems. The central idea of multi-stream ASR is to combine information from several sources to improve the performance of a system. The two important issues in multi-stream systems are which information sources (feature representations) to combine and what importance (weights) should be given to each information source. In the framework of hybrid hidden Markov model/artificial neural network (HMM/ANN) and Tandem systems, several weighting strategies are investigated in this thesis to merge the posterior outputs of multi-layered perceptrons (MLPs) trained on different feature representations. The best results were obtained by inverse entropy weighting, in which the posterior estimates at the output of the MLPs were weighted by their respective inverse output entropies. In the second part of this thesis, two feature representations have been investigated, namely pitch frequency and spectral entropy features. The pitch frequency feature is used along with perceptual linear prediction (PLP) features in a multi-stream framework. The second feature proposed in this thesis is estimated by applying an entropy function to the normalized spectrum to produce a measure which has been termed spectral entropy. The idea of the spectral entropy feature is extended to multi-band spectral entropy features by dividing the normalized full-band spectrum into sub-bands and estimating the spectral entropy of each sub-band. The proposed multi-band spectral entropy features were observed to be robust in high noise conditions. Subsequently, the idea of embedded training is extended to multi-stream HMM/ANN systems. To evaluate the maximum performance that can be achieved by frame-level weighting, we investigated an “oracle test”. We also studied the relationship of oracle selection to inverse entropy weighting and proposed an alternative interpretation of the oracle test to analyze the complementarity of streams in multi-stream systems. The techniques investigated in this work gave a significant improvement in performance for clean as well as noisy test conditions.
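A minimal sketch of the inverse entropy weighting described above (assuming each stream yields a frames × classes posterior matrix whose rows sum to one; per-frame weights are one illustrative choice):

```python
import numpy as np

def inverse_entropy_combine(streams, eps=1e-10):
    """streams: list of (T, K) posterior matrices, one per feature stream.
    Each frame's stream weight is proportional to the inverse entropy of
    that stream's posterior distribution at that frame, so confident
    (low-entropy) streams dominate the merged posteriors."""
    streams = [np.asarray(s) for s in streams]
    weights = []
    for s in streams:
        H = -(s * np.log(s + eps)).sum(axis=1)  # per-frame entropy
        weights.append(1.0 / (H + eps))
    W = np.stack(weights, axis=0)               # (num_streams, T)
    W /= W.sum(axis=0, keepdims=True)           # normalize over streams
    return sum(w[:, None] * s for w, s in zip(W, streams))  # (T, K)
```

The intuition is that a flat (high-entropy) posterior signals an unreliable stream for that frame, so down-weighting by entropy acts as a cheap, data-driven reliability measure.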
Internet of Things : Features , Challenges , and Vulnerabilities Authors
The terminology Internet of Things (IoT) refers to a future where everyday physical objects are connected by the Internet in one form or another, but outside the traditional desktop realm. The successful emergence of the IoT vision, however, will require computing to extend past traditional scenarios involving portables and smart-phones to the connection of everyday physical objects and the integration of intelligence with the environment. Subsequently, this will lead to the development of new computing features and challenges. The main purpose of this paper, therefore, is to investigate the features, challenges, and weaknesses that will come about as the IoT becomes reality with the connection of more and more physical objects. Specifically, the study seeks to assess emergent challenges due to denial of service attacks, eavesdropping, node capture in the IoT infrastructure, and physical security of the sensors. We conducted a literature review about IoT, its features, challenges, and vulnerabilities. The methodology paradigm used was qualitative in nature with an exploratory research design, while data was collected using the desk research method. We found that, in the distributed form of architecture in IoT, attackers could hijack unsecured network devices, converting them into bots to attack third parties. Moreover, attackers could target communication channels and extract data from the information flow. Finally, the perceptual layer in distributed IoT architecture is also found to be vulnerable to node capture attacks, including physical capture, brute force attacks, DDoS attacks, and node privacy leaks.
Morpho-syntactic Lexical Generalization for CCG Semantic Parsing
In this paper, we demonstrate that significant performance gains can be achieved in CCG semantic parsing by introducing a linguistically motivated grammar induction scheme. We present a new morpho-syntactic factored lexicon that models systematic variations in morphology, syntax, and semantics across word classes. The grammar uses domain-independent facts about the English language to restrict the number of incorrect parses that must be considered, thereby enabling effective learning from less data. Experiments in benchmark domains match previous models with one quarter of the data and provide new state-of-the-art results with all available data, including up to 45% relative test-error reduction.
900 MHz and 2.45 GHz compact dual-band circularly-polarized patch antenna for RFID application
A compact design of a circularly-polarized microstrip antenna achieving dual-band behavior for Radio Frequency Identification (RFID) applications is presented. A defected ground structure (DGS) technique is used to miniaturize the antenna and obtain the dual-band response; the overall size is 38×40×1.58 mm³. The antenna was designed to cover both the ultra-high-frequency band (740 MHz ~ 1 GHz) and the higher band (2.35 GHz ~ 2.51 GHz) with return loss < -10 dB; the 3-dB axial-ratio bandwidth is about 110 MHz at the lower band (900 MHz).
Chymotrypsin Adsorption on Montmorillonite: Enzymatic Activity and Kinetic FTIR Structural Analysis.
Soils have a large solid surface area and high adsorptive capacities. To determine if structural and solvation changes induced by adsorption on clays are related to changes in enzyme activity, alpha-chymotrypsin adsorbed on a phyllosilicate with an electronegative surface (montmorillonite) has been studied by transmission FTIR spectroscopy. A comparison of the pH-dependent structural changes for the solution and adsorbed states probes the electrostatic origin of the adsorption. In the pD range 4.5-10, adsorption only perturbs some peripheral domains of the protein compared to the solution. Secondary structure unfolding affects about 15-20 peptide units. Parts of these domains become hydrated and others entail some self-association. However, the inactivation of the catalytic activity of the adsorbed enzyme in the 5-7 pD range is due less to these structural changes than to steric hindrance when three essential imino/amino functions, located close to the entrance of the catalytic cavity (His-40 and -57 residues and Ala-149 end chain residue), are oriented toward the negatively charged mineral surface. When these functions lose their positive charge, the orientation of the adsorbed enzyme is changed and an activity similar to that in solution at equivalent pH is recovered. This result is of fundamental interest in all fields of research where enzymatic activity is monitored using reversible adsorption procedures. Copyright 1999 Academic Press.
"This Post Will Just Get Taken Down": Characterizing Removed Pro-Eating Disorder Social Media Content
Social media sites like Facebook and Instagram remove content that is against community guidelines or is perceived to be deviant behavior. Users also delete their own content that they feel is not appropriate within personal or community norms. In this paper, we examine characteristics of over 30,000 pro-eating disorder (pro-ED) posts that were at one point public on Instagram but have since been removed. Our work shows that straightforward signals can be found in deleted content that distinguish them from other posts, and that the implications of such classification are immense. We build a classifier that compares public pro-ED posts with this removed content that achieves moderate accuracy of 69%. We also analyze the characteristics in content in each of these post categories and find that removed content reflects more dangerous actions, self-harm tendencies, and vulnerability than posts that remain public. Our work provides early insights into content removal in a sensitive community and addresses the future research implications of the findings.
Gaussian Process Regression Networks
We introduce a new regression framework, Gaussian process regression networks (GPRN), which combines the structural properties of Bayesian neural networks with the nonparametric flexibility of Gaussian processes. GPRN accommodates input (predictor) dependent signal and noise correlations between multiple output (response) variables, input dependent length-scales and amplitudes, and heavy-tailed predictive distributions. We derive both elliptical slice sampling and variational Bayes inference procedures for GPRN. We apply GPRN as a multiple output regression and multivariate volatility model, demonstrating substantially improved performance over eight popular multiple output (multi-task) Gaussian process models and three multivariate volatility models on real datasets, including a 1000 dimensional gene expression dataset.
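In the usual presentation of this model, each output is an input-dependent mixture of shared latent Gaussian processes; a common way to write the generative equation (notation assumed here, not quoted from the paper) is:

```latex
\mathbf{y}(x) \;=\; W(x)\,\bigl[\mathbf{f}(x) + \sigma_f\,\boldsymbol{\epsilon}\bigr] \;+\; \sigma_y\,\mathbf{z},
\qquad \boldsymbol{\epsilon},\,\mathbf{z} \sim \mathcal{N}(\mathbf{0}, I),
```

where the latent functions in f(x) and each entry of the mixing matrix W(x) carry independent GP priors. Because the mixing weights W(x) themselves vary with the input, the induced signal and noise correlations between outputs become input dependent, which is the property the abstract highlights.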
Towards smart city: M2M communications with software agent intelligence
Recent advances in wireless technology have shown strong potential for improving human life by means of ubiquitous communication devices that enable smart, distributed services. In fact, traditional human-to-human (H2H) communications are gradually falling short of the scale required. Consequently, machine-to-machine (M2M) communications have surpassed H2H, thus drawing significant interest from industry and the research community recently. This paper first presents a four-layer architecture for the Internet of Things (IoT). Based on this architecture, we employ second-generation RFID technology to propose a novel intelligent system for M2M communications.
Using Expectation-Maximization for Reinforcement Learning
We discuss Hinton's (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent. We show circumstances under which applying the RPP is guaranteed to increase the mean return, even though it can make large changes in the values of the parameters. The proof is based on a mapping between the RPP and a form of the expectation-maximization procedure of Dempster, Laird, and Rubin (1977).
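Schematically (a simplified rendering for a single stochastic action, not the paper's exact notation), the RPP re-weights action probabilities in proportion to their payoffs, and the EM-style argument shows the mean return cannot decrease when returns are non-negative:

```latex
\pi_{t+1}(a) \;=\; \frac{\pi_t(a)\,\mathbb{E}[r \mid a]}{\sum_{a'} \pi_t(a')\,\mathbb{E}[r \mid a']},
\qquad r \ge 0 \;\;\Longrightarrow\;\; \mathbb{E}_{\pi_{t+1}}[r] \;\ge\; \mathbb{E}_{\pi_t}[r].
```

The interesting point is that this guarantee holds even though the update can move the parameters far from their current values, unlike a small-step stochastic gradient method.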
Astronomical Tests of Relativity: Beyond Parameterized Post-Newtonian Formalism (PPN), to Testing Fundamental Principles
By the early 1970s, the improved accuracy of astrometric and time measurements enabled researchers not only to experimentally compare relativistic gravity with the Newtonian predictions, but also to compare different relativistic gravitational theories (e.g., the Brans-Dicke Scalar-Tensor Theory of Gravitation). For this comparison, Kip Thorne and others developed the Parameterized Post-Newtonian Formalism (PPN), and derived the dependence of different astronomically observable effects on the values of the corresponding parameters. Since then, all the observations have confirmed General Relativity. In other words, the question of which relativistic gravitation theory is in the best accordance with the experiments has been largely settled. This does not mean that General Relativity is the final theory of gravitation: it needs to be reconciled with quantum physics (into quantum gravity), it may also need to be reconciled with numerous surprising cosmological observations, etc. It is, therefore, reasonable to prepare an extended version of the PPN formalism, that will enable us to test possible quantum-related modifications of General Relativity. In particular, we need to include the possibility of violating fundamental principles that underlie the PPN formalism but that may be violated in quantum physics, such as scale-invariance, T-invariance, P-invariance, energy conservation, spatial isotropy, etc. In this paper, we present the first attempt to design the corresponding extended PPN formalism, with a (partial) analysis of the relation between the corresponding fundamental physical principles.
Dietary n-6 and n-3 polyunsaturated fatty acids: from biochemistry to clinical implications in cardiovascular prevention.
Linoleic acid (LA) and alpha-linolenic acid (ALA) belong to the n-6 (omega-6) and n-3 (omega-3) series of polyunsaturated fatty acids (PUFA), respectively. They are defined as "essential" fatty acids since they are not synthesized in the human body and are mostly obtained from the diet. Food sources of ALA and LA include most vegetable oils, cereals and walnuts. This review critically examines the most significant epidemiological and interventional studies on the cardioprotective activity of PUFAs, linking their biological functions to biochemistry and metabolism. In fact, a complex series of desaturation and elongation reactions acting in concert transform LA and ALA into their higher unsaturated derivatives: arachidonic acid (AA) from LA, and eicosapentaenoic (EPA) and docosahexaenoic (DHA) acids from ALA. EPA and DHA are abundantly present in fish and fish oil. AA and EPA are precursors of different classes of pro-inflammatory or anti-inflammatory eicosanoids, respectively, whose biological activities have been invoked to justify the risks and benefits of PUFA consumption. The controversial origin and clinical role of the n-6/n-3 ratio as a potential risk factor in cardiovascular diseases is also examined. This review highlights the important cardioprotective effect of n-3 in the secondary prevention of sudden cardiac death due to arrhythmias, but suggests caution in recommending dietary supplementation of PUFAs to the general population without considering, at the individual level, the intake of total energy and fats.
A ±5-V CMOS analog multiplier
A four-quadrant CMOS analog multiplier is presented. The device is nominally biased with ±5-V supplies, has identical full-scale single-ended x and y inputs of ±4 V, and exhibits less than 0.5 percent
Box in the Box: Joint 3D Layout and Object Reasoning from Single Images
In this paper we propose an approach to jointly infer the room layout as well as the objects present in the scene. Towards this goal, we propose a branch and bound algorithm which is guaranteed to retrieve the global optimum of the joint problem. The main difficulty resides in taking into account occlusion in order to not over-count the evidence. We introduce a new decomposition method, which generalizes integral geometry to triangular shapes, and allows us to bound the different terms in constant time. We exploit both geometric cues and object detectors as image features and show large improvements in 2D and 3D object detection over state-of-the-art deformable part-based models.
Predicting employee expertise for talent management in the enterprise
Strategic planning and talent management in large enterprises composed of knowledge workers requires complete, accurate, and up-to-date representation of the expertise of employees in a form that integrates with business processes. Like other similar organizations operating in dynamic environments, the IBM Corporation strives to maintain such current and correct information, specifically assessments of employees against job roles and skill sets from its expertise taxonomy. In this work, we deploy an analytics-driven solution that infers the expertise of employees through the mining of enterprise and social data that is not specifically generated and collected for expertise inference. We consider job role and specialty prediction and pose them as supervised classification problems. We evaluate a large number of feature sets, predictive models and postprocessing algorithms, and choose a combination for deployment. This expertise analytics system has been deployed for key employee population segments, yielding large reductions in manual effort and the ability to continually and consistently serve up-to-date and accurate data for several business functions. This expertise management system is in the process of being deployed throughout the corporation.
Compact triple C shaped microstrip patch antenna for WLAN, WiMAX & Wi-Fi application at 2.5 GHz
In the rapid progress of commercial communication applications, the development of compact antennas has an important role. This paper presents an analysis of the performance of a single-band compact triple-C-shaped microstrip patch antenna for WLAN, WiMAX and Wi-Fi applications with a center frequency of 2.5 GHz. The microstrip antenna has a planar geometry and consists of a ground plane, a substrate, and a compact patch as the radiator. Different specifications of the proposed antenna are measured through computer simulation in free space. The antenna has an impedance bandwidth of 25 MHz (2487 MHz to 2512 MHz). The proposed antenna provides an excellent SWR of 1.04 and a return loss of -32.80 dB when excited by a 50 Ω microstrip feed line. All simulations were performed using CST Microwave Studio.
Nonanimal stabilized hyaluronic acid for lip augmentation and facial rhytid ablation.
OBJECTIVE To evaluate the effectiveness of nonanimal stabilized hyaluronic acid as an injectable filling agent. DESIGN Nonrandomized, retrospective, interventional case series. RESULTS A total of 1446 consecutive patients (1029 women and 417 men) underwent intradermal injection of commercially available nonanimal stabilized hyaluronic acid (2242 treatments) for the enhancement of lip volume and contour and the reduction of visible facial rhytids. Almost 61% of all patients remained satisfied with their results after 9 months. The effect was longest in the glabellar and nasolabial fold areas. Minimal transient sequelae were noted. CONCLUSIONS Nonanimal stabilized hyaluronic acid is an effective and safe facial soft tissue expander. Its duration varies with each facial area treated.
Studies in Scientometrics I Transience and Continuance in Scientific Authorship
Investigation of the transience / continuance phenomenon occurring at a research front. The annual output of authors in a random sample derived from seven years' data from the Science Citation Index and Who is Publishing in Science was analysed. In the whole period (1964-1970) there are 281 transient authors and 19 continuant authors, for a total population of 506 authors. By deriving a quantitative model for the author flow pattern analysis, it was shown that there is a birth rate (annual recruitment) and a death rate (annual termination) which overlap to give an infant mortality (transience). By refining the model, it was possible to define a core of continuant authors, which amounts to 20% of those publishing. The transient authors constitute 22% of the annual population and 2/3 of the newcomers to publication. The other identified components of a scientific community are the recruits, terminators, non-core publishing continuants and non-publishing continuants. These demographic properties are clearly associated with the lowest and highest rates of authors' productivity, the distributions of which follow Lotka's and Price's laws with great regularity. Thus it was possible to derive a lifetime expectancy at the research front which will be proportional to the time of active publication. On one end of the scale there is a majority of authors with a minimum life expectancy and a low average productivity (75% of the authors produce 25% of the papers); at the opposite extreme there are authors in the permanent nucleus (20%) with less average mortality and greater average productivity (more than half of the papers). All this is a result of the positive feedback or Matthew Principle in scientific publication. This situation seems so intrinsic that it must be regarded as the way in which society has adjusted its institutional structure to fit the cloth of scientific productivity and demography. Many of the richest areas for research in the sociology of science depend upon some understanding of what may be called the actuarial statistics of the scientific community. One needs to know the dynamical processes which govern emergence, survival and disappearance within that community. These determine the structure of the group by age, status, productivity, reputation and professional ties. Such studies have many of the same strengths and limitations as actuarial methods in demography and life insurance. Useful calculations may be made about the population in the large, but the bearing of the life of any individual remains statistical rather than causal. The purpose of this investigation is to uncover the facts and regularities which will require some theoretical explanation. Undoubtedly the most important phenomenon, hitherto not well recognized, is that at any given time a large number of those working at the research front are transients whose names have never appeared before and will not appear again in the record. The point has obvious application to the natural history of scientific careers, and it is also of fundamental importance to the analysis of manpower data in the sciences, since only part of the research labor force can be considered as stable. Previous work in this area has usually been based upon hand or machine counts that have been limited to a single nation, a scientific specialty, or just one journal or scientific institution.
The results have always been of questionable generality because of possible strong idiosyncrasies of these special groups and also because of the large general movement that exists across the boundaries of such groups as people change jobs and migrate through fields. We have been fortunate in having at our disposal data emerging as a by-product from the machine handling of a uniquely comprehensive and worldwide coverage of the literature in all fields of basic and applied science. For this reason the results are relatively free of local idiosyncrasies and are of general applicability to the scientific community. The data bank for this study was based upon volumes published by the Institute for Scientific Information, including several years of output of the Science Citation Index, with its indexes of Source Authors and Cited Authors, and the annual volumes of Who is Publishing in Science, which is derived from the weekly editions of Current
CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning
We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest X-ray dataset, containing over 100,000 frontal-view X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state-of-the-art results on all 14 diseases.
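The described architecture maps naturally onto an off-the-shelf DenseNet-121 with a 14-way head trained with per-disease binary cross-entropy; a minimal PyTorch sketch (not the authors' training code; the `weights` argument follows recent torchvision versions):

```python
import torch
import torch.nn as nn
from torchvision import models

# DenseNet-121 backbone with a 14-way head for multi-label chest X-ray
# classification; each disease is an independent binary prediction.
model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 14)

criterion = nn.BCEWithLogitsLoss()  # per-disease binary cross-entropy

x = torch.randn(8, 3, 224, 224)            # batch of channel-replicated X-rays
labels = torch.randint(0, 2, (8, 14)).float()
loss = criterion(model(x), labels)
loss.backward()
```

Multi-label sigmoid outputs, rather than a single softmax, matter here because a patient can present several of the 14 conditions at once.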
The importance of educational theories for facilitating learning when using technology in medical education.
BACKGROUND There is an increasing use of technology for teaching and learning in medical education but often the use of educational theory to inform the design is not made explicit. The educational theories, both normative and descriptive, used by medical educators determine how the technology is intended to facilitate learning and may explain why some interventions with technology may be less effective compared with others. AIMS The aim of this study is to highlight the importance of medical educators making explicit the educational theories that inform their design of interventions using technology. METHOD The use of illustrative examples of the main educational theories to demonstrate the importance of theories informing the design of interventions using technology. RESULTS Highlights the use of educational theories for theory-based and realistic evaluations of the use of technology in medical education. CONCLUSION An explicit description of the educational theories used to inform the design of an intervention with technology can provide potentially useful insights into why some interventions with technology are more effective than others. An explicit description is also an important aspect of the scholarship of using technology in medical education.
Backprop KF: Learning Discriminative Deterministic State Estimators
Generative state estimators based on probabilistic filters and smoothers are one of the most popular classes of state estimators for robots and autonomous vehicles. However, generative models have limited capacity to handle rich sensory observations, such as camera images, since they must model the entire distribution over sensor readings. Discriminative models do not suffer from this limitation, but are typically more complex to train as latent variable models for state estimation. We present an alternative approach where the parameters of the latent state distribution are directly optimized as a deterministic computation graph, resulting in a simple and effective gradient descent algorithm for training discriminative state estimators. We show that this procedure can be used to train state estimators that use complex input, such as raw camera images, which must be processed using expressive nonlinear function approximators such as convolutional neural networks. Our model can be viewed as a type of recurrent neural network, and the connection to probabilistic filtering allows us to design a network architecture that is particularly well suited for state estimation. We evaluate our approach on a tracking task with raw image inputs. The results show significant improvement over both standard generative approaches and regular recurrent neural networks.
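The core idea can be shown in a few lines: write the Kalman measurement update with differentiable tensor operations, so a loss on the filtered state back-propagates into whatever network produced the observation. A simplified sketch (linear observation model; all names assumed, not the paper's architecture):

```python
import torch

def kalman_update(mu, Sigma, z, C, R):
    """One differentiable Kalman measurement update.
    mu: (n,) state mean, Sigma: (n, n) covariance,
    z: (m,) observation produced by a neural network,
    C: (m, n) observation matrix, R: (m, m) observation covariance.
    Every operation is differentiable, so gradients flow through."""
    S = C @ Sigma @ C.T + R                  # innovation covariance
    K = Sigma @ C.T @ torch.linalg.inv(S)    # Kalman gain
    mu_new = mu + K @ (z - C @ mu)
    Sigma_new = (torch.eye(mu.shape[0]) - K @ C) @ Sigma
    return mu_new, Sigma_new

# Example: a loss on the filtered state trains whatever produced z.
n, m = 4, 2
mu, Sigma = torch.zeros(n), torch.eye(n)
C, R = torch.randn(m, n), torch.eye(m)
z = torch.randn(m, requires_grad=True)       # stand-in for a CNN output
mu_new, _ = kalman_update(mu, Sigma, z, C, R)
loss = (mu_new ** 2).sum()
loss.backward()                               # gradients reach z
```

Unrolling such updates over a sequence is what makes the whole filter resemble a recurrent network, as the abstract notes.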
Relationships of personality, affect, emotional intelligence and coping with student stress and academic success: Different patterns of association for stress and success
The associations of personality, affect, trait emotional intelligence (EI) and coping style measured at the start of the academic year with later academic performance were examined in a group of undergraduate students at the University of Edinburgh. The associations of the dispositional and affect measures with concurrent stress and life satisfaction were also examined. The survey was completed by 238 students, of whom 163 gave permission for their end-of-year marks to be accessed. Complete data for modelling stress and academic success were available for 216 and 156 students respectively. The associations of academic success and stress differed, and high stress was not a risk factor for poor academic performance. Further analyses were based on the extraction of three composite factors (Emotional Regulation, Avoidance and Task Focus) from the EI and coping subscales. Structural equation modelling showed that academic performance was predicted by Conscientiousness, Agreeableness, positive affect and the Task Focus factor. Modelling for stress and life satisfaction showed relationships with personality, affect, and the Task Focus and Emotion Regulation factors. The Task Focus factor played a mediating role in both models, and the Emotion Regulation factor acted as a mediator in the model for stress and life satisfaction. The theoretical interpretation of these results, and their potential applications in interventions targeting at-risk students, are discussed. The emotions which students experience within the learning environment are known to be related to important outcomes such as academic success and academic adjustment, and also to student health and well-being. The specific topic of test anxiety and its effect on academic performance has been widely studied (e.g. Zeidner, 1995, 1996). Studies of other correlates of negative emotions have established associations with stress in students (Austin, Saklofske, & Mastoras, 2010) and with poorer academic adjustment (Halamandaris & Power, 1997). The role of positive emotions in educational contexts has been less widely researched but associations have been found with academic performance. In the context of studying student emotions, it is also appropriate to examine the potential utility of emotional intelligence (EI) as an explanatory variable. Models of EI highlight a range of emotion-related capabilities; a component of EI which appears to be particularly likely to support students in the learning environment is Emotion Regulation, since individuals who can regulate their emotions well are better able to manage stress. Other emotional capabilities such as being …
An Interpolating Digitally Controlled Oscillator for a Wide-Range All-Digital PLL
A digitally controlled oscillator (DCO) for the all-digital phase-locked loop (ADPLL) with both a wide frequency range and a high maximum frequency was proposed by using an interpolation scheme at both the coarse and fine delay blocks of the DCO. The coarse block consists of two ladder-shaped coarse delay chains. The delay of the first one is an odd multiple of an inverter delay and that of the second one is an even multiple. An interpolation operation is performed at the second coarse delay chain, which reduces both the resolution of the coarse delay block and the delay range of the fine block to half. This increases the maximum output frequency of the DCO while maintaining the wide frequency range. The ADPLL with the proposed DCO was fabricated in a 0.18 μm CMOS process with an active area of 0.32 mm². The measured output frequency of the ADPLL ranges from 33 to 1040 MHz at a supply of 1.8 V. The measured rms and peak-to-peak jitters are 13.8 ps and 86.7 ps, respectively, at an output frequency of 950 MHz. The power consumption is 15.7 mW.
Ultra-Low-Power Cascaded CMOS LNA With Positive Feedback and Bias Optimization
A novel circuit topology for a CMOS low-noise amplifier (LNA) is presented in this paper. By employing a positive feedback technique at the common-source transistor of the cascade stage, the voltage gain can be enhanced. In addition, with the MOS transistors biased in the moderate inversion region, the proposed LNA circuit is well suited to operate at reduced power consumption and supply voltage conditions. Utilizing a standard 0.18-μm CMOS process, the CMOS LNA has been demonstrated for 5-GHz frequency band applications. Operated at a supply voltage of 0.6 V, the LNA with the gain-boosting technique achieves a gain of 13.92 dB and a noise figure of 3.32 dB while consuming a dc power of 834 μW. The measured P1-dB and input third-order intercept point are -22.2 and -11.5 dBm, respectively.
Is trauma a causal agent of psychopathologic symptoms in posttraumatic stress disorder? Findings from identical twins discordant for combat exposure.
OBJECTIVE The diagnosis of posttraumatic stress disorder (PTSD) is unique in that its criteria are embedded with a presumed causal agent, viz, a traumatic event. This assumption has come under scrutiny as a number of recent studies have suggested that many symptoms of PTSD may not necessarily be the result of trauma and may merely represent general psychiatric symptoms that would have existed even in the absence of a trauma event but are subsequently misattributed to it. The current study tests this hypothesis. METHOD A case-control twin study conducted between 1996-2001 examined psychopathologic symptoms in a national convenience sample of 104 identical twin pairs discordant for combat exposure in Vietnam, with (n = 50) or without (n = 54) combat-related PTSD (DSM-IV-diagnosed) in the exposed twin. Psychometric measures used were the Symptom Checklist-90-Revised, the Clinician-Administered PTSD Scale, and the Mississippi Scale for Combat-Related PTSD. If a psychopathologic feature represents a factor that would have existed even without traumatic exposure, then there is a high chance that it would also be found at elevated rates in the non-trauma-exposed, identical cotwins of trauma-exposed twins with PTSD. In contrast, if a psychopathologic feature is acquired as a result of an environmental factor unique to the exposed twin, eg, the traumatic event, their cotwins should not have an increased incidence of the feature. RESULTS Combat veterans with PTSD demonstrated significantly higher scores (P < .0001) on the Symptom Checklist-90-Revised and other psychometric measures of psychopathology than their own combat-unexposed cotwins (and than combat veterans without PTSD and their cotwins). CONCLUSIONS These results support the conclusion that the majority of psychiatric symptoms reported by combat veterans with PTSD would not have been present were it not for their exposure to traumatic events.
AMC: AutoML for Model Compression and Acceleration on Mobile Devices
Model compression is a critical technique to efficiently deploy neural network models on mobile devices, which have limited computation resources and tight power budgets. Conventional model compression techniques rely on hand-crafted heuristics and rule-based policies that require domain experts to explore the large design space, trading off among model size, speed, and accuracy, which is usually sub-optimal and time-consuming. In this paper, we propose AutoML for Model Compression (AMC), which leverages reinforcement learning to provide the model compression policy. This learning-based compression policy outperforms the conventional rule-based compression policy, achieving a higher compression ratio, better preserving accuracy, and reducing human labor. Under 4× FLOPs reduction, we achieved 2.7% better accuracy than the hand-crafted model compression policy for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet and achieved 1.81× speedup of measured inference latency on an Android phone and 1.43× speedup on the Titan XP GPU, with only 0.1% loss of ImageNet Top-1 accuracy.
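The abstract does not spell out the agent, but the underlying search problem (per-layer pruning ratios under a global FLOPs budget, scored by resulting accuracy) can be caricatured with random search. Everything below, including the layer statistics and the accuracy proxy, is invented for illustration; AMC's contribution is precisely to replace this blind loop with a learned policy:

```python
import random

# Hypothetical per-layer data: FLOPs and a sensitivity score (accuracy
# drop per unit of pruning). Both are made up for illustration.
layer_flops = [90e6, 120e6, 200e6, 60e6]
sensitivity = [0.08, 0.02, 0.01, 0.12]

def proxy_accuracy(ratios, base=0.71):
    """Toy stand-in for 'prune + fine-tune + validate on ImageNet'."""
    return base - sum(s * r for s, r in zip(sensitivity, ratios))

def flops_after(ratios):
    return sum(f * (1 - r) for f, r in zip(layer_flops, ratios))

def random_search(budget, trials=2000, seed=0):
    """Sample per-layer pruning ratios; keep the most accurate
    configuration that meets the FLOPs budget. AMC replaces this blind
    loop with an RL agent that learns which layers tolerate pruning."""
    rng = random.Random(seed)
    best, best_acc = None, -1.0
    for _ in range(trials):
        ratios = [rng.uniform(0.0, 0.8) for _ in layer_flops]
        if flops_after(ratios) > budget:
            continue
        acc = proxy_accuracy(ratios)
        if acc > best_acc:
            best, best_acc = ratios, acc
    return best, best_acc

print(random_search(budget=0.5 * sum(layer_flops)))  # ~2x FLOPs reduction
```

Even in this toy version, the best configurations prune the insensitive, FLOPs-heavy layers hardest, which is the kind of structure a learned policy can discover without hand-crafted rules.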
Dimensioning the power supply of a LTE macro BS connected to a PV panel and the power grid
The use of solar energy to power the base stations of cellular networks is becoming increasingly interesting, both in areas where the power grid is absent or unreliable and in areas where the grid is ubiquitous and reliable but energy costs keep growing. In this paper, we investigate the dimensioning of the photovoltaic panel and energy storage of a hybrid base station powering system that can exploit both solar and grid energy. The objective of the dimensioning is the minimization of the total capital and operational expenditures over a period of 10 years, accounting for the evolution of technology and traffic load. Results show that in a south European city like Torino, a hybrid base station powering system allows significant cost and size reductions with respect to the case of solar energy only (and of a diesel power generator), and roughly equals the cost of the grid-only case in 8-9 years. When the extra energy produced by the solar panel can be sold back to the grid, the hybrid systems allow significant savings with respect to the grid-only case. For the city of Aswan, where production is much higher than in Torino and more constant over the year, the costs of pure solar and hybrid systems are significantly lower in absolute terms; hybrid systems remain advantageous with respect to pure solar systems.
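As a rough illustration of the dimensioning problem described above, the toy Python grid search below minimizes capital expenditure plus 10 years of grid-energy cost over panel and battery sizes. All prices, load figures, and production values are invented placeholders, not the paper's data, and the single-day energy balance is a deliberate simplification of the year-long simulation the paper performs.

import itertools

def total_cost(panel_kw, battery_kwh, daily_load_kwh=30.0,
               sun_hours=4.0, panel_cost=1000.0, battery_cost=300.0,
               grid_price=0.20, years=10):
    """CAPEX plus `years` of grid-energy OPEX for one sizing choice."""
    capex = panel_kw * panel_cost + battery_kwh * battery_cost
    # Crude daily balance: storage caps how much solar is usable.
    usable_solar = min(panel_kw * sun_hours, daily_load_kwh + battery_kwh)
    grid_kwh = max(daily_load_kwh - usable_solar, 0.0)
    opex = grid_kwh * grid_price * 365 * years
    return capex + opex

sizes = itertools.product(range(0, 11), range(0, 21))  # kW x kWh grid search
best = min(sizes, key=lambda s: total_cost(*s))
print("best (panel kW, battery kWh):", best, "-> cost:", total_cost(*best))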
The Transition According to Cambridge, Mass.
This paper reviews the NBER's "The Transition in Eastern Europe." The book's 18 essays examine privatization, stabilization, fiscal policies, the nascent private sector, bankruptcy, foreign trade, and investment, among other topics. Individual country performance occupies six studies. These essays, which are of high quality, provide a cross-section of the literature on transition. However, the book's stronger conclusions are not supported by strong evidence. Two conclusions, left unemphasized by the contributors, emerge. Expectations, which reflect the theories used to design standard reforms, are often unfulfilled during reforms. A plausible explanation for the misplaced expectations is the ahistorical approach of those theories.
Assessment of Clinical Criteria for Sepsis: For the Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3).
IMPORTANCE The Third International Consensus Definitions Task Force defined sepsis as "life-threatening organ dysfunction due to a dysregulated host response to infection." The performance of clinical criteria for this sepsis definition is unknown. OBJECTIVE To evaluate the validity of clinical criteria to identify patients with suspected infection who are at risk of sepsis. DESIGN, SETTINGS, AND POPULATION Among 1.3 million electronic health record encounters from January 1, 2010, to December 31, 2012, at 12 hospitals in southwestern Pennsylvania, we identified those with suspected infection in whom to compare criteria. Confirmatory analyses were performed in 4 data sets of 706,399 out-of-hospital and hospital encounters at 165 US and non-US hospitals ranging from January 1, 2008, until December 31, 2013. EXPOSURES Sequential [Sepsis-related] Organ Failure Assessment (SOFA) score, systemic inflammatory response syndrome (SIRS) criteria, Logistic Organ Dysfunction System (LODS) score, and a new model derived using multivariable logistic regression in a split sample, the quick Sequential [Sepsis-related] Organ Failure Assessment (qSOFA) score (range, 0-3 points, with 1 point each for systolic hypotension [≤100 mm Hg], tachypnea [≥22/min], or altered mentation). MAIN OUTCOMES AND MEASURES For construct validity, pairwise agreement was assessed. For predictive validity, the discrimination for outcomes (primary: in-hospital mortality; secondary: in-hospital mortality or intensive care unit [ICU] length of stay ≥3 days) more common in sepsis than uncomplicated infection was determined. Results were expressed as the fold change in outcome over deciles of baseline risk of death and area under the receiver operating characteristic curve (AUROC). RESULTS In the primary cohort, 148,907 encounters had suspected infection (n = 74,453 derivation; n = 74,454 validation), of whom 6347 (4%) died. Among ICU encounters in the validation cohort (n = 7932 with suspected infection, of whom 1289 [16%] died), the predictive validity for in-hospital mortality was lower for SIRS (AUROC = 0.64; 95% CI, 0.62-0.66) and qSOFA (AUROC = 0.66; 95% CI, 0.64-0.68) vs SOFA (AUROC = 0.74; 95% CI, 0.73-0.76; P < .001 for both) or LODS (AUROC = 0.75; 95% CI, 0.73-0.76; P < .001 for both). Among non-ICU encounters in the validation cohort (n = 66,522 with suspected infection, of whom 1886 [3%] died), qSOFA had predictive validity (AUROC = 0.81; 95% CI, 0.80-0.82) that was greater than SOFA (AUROC = 0.79; 95% CI, 0.78-0.80; P < .001) and SIRS (AUROC = 0.76; 95% CI, 0.75-0.77; P < .001). Relative to qSOFA scores lower than 2, encounters with qSOFA scores of 2 or higher had a 3- to 14-fold increase in hospital mortality across baseline risk deciles. Findings were similar in external data sets and for the secondary outcome. CONCLUSIONS AND RELEVANCE Among ICU encounters with suspected infection, the predictive validity for in-hospital mortality of SOFA was not significantly different from the more complex LODS but was statistically greater than SIRS and qSOFA, supporting its use in clinical criteria for sepsis. Among encounters with suspected infection outside of the ICU, the predictive validity for in-hospital mortality of qSOFA was statistically greater than SOFA and SIRS, supporting its use as a prompt to consider possible sepsis.
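The qSOFA score defined in this abstract is simple enough to state directly in code. The Python sketch below is a literal transcription of the stated criteria (1 point each for systolic blood pressure ≤100 mm Hg, respiratory rate ≥22/min, and altered mentation), offered only as an illustration of the scoring rule, not as a clinical tool.

def qsofa(systolic_bp_mmhg: float, resp_rate_per_min: float,
          altered_mentation: bool) -> int:
    """qSOFA score (0-3), per the criteria quoted in the abstract."""
    score = 0
    if systolic_bp_mmhg <= 100:
        score += 1
    if resp_rate_per_min >= 22:
        score += 1
    if altered_mentation:
        score += 1
    return score

# A score of 2 or higher was associated with a 3- to 14-fold increase in
# in-hospital mortality across baseline risk deciles.
assert qsofa(95, 24, False) == 2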
Extracting and Ranking Product Features in Opinion Documents
An important task of opinion mining is to extract people’s opinions on features of an entity. For example, the sentence, “I love the GPS function of Motorola Droid” expresses a positive opinion on the “GPS function” of the Motorola phone. “GPS function” is the feature. This paper focuses on mining features. Double propagation is a state-of-the-art technique for solving the problem. It works well for medium-size corpora. However, for large and small corpora, it can result in low precision and low recall. To deal with these two problems, two improvements based on part-whole and “no” patterns are introduced to increase the recall. Then feature ranking is applied to the extracted feature candidates to improve the precision of the top-ranked candidates. We rank feature candidates by feature importance, which is determined by two factors: feature relevance and feature frequency. The problem is formulated as a bipartite graph and the well-known web page ranking algorithm HITS is used to find important features and rank them high. Experiments on diverse real-life datasets show promising results.
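A minimal Python sketch of the ranking step, assuming a bipartite graph between candidate features and the opinion words that modify them; the exact graph construction in the paper may differ, and the feature-opinion pairs below are toy data. Authority scores from networkx's HITS implementation are weighted by feature frequency, mirroring the relevance-plus-frequency idea.

import networkx as nx
from collections import Counter

# Toy (feature, opinion word) co-occurrence pairs extracted from reviews.
pairs = [("GPS function", "love"), ("screen", "great"),
         ("screen", "bright"), ("GPS function", "accurate")]

G = nx.DiGraph()
for feature, opinion in pairs:
    G.add_edge("op:" + opinion, "feat:" + feature)  # opinions point at features

hubs, authorities = nx.hits(G, max_iter=1000)
freq = Counter(f for f, _ in pairs)

# Importance = HITS authority (relevance) x frequency.
ranked = sorted(((authorities["feat:" + f] * freq[f], f) for f in freq),
                reverse=True)
for score, f in ranked:
    print(f, round(score, 4))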
Humanoid Vertical Jumping based on Force Feedback and Inertial Forces Optimization
This paper proposes adapting human jumping dynamics to humanoid robotic structures. Data obtained from the decomposition of human jumping into phases, together with ground reaction forces (GRF), are used as model references. Moreover, the bodies' inertial forces are used as task constraints while optimizing energy to determine the humanoid robot's posture and improve its jumping performance.
The effects of mobile phone use on pedestrian crossing behaviour at signalized and unsignalized intersections.
Research amongst drivers suggests that pedestrians using mobile telephones may behave riskily while crossing the road, and casual observation suggests concerning levels of pedestrian mobile phone use. An observational field survey of 270 females and 276 males was conducted to compare the safety of crossing behaviours for pedestrians using, versus not using, a mobile phone. Amongst females, pedestrians who crossed while talking on a mobile phone crossed more slowly, and were less likely to look at traffic before starting to cross, to wait for traffic to stop, or to look at traffic while crossing, compared to matched controls. For males, pedestrians who crossed while talking on a mobile phone crossed more slowly at unsignalized crossings. These effects suggest that talking on a mobile phone is associated with cognitive distraction that may undermine pedestrian safety. Messages explicitly suggesting techniques for avoiding mobile phone use while crossing the road may benefit pedestrian safety.
Ovarian cancer prediction in adnexal masses using ultrasound-based logistic regression models: a temporal and external validation study by the IOTA group.
OBJECTIVES The aims of the study were to temporally and externally validate the diagnostic performance of two logistic regression models containing clinical and ultrasound variables in order to estimate the risk of malignancy in adnexal masses, and to compare the results with the subjective interpretation of ultrasound findings carried out by an experienced ultrasound examiner ('subjective assessment'). METHODS Patients with adnexal masses, who were put forward by the 19 centers participating in the study, underwent a standardized transvaginal ultrasound examination by a gynecologist or a radiologist specialized in ultrasonography. The examiner prospectively collected information on clinical and ultrasound variables, and classified each mass as benign or malignant on the basis of subjective evaluation of ultrasound findings. The gold standard was the histology of the mass, with local clinicians deciding whether to operate on the basis of ultrasound results and the clinical picture. The models' ability to discriminate between malignant and benign masses was assessed, together with the accuracy of the risk estimates. RESULTS Of the 1938 patients included in the study, 1396 had benign, 373 had primary invasive, 111 had borderline malignant and 58 had metastatic tumors. On external validation (997 patients from 12 centers), the area under the receiver-operating characteristics curve (AUC) for a model containing 12 predictors (LR1) was 0.956, for a reduced model with six predictors (LR2) was 0.949 and for subjective assessment was 0.949. Subjective assessment gave a positive likelihood ratio of 11.0 and a negative likelihood ratio of 0.14. The corresponding likelihood ratios for a previously derived probability threshold (0.1) were 6.84 and 0.09 for LR1, and 6.36 and 0.10 for LR2. On temporal validation (941 patients from seven centers), the AUCs were 0.945 (LR1), 0.918 (LR2) and 0.959 (subjective assessment). CONCLUSIONS Both models provide excellent discrimination between benign and malignant masses. Because the models provide an objective and reasonably accurate risk estimation, they may improve the management of women with suspected ovarian pathology.
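Risk estimates from models such as LR1 and LR2 take the standard logistic form, thresholded here at the previously derived cutoff of 0.1; the coefficients and feature values in this small Python sketch are placeholders, not the published models.

import math

def malignancy_risk(features, coefs, intercept):
    """Logistic model: risk = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = intercept + sum(c * x for c, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical predictor values and coefficients, for illustration only.
risk = malignancy_risk([1.0, 0.0, 3.2], [0.8, -0.5, 0.3], -2.0)
print("malignant" if risk >= 0.1 else "benign", round(risk, 3))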
Learning with Random Learning Rates
In neural networks, the learning rate of the gradient descent strongly affects performance. This prevents reliable out-of-the-box training of a model on a new problem. We propose the All Learning Rates At Once (Alrao) algorithm: each unit or feature in the network gets its own learning rate sampled from a random distribution spanning several orders of magnitude, in the hope that enough units will get a close-to-optimal learning rate. Perhaps surprisingly, stochastic gradient descent (SGD) with Alrao performs close to SGD with an optimally tuned learning rate, for various network architectures and problems. In our experiments, all Alrao runs were able to learn well without any tuning.
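A minimal PyTorch sketch of the core Alrao idea for internal layers: each output unit receives its own learning rate, sampled once from a log-uniform distribution and then kept fixed during plain SGD updates. The sampling range below is an assumption, and the paper's additional model-averaging scheme for the output layer is omitted.

import math
import torch

def sample_unit_lrs(param, lr_min=1e-5, lr_max=1e1):
    """One log-uniform learning rate per output unit (first dimension)."""
    log_lr = torch.empty(param.shape[0]).uniform_(math.log(lr_min),
                                                  math.log(lr_max))
    return log_lr.exp().view(-1, *([1] * (param.dim() - 1)))

def alrao_sgd_step(model):
    """Plain SGD update, but with a fixed per-unit learning rate."""
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            if not hasattr(p, "_unit_lrs"):
                p._unit_lrs = sample_unit_lrs(p)  # sampled once, then kept
            p -= p._unit_lrs * p.grad

Because the rates span several orders of magnitude, some units are effectively frozen while others learn quickly; the hope, per the abstract, is that enough units receive a near-optimal rate.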
A polysomnographic study in young psychiatric inpatients: major depression, anorexia nervosa, bulimia nervosa.
The baseline EEG sleep patterns of 10 young depressed patients, 20 patients with anorexia nervosa, 10 patients with bulimia nervosa, and 10 healthy subjects were found to be indistinguishable, except for an increased REM density in the depressed patients. In eating disorder patients, a concomitant major depressive episode had no influence on EEG sleep. The results of the cholinergic REM sleep induction test revealed a significantly faster induction of REM sleep in the depressed patients when compared with the eating disorder patients and the control subjects. This indicates a subthreshold hypersensitivity of the REM sleep triggering cholinergic transmitter system in depressives, but not in eating disorder patients.
Flow of Renyi information in deep neural networks
We propose a rate-distortion-based deep neural network (DNN) training algorithm that uses a smooth matrix functional on the manifold of positive semi-definite matrices as a non-parametric entropy estimator. The objective in the optimization function includes not only the measure of performance at the output layer but also the measure of information distortion between consecutive layers, in order to produce a concise representation of the input at each layer. An experiment on speech emotion recognition shows that a DNN trained by this method reaches performance comparable to that of an encoder-decoder system.
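One concrete candidate for the estimator the abstract describes is the matrix-based Rényi entropy: a spectral functional of a trace-normalized Gram matrix of layer activations. The Python sketch below assumes a Gaussian kernel; the kernel width and the order alpha are illustrative choices, not necessarily the paper's.

import numpy as np

def matrix_renyi_entropy(X, sigma=1.0, alpha=2.0):
    """Matrix-based Renyi entropy of n activation vectors X (n x d)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))   # Gaussian Gram matrix (PSD)
    A = K / np.trace(K)                  # normalize to unit trace
    lam = np.clip(np.linalg.eigvalsh(A), 1e-12, None)
    return np.log2((lam ** alpha).sum()) / (1 - alpha)

print(matrix_renyi_entropy(np.random.randn(50, 8)))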
LegUp: An open-source high-level synthesis tool for FPGA-based processor/accelerator systems
It is generally accepted that a custom hardware implementation of a set of computations will provide superior speed and energy efficiency relative to a software implementation. However, the cost and difficulty of hardware design is often prohibitive, and consequently, a software approach is used for most applications. In this article, we introduce a new high-level synthesis tool called LegUp that allows software techniques to be used for hardware design. LegUp accepts a standard C program as input and automatically compiles the program to a hybrid architecture containing an FPGA-based MIPS soft processor and custom hardware accelerators that communicate through a standard bus interface. In the hybrid processor/accelerator architecture, program segments that are unsuitable for hardware implementation can execute in software on the processor. LegUp can synthesize most of the C language to hardware, including fixed-sized multidimensional arrays, structs, global variables, and pointer arithmetic. Results show that the tool produces hardware solutions of comparable quality to a commercial high-level synthesis tool. We also give results demonstrating the ability of the tool to explore the hardware/software codesign space by varying the amount of a program that runs in software versus hardware. LegUp, along with a set of benchmark C programs, is open source and freely downloadable, providing a powerful platform that can be leveraged for new research on a wide range of high-level synthesis topics.
CMOS-Compatible Vertical-Silicon-Nanowire Gate-All-Around p-Type Tunneling FETs With ≤50-mV/decade Subthreshold Swing
We present a vertical-silicon-nanowire-based p-type tunneling field-effect transistor (TFET) using a CMOS-compatible process flow. Following our recently reported n-TFET, a low-temperature dopant segregation technique was employed on the source side to achieve a steep dopant gradient, leading to excellent tunneling performance. The fabricated p-TFET devices demonstrate a subthreshold swing (SS) of 30 mV/decade averaged over a decade of drain current and an Ion/Ioff ratio of >10^5. Moreover, an SS of 50 mV/decade is maintained for three orders of drain current. This demonstration completes the complementary pair of TFETs to implement CMOS-like circuits.
Building Customer-Based Brand Equity : A Blueprint for Creating Strong Brands
Building a strong brand has been shown to provide numerous financial rewards to firms, and has become a top priority for many organizations. In this report, author Keller outlines the Customer-Based Brand Equity (CBBE) model to assist management in their brand-building efforts. According to the model, building a strong brand involves four steps: (1) establishing the proper brand identity, that is, establishing breadth and depth of brand awareness, (2) creating the appropriate brand meaning through strong, favorable, and unique brand associations, (3) eliciting positive, accessible brand responses, and (4) forging brand relationships with customers that are characterized by intense, active loyalty. Achieving these four steps, in turn, involves establishing six brand-building blocks: brand salience, brand performance, brand imagery, brand judgments, brand feelings, and brand resonance. The most valuable brand-building block, brand resonance, occurs when all the other brand-building blocks are established. With true brand resonance, customers express a high degree of loyalty to the brand such that they actively seek means to interact with the brand and share their experiences with others. Firms that are able to achieve brand resonance should reap a host of benefits, for example, greater price premiums and more efficient and effective marketing programs. The CBBE model provides a yardstick by …
An Empirical Investigation of Systematic Reviews in Software Engineering
BACKGROUND: Systematic Literature Reviews (SLRs) have gained significant popularity among software engineering (SE) researchers since 2004. Several researchers have also been working on improving the scientific and technological support for SLRs in SE. We argue that there is also an essential need for an evidence-based body of knowledge about different aspects of the adoption of SLRs in SE. OBJECTIVE: The main objective of this research is to empirically investigate the adoption and use of SLRs in SE research from various perspectives. METHOD: We used a multi-method approach, as it is based on a combination of complementary research methods that are expected to compensate for each other's limitations. RESULTS: A large majority of the participants are convinced of the value of using a rigorous and systematic methodology for literature reviews. However, there are concerns about the time and resources required for SLRs. One of the most important motivators for performing SLRs is new findings and the inception of innovative ideas for further research. The reported SLRs are more influential than traditional literature reviews in terms of number of citations. One of the main challenges of conducting SLRs is striking a balance between rigor and required effort. CONCLUSIONS: SLR has become a popular research methodology for conducting literature reviews and evidence aggregation in SE. There is an overall positive perception of this methodology. The findings provide interesting insights into different aspects of SLRs. We expect that the findings can provide valuable information to readers on what can be expected from conducting SLRs and the potential impact of such reviews.
GOSSIPY: A distributed localization system for Internet of Things using RFID technology
The popularity of smart objects in our daily life fosters a new generation of applications under the umbrella of the Internet of Things (IoT). Such applications are built on a distributed network of heterogeneous context-aware devices, where localization is a key issue. The localization problem is further magnified by IoT challenges such as scalability, mobility and the heterogeneity of objects. In existing localization systems using RFID technology, there is a lack of systems that localize mobile tags using heterogeneous mobile readers in a distributed manner. In this paper, we propose the GOSSIPY system for localizing mobile RFID tags using a group of ad hoc heterogeneous mobile RFID readers. The system depends on cooperation of mobile readers through time-constrained interleaving processes. Readers in a neighborhood share interrogation information, estimate tag locations accordingly and employ both proactive and reactive protocols to ensure timely dissemination of location information. We evaluate the proposed system and present its performance through extensive simulation experiments using ns-3.
Enterprise Risk Management Integrated framework for Cloud Computing
The emergence of cloud computing is a fundamental shift towards new on-demand business models, together with new implementation models for the applications portfolio, the infrastructure, and the data, as they are provisioned as virtual services using the cloud. These technological and commercial changes have an impact on current working practices. Businesses need to understand the impact of the new combinations of technology layers, and how they work together. A crucial part of this is analyzing and assessing the risks involved. In the evolution of computing technology, information processing has moved from mainframes to personal computers to server-centric computing to the Web. Today, many organizations are seriously considering adopting cloud computing, the next major milestone in technology and business collaboration. A supercharged version of delivering hosted services over the Internet, cloud computing potentially enables organizations to increase their business model capabilities and their ability to meet computing resource demands while avoiding significant investments in infrastructure, training, personnel, and software. In fall 2010, a Google executive testified before a U.S. congressional subcommittee that more than three million businesses worldwide were customers of its cloud service offerings. As with any new opportunity, cloud computing entails commensurate risks. It brings to organizations a different dimension of collaboration and human interaction, new organizational dependencies, faster resource fulfilment, and new business models.
Quantum Mechanical Metastability
We present a detailed discussion of some features of quantum mechanical metastability. We analyze the nature of decaying (quasistationary) states and the regime of validity of the exponential law, as well as decays at finite temperature. We resort to very simple systems and elementary techniques to emphasize subtleties that are sometimes overlooked.
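For reference, the standard survival-probability relations behind the exponential law discussed above, in textbook notation (not necessarily the authors' own): the survival amplitude of an initial state $|\psi\rangle$ is $A(t) = \langle\psi|e^{-iHt/\hbar}|\psi\rangle$, with survival probability $P(t) = |A(t)|^2$. The exponential law $P(t) \approx e^{-\Gamma t}$ holds only at intermediate times; at short times one instead has the quadratic (Zeno) behaviour $P(t) \approx 1 - (\Delta H)^2 t^2/\hbar^2$, and at very long times the decay crosses over to a power law.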