Exploring Models and Data for Image Question Answering
This work aims to address the problem of image-based question answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset is also presented.
Contrastive Summarization: An Experiment with Consumer Reviews
Contrastive summarization is the problem of jointly generating summaries for two entities in order to highlight their differences. In this paper we present an investigation into contrastive summarization through an implementation and evaluation of a contrastive opinion summarizer in the consumer reviews domain.
A convex optimization approach to smooth trajectories for motion planning with car-like robots
In the recent past, several sampling-based algorithms have been proposed to compute trajectories that are collision-free and dynamically feasible. However, the outputs of such algorithms are notoriously jagged. In this paper, by focusing on robots with car-like dynamics, we present a fast and simple heuristic algorithm, named the Convex Elastic Smoothing (CES) algorithm, for trajectory smoothing and speed optimization. The CES algorithm is inspired by earlier work on elastic band planning and iteratively performs shape and speed optimization. The key feature of the algorithm is that both optimization problems can be solved via convex programming, making CES particularly fast. A range of numerical experiments shows that the CES algorithm returns high-quality solutions within a few hundred milliseconds and hence appears amenable to a real-time implementation.
Cenozoic noncoaxial transtension along the western shoulder of the Ross Sea, Antarctica, and the emplacement of McMurdo dyke arrays
Field data on Cenozoic faults and the McMurdo dyke arrays in the Reeves Glacier–Mawson Glacier area, Victoria Land, Antarctica, provide support for noncoaxial transtensional tectonics along the N–S-trending western shoulder of the Ross Sea. Dyke injection within a crustal-scale right-lateral strike-slip shear zone is evidenced by magma-filled, tension-gash-like arrangements within some master fault zones, and by the left-stepping arrangements of dykes in the intrafault zones. The noncoaxiality of deformation is shown by the reactivation of many dyke walls as right-lateral strike-slip faults. This suggests an increase in the strike-slip component over time along the western shoulder of the Ross Sea. Our data support the relevance of transtensional to strike-slip tectonics for triggering melting and controlling the geometry and modes of magma emplacement.
A scalable approach for data-driven taxi ride-sharing simulation
As urban population grows, cities face many challenges related to transportation, resource consumption, and the environment. Ride sharing has been proposed as an effective approach to reduce traffic congestion, gasoline consumption, and pollution. Despite great promise, researchers and policy makers lack adequate tools to assess tradeoffs and benefits of various ride-sharing strategies. Existing approaches either make unrealistic modeling assumptions or do not scale to the sizes of existing data sets. In this paper, we propose a real-time, data-driven simulation framework that supports the efficient analysis of taxi ride sharing. By modeling taxis and trips as distinct entities, our framework is able to simulate a rich set of realistic scenarios. At the same time, by providing a comprehensive set of parameters, we are able to study the taxi ride-sharing problem from different angles, considering different stakeholders' interests and constraints. To address the computational complexity of the model, we describe a new optimization algorithm that is linear in the number of trips and makes use of an efficient indexing scheme, which combined with parallelization, makes our approach scalable. We evaluate our framework and algorithm using real data - 360 million trips taken by 13,000 taxis in New York City during 2011 and 2012. The results demonstrate that our framework is effective and can provide insights into strategies for implementing city-wide ride-sharing solutions. We describe the findings of the study as well as a performance analysis of the model.
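The abstract mentions an efficient indexing scheme but does not spell it out; below is a minimal sketch of one common choice, a uniform spatial grid for finding candidate taxis near a pickup point. The `GridIndex` class, its cell size, and the sample coordinates are illustrative assumptions, not the paper's actual data structure.

```python
import math
from collections import defaultdict

class GridIndex:
    """Toy uniform-grid index: bucket taxi positions by cell so that
    candidate taxis for a new trip can be found without a full scan."""

    def __init__(self, cell_size_deg=0.005):  # roughly 500 m at NYC latitude
        self.cell_size = cell_size_deg
        self.cells = defaultdict(list)

    def _key(self, lat, lon):
        return (int(math.floor(lat / self.cell_size)),
                int(math.floor(lon / self.cell_size)))

    def insert(self, taxi_id, lat, lon):
        self.cells[self._key(lat, lon)].append((taxi_id, lat, lon))

    def nearby(self, lat, lon, ring=1):
        """Return taxis in the cell containing (lat, lon) and its neighbours."""
        ci, cj = self._key(lat, lon)
        out = []
        for di in range(-ring, ring + 1):
            for dj in range(-ring, ring + 1):
                out.extend(self.cells.get((ci + di, cj + dj), []))
        return out

# usage: rebuild/update the index once per simulation step, query once per trip request
index = GridIndex()
index.insert("taxi_42", 40.7580, -73.9855)
print(index.nearby(40.7585, -73.9850))
```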
Processes of disengagement and engagement in assertive outreach patients: qualitative study.
BACKGROUND Assertive outreach has been established to care for 'difficult to engage' patients, yet little is known about how patients experience their disengagement from mainstream services and later engagement with outreach teams. AIMS To explore the views of disengagement and engagement held by patients of assertive outreach teams. METHOD In-depth interviews were conducted with 40 purposefully selected patients and analysed using components of both thematic analysis and grounded theory. RESULTS Patients reported a desire to be independent, a poor therapeutic relationship and a loss of control due to medication effects as most important for disengagement. Time and commitment of staff, social support and engagement without a focus on medication, and a partnership model of the therapeutic relationship were most relevant for engagement. CONCLUSIONS The findings underline the importance of a comprehensive care model, committed staff with sufficient time, and a focus on relationship issues in dealing with 'difficult to engage' patients.
Organizational impact of system quality, information quality, and service quality
Increased organizational dependence on information systems drives management attention towards improving information systems' quality. A recent survey shows that "Improve IT quality" is one of the top concerns facing IT executives. As IT quality is a multidimensional measure, it is important to determine what aspects of IT quality are critical to organizations to help Chief Information Officers (CIOs) devise effective IT quality improvement strategies. In this research, we model the relationship between information systems' (IS) quality and organizational impact. We hypothesize greater organizational impact in situations in which system quality, information quality and service quality are high. We also hypothesize a positive relationship between system quality and information quality. We test our hypotheses using survey data. Our structural equation model exhibits a good fit with the observed data. Our results show that IS service quality is the most influential variable in this model (followed by information quality and system quality), thus highlighting the importance of IS service quality for organizational performance. This paper contributes theoretically to IS success models through the system quality-to-information quality and IS quality-to-organizational impact links. Implications of our results for practice and research are discussed.
Design of Linear Equalizers Optimized for the Structural Similarity Index
We propose an algorithm for designing linear equalizers that maximize the structural similarity (SSIM) index between the reference and restored signals. The SSIM index has enjoyed considerable application in the evaluation of image processing algorithms. Algorithms, however, have not yet been designed to explicitly optimize for this measure. The design of such an algorithm is nontrivial due to the nonconvex nature of the distortion measure. In this paper, we reformulate the nonconvex problem as a quasi-convex optimization problem, which admits a tractable solution. We compute the optimal solution in near closed form, with the complexity of the resulting algorithm comparable to that of the linear minimum mean squared error (MMSE) solution, independent of the number of filter taps. To demonstrate the usefulness of the proposed algorithm, it is applied to restore images that have been blurred and corrupted with additive white Gaussian noise. As a special case, we consider blur-free image denoising. In each case, its performance is compared to a locally adaptive linear MSE-optimal filter. We show that the images denoised and restored using the SSIM-optimal filter have a higher SSIM index and superior perceptual quality compared with those restored using the MSE-optimal adaptive linear filter. Through these results, we demonstrate that a) designing image processing algorithms, and, in particular, denoising and restoration-type algorithms, to optimize for perceptual distortion measures can yield significant gains over existing (in particular, linear MMSE-based) algorithms, and b) these gains may be obtained without a significant increase in the computational complexity of the algorithm.
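The SSIM index being optimized has a standard closed form; the following sketch computes a single-window (global-statistics) SSIM in Python/NumPy, assuming the usual constants K1 = 0.01 and K2 = 0.03. The published index is computed over local windows and averaged; this function is only a simplified illustration, not the equalizer design itself.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two equally sized signals/images.
    The standard index averages this statistic over local windows;
    this sketch uses global means and (co)variances for brevity."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return num / den

# identical signals score 1.0; heavier distortion drives the score toward 0
ref = np.random.randint(0, 256, (64, 64))
noisy = np.clip(ref + np.random.normal(0, 20, ref.shape), 0, 255)
print(ssim_global(ref, ref), ssim_global(ref, noisy))
```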
Improved competitive learning neural networks for network intrusion and fraud detection
In this research, we propose two new clustering algorithms, the improved competitive learning network (ICLN) and the supervised improved competitive learning network (SICLN), for fraud detection and network intrusion detection. The ICLN is an unsupervised clustering algorithm, which applies new rules to the standard competitive learning neural network (SCLN). The network neurons in the ICLN are trained to represent the center of the data by a new reward-punishment update rule. This new update rule overcomes the instability of the SCLN. The SICLN is a supervised version of the ICLN. In the SICLN, the new supervised update rule uses the data labels to guide the training process to achieve a better clustering result. The SICLN can be applied to both labeled and unlabeled data and is highly tolerant to missing or delayed labels. Furthermore, the SICLN is capable of reconstructing itself and is thus completely independent of the initial number of clusters. To assess the proposed algorithms, we have performed experimental comparisons on both research data and real-world data in fraud detection and network intrusion detection. The results demonstrate that both the ICLN and the SICLN achieve high performance, and the SICLN outperforms traditional unsupervised clustering algorithms.
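To make the reward-punishment idea concrete, here is a toy competitive-learning update in Python: the winning neuron is pulled toward the sample (reward) while the runner-up is pushed slightly away (punishment). The function name, learning rates, and the exact punishment step are illustrative assumptions; the actual ICLN update rule is the one defined in the paper.

```python
import numpy as np

def icln_style_update(centers, x, reward_lr=0.05, punish_lr=0.01):
    """Move the winning neuron toward the sample (reward) and push the
    runner-up slightly away (punishment). Illustrative only; the real
    ICLN rule is specified in the paper."""
    d = np.linalg.norm(centers - x, axis=1)
    order = np.argsort(d)
    winner, runner_up = order[0], order[1]
    centers[winner] += reward_lr * (x - centers[winner])
    centers[runner_up] -= punish_lr * (x - centers[runner_up])
    return centers

rng = np.random.default_rng(0)
centers = rng.normal(size=(4, 2))       # 4 neurons in a 2-D feature space
for x in rng.normal(size=(500, 2)):     # stream of unlabeled samples
    centers = icln_style_update(centers, x)
print(centers)
```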
Molecular modeling of lipid drug formulations
Lipid formulations can improve the bioavailability of drugs that have low aqueous solubility. A variety of chemical compounds, including triglyceride oils (lipids), fatty acid esters and surfactants, can be included in lipid formulations. This heterogeneity makes spectroscopic study of the internal structure of a formulation difficult. Understanding of lipid formulations at a molecular level will greatly improve our knowledge of in vivo dispersion and solubilisation patterns of lipid formulations. Molecular dynamics studies have provided useful insight into the structure and dynamics of different types of aggregates, including mixed glycerides with and without propylene glycol [1] and bile salts [2]. To date, such studies have not been performed on lipid drug formulations. The objective of this research is to develop a molecular dynamics protocol to examine the interaction between drugs and formulations at the atomic level. To evaluate and parameterize the force field of choice we are calculating the Gibbs free energy of solvation of a number of alcohols and short poly-(ethylene glycol) polymers. Following this, the aggregation behaviour of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC), sodium glycochenodeoxycholate (GDX), different digestion products and polyethylene glycol surfactants will be investigated. Moreover, the phase diagrams of three-component systems composed of i) bile salts, digested products and water and ii) surfactants, lipids and water will be modelled. Simulations are being performed using the molecular dynamics software suite GROMACS. Calculations are being performed on a high performance computing cluster at the Victorian Life Sciences Computation Initiative (VLSCI). The methods highlighted in this study will prove to be an essential tool for formulators of lipid systems for oral administration.
CAMPS: a constraint-based architecture for multiagent planning and scheduling
A new integrated architecture for distributed planning and scheduling is proposed that exploits constraints for problem decomposition and coordination. The goal is to develop an efficient method to solve densely constrained planning/scheduling problems in a distributed manner without sacrificing solution quality. A prototype system (CAMPS) was implemented, in which a set of intelligent agents try to coordinate their actions for 'satisfying' planning/scheduling results by handling several intra- and inter-agent constraints. The repair-based methodology for distributed planning/scheduling is described, together with the constraint-based mechanism of dynamic coalition formation among agents.
A Data mining Model for predicting the Coronary Heart Disease using Random Forest Classifier
Coronary Heart Disease (CHD) is a common form of disease affecting the heart and an important cause of premature death. From the point of view of medical sciences, data mining is involved in discovering various sorts of metabolic syndromes. Classification techniques in data mining play a significant role in prediction and data exploration. Classification techniques such as Decision Trees have been used in predicting the accuracy of and events related to CHD. In this paper, a data mining model has been developed using a Random Forest classifier to improve the prediction accuracy and to investigate various events related to CHD. This model can help medical practitioners in predicting CHD along with its various events and how it might be related to different segments of the population. The events investigated are Angina, Acute Myocardial Infarction (AMI), Percutaneous Coronary Intervention (PCI), and Coronary Artery Bypass Graft surgery (CABG). Experimental results have shown that classification using the Random Forest classifier improves prediction accuracy for CHD and its associated events.
Robust Stochastic Approximation Approach to Stochastic Programming
In this paper we consider optimization problems where the objective function is given in a form of the expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely, the stochastic approximation (SA) and the sample average approximation (SAA) methods. Both approaches, the SA and SAA methods, have a long history. Current opinion is that the SAA method can efficiently use a specific (say, linear) structure of the considered problem, while the SA approach is a crude subgradient method, which often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive and even significantly outperform the SAA method for a certain class of convex stochastic problems. We extend the analysis to the case of convex-concave stochastic saddle point problems and present (in our opinion highly encouraging) results of numerical experiments.
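For intuition, here is a minimal sketch of the robust SA recipe (projected stochastic subgradient steps with a fixed step size and iterate averaging) on a toy quadratic problem; the step size, ball radius, and test objective are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def robust_sa(grad_sample, x0, radius, n_iters, gamma):
    """Projected stochastic (sub)gradient steps with a fixed step size,
    returning the average iterate -- the hallmark of robust SA."""
    x = np.array(x0, dtype=float)
    x_bar = np.zeros_like(x)
    for t in range(n_iters):
        g = grad_sample(x)                 # noisy (sub)gradient at x
        x = x - gamma * g
        norm = np.linalg.norm(x)
        if norm > radius:                  # project back onto the feasible ball
            x *= radius / norm
        x_bar += (x - x_bar) / (t + 1)     # running average of iterates
    return x_bar

# toy problem: minimize E||x - x*||^2 observed only through noisy gradients
rng = np.random.default_rng(1)
x_star = np.array([0.5, -0.3])
grad = lambda x: 2 * (x - x_star) + rng.normal(scale=1.0, size=2)
print(robust_sa(grad, x0=[0.0, 0.0], radius=5.0, n_iters=20000, gamma=0.01))
```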
Relationship of contraction capacity to metabolic changes during recovery from a fatiguing contraction.
The relationship between changes in muscle metabolites and the contraction capacity was investigated in humans. Subjects (n = 13) contracted (knee extension) at a target force of 66% of the maximal voluntary contraction force (MVC) to fatigue, and the recovery in MVC and endurance (time to fatigue) were measured. Force recovered rapidly [half-time (t1/2) less than 15 s] and after 2 min of recovery was not significantly different (P greater than 0.05) from the precontraction value. Endurance recovered more slowly (t1/2 approximately 1.2 min) and was still significantly depressed after 2 and 4 min of recovery (P less than 0.05). In separate experiments (n = 10) muscle biopsy specimens were taken from the quadriceps femoris muscle before and after two successive contractions to fatigue at 66% of MVC with a recovery period of 2 or 4 min in between. The muscle content of high-energy phosphates and lactate was similar at fatigue after both contractions, whereas glucose 6-phosphate was lower after the second contraction (P less than 0.05). During recovery, muscle lactate decreased and was 74 and 43% of the value at fatigue after an elapsed period of 2 and 4 min, respectively. The decline in H+ due to lactate disappearance is balanced, however, by a release of H+ due to resynthesis of phosphocreatine, and after 2 min of recovery calculated muscle pH was found to remain at a low level similar to that at fatigue.(ABSTRACT TRUNCATED AT 250 WORDS)
Particle Swarm Optimization Trained Class Association Rule Mining: Application to Phishing Detection
Association and classification are two important tasks in data mining. Literature abounds with works that unify these two techniques. This paper presents a new algorithm called Particle Swarm Optimization trained Classification Association Rule Mining (PSOCARM) for associative classification that generates class association rules (CARs) from a transactional database by formulating a combinatorial global optimization problem, without having to specify minimum support and confidence, unlike other conventional associative classifiers. We devised a new rule pruning scheme in order to reduce the number of rules and increase the generalization ability of the classifier. We demonstrated its effectiveness for phishing email and phishing website detection. Our experimental results indicate the superiority of our proposed algorithm with respect to accuracy and the number of rules generated as compared to the state-of-the-art algorithms.
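As a rough illustration of the optimization engine, here is a generic particle swarm loop in Python; in PSOCARM each particle would encode a candidate class association rule, whereas the fitness below is just a stand-in placeholder and the hyperparameters are conventional defaults rather than the paper's settings.

```python
import numpy as np

def pso(fitness, dim, n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical particle swarm optimization (maximization).
    `fitness` is any callable over a real-valued vector."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([fitness(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, pbest_val.max()

# placeholder fitness with a peak at the all-0.5 vector
print(pso(lambda p: -np.sum((p - 0.5) ** 2), dim=5))
```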
Body weight dynamics and their association with physical function and mortality in older adults: the Cardiovascular Health Study.
BACKGROUND To estimate the associations of weight dynamics with physical functioning and mortality in older adults. METHODS Longitudinal cohort study using prospectively collected data on weight, physical function, and health status in four U.S. communities in the Cardiovascular Health Study. Included were 3,278 participants (2,013 women and 541 African Americans), aged 65 or older at enrollment, who had at least five weight measurements. Weight was measured at annual clinic visits between 1992 and 1999, and summary measures of mean weight, coefficient of variation, average annual weight change, and episodes of loss and gain (cycling) were calculated. Participants were followed from 1999 to 2006 for activities of daily living (ADL) difficulty, incident mobility limitations, and mortality. RESULTS Higher mean weight, weight variability, and weight cycling increased the risk of new onset of ADL difficulties and mobility limitations. After adjustment for risk factors, the hazard ratio (95% confidence interval) for weight cycling for incident ADL impairment was 1.28 (1.12, 1.47), similar to that for several comorbidities in our model, including cancer and diabetes. Lower weight, weight loss, higher variability, and weight cycling were all risk factors for mortality, after adjustment for demographic risk factors, height, self-reported health status, and comorbidities. CONCLUSIONS Variations in weight are important indicators of future physical limitations and mortality in the elderly and may reflect difficulties in maintaining homeostasis throughout older ages. Monitoring the weight of an older person for fluctuations or episodes of both loss and gain is an important aspect of geriatric care.
Impact of therapeutic strategies on the prognosis of candidemia in the ICU.
OBJECTIVES To determine the epidemiology of Candida bloodstream infections, variables influencing mortality, and antifungal resistance rates in ICUs in Spain. DESIGN Prospective, observational, multicenter population-based study. SETTING Medical and surgical ICUs in 29 hospitals distributed throughout five metropolitan areas of Spain. PATIENTS Adult patients (≥ 18 yr) with an episode of Candida bloodstream infection during admission to any surveillance area ICU from May 2010 to April 2011. INTERVENTIONS Candida isolates were sent to a reference laboratory for species identification by DNA sequencing and susceptibility testing using the methods and breakpoint criteria promulgated by the European Committee on Antimicrobial Susceptibility Testing. Prognostic factors associated with early (0-7 d) and late (8-30 d) mortality were analyzed using logistic regression modeling. MEASUREMENTS AND MAIN RESULTS We detected 773 cases of candidemia, 752 of which were included in the overall cohort. Among these, 168 (22.3%) occurred in adult ICU patients. The rank order of Candida isolates was as follows: Candida albicans (52%), Candida parapsilosis (23.7%), Candida glabrata (12.7%), Candida tropicalis (5.8%), Candida krusei (4%), and others (1.8%). Overall susceptibility to fluconazole was 79.2%. Cumulative mortality at 7 and 30 days after the first episode of candidemia was 16.5% and 47%, respectively. Multivariate analysis showed that early appropriate antifungal treatment and catheter removal (odds ratio, 0.27; 95% CI, 0.08-0.91), Acute Physiology and Chronic Health Evaluation II score (odds ratio, 1.11; 95% CI, 1.04-1.19), and abdominal source (odds ratio, 8.15; 95% CI, 1.75-37.93) were independently associated with early mortality. Determinants of late mortality were age (odds ratio, 1.04; 95% CI, 1.01-1.07), intubation (odds ratio, 7.24; 95% CI, 2.24-23.40), renal replacement therapy (odds ratio, 6.12; 95% CI, 2.24-16.73), and primary source (odds ratio, 2.51; 95% CI, 1.06-5.95). CONCLUSIONS Candidemia in ICU patients is caused by non-albicans species in 48% of cases, C. parapsilosis being the most common among these. Overall mortality remains high and is mainly related to host factors. Prompt adequate antifungal treatment and catheter removal could be critical to decrease early mortality.
A Clandestine Notebook (1678-1679) on Spinoza, Beverland, Politics, the Bible and Sex
In the years 1678-1679 an Utrecht freethinker scribbled daring remarks in an unsightly jotter. His interests included sex, politics, religion and philosophy. It takes only a quick glance to see that he felt drawn towards all things radical - Spinoza figures prominently in his notebook, but even more so Adrianus Beverland, a notorious libertine known for his eroticism. Our author - as yet unidentified - was well-informed about political affairs, local, national and international. He appears to have been connected with the well-to-do and the well-educated in Utrecht and beyond. His jottings broached any subject, as long as it was novel, exciting or juicy. Much seems to have been taken down spontaneously, as gossip or alehouse bravado. The notebook, now kept in Utrecht University Library (ms. 1284) and published here for the first time, offers a rare insight into the uncensored fascinations of a member of the Dutch elite in a period in which society and ideas underwent drastic change.
The Information Bus - An Architecture for Extensible Distributed Systems
Research can rarely be performed on large-scale, distributed systems at the level of thousands of workstations. In this paper, we describe the motivating constraints, design principles, and architecture for an extensible, distributed system operating in such an environment. The constraints include continuous operation, dynamic system evolution, and integration with extant systems. The Information Bus, our solution, is a novel synthesis of four design principles: core communication protocols have minimal semantics, objects are self-describing, types can be dynamically defined, and communication is anonymous. The current implementation provides both flexibility and high performance, and has been proven in several commercial environments, including integrated circuit fabrication plants and brokerage/trading floors.
Corporate residence fraud detection
With the globalisation of the world's economies and ever-evolving financial structures, fraud has become one of the main dissipaters of government wealth and perhaps even a major contributor to the slowing down of economies in general. Although corporate residence fraud is known to be a major factor, data availability and high sensitivity have caused this domain to remain largely untouched by academia. The current Belgian government has pledged to tackle this issue at large by using a variety of in-house approaches and cooperation with institutions such as academia, the ultimate goal being a fair and efficient taxation system. This is the first data mining application specifically aimed at finding corporate residence fraud, and we show the predictive value of using both structured and fine-grained invoicing data. We further describe the problems involved in building such a fraud detection system, which are mainly data-related (e.g. data asymmetry, quality, volume, variety and velocity) and deployment-related (e.g. the need for explanations of the predictions made).
Purely Sequence-Trained Neural Networks for ASR Based on Lattice-Free MMI
In this paper we describe a method to perform sequence-discriminative training of neural network acoustic models without the need for frame-level cross-entropy pre-training. We use the lattice-free version of the maximum mutual information (MMI) criterion: LF-MMI. To make its computation feasible we use a phone n-gram language model in place of the word language model. To further reduce its space and time complexity we compute the objective function using neural network outputs at one third the standard frame rate. These changes enable us to perform the computation for the forward-backward algorithm on GPUs. Further, the reduced output frame rate also provides a significant speed-up during decoding. We present results on 5 different LVCSR tasks with training data ranging from 100 to 2100 hours. Models trained with LF-MMI provide a relative word error rate reduction of ∼11.5% over those trained with the cross-entropy objective function, and ∼8% over those trained with cross-entropy and sMBR objective functions. A further reduction of ∼2.5%, relative, can be obtained by fine-tuning these models with the word-lattice-based sMBR objective function.
Additive transcriptomic variation associated with reproductive traits suggest local adaptation in a recently settled population of the Pacific oyster, Crassostrea gigas
Background: Originating from Northeast Asia, the Pacific oyster Crassostrea gigas has been introduced into a large number of countries for aquaculture purposes. Following introduction, the Pacific oyster has turned into an invasive species in an increasing number of coastal areas, notably recently in Northern Europe. Methods: To explore potential adaptation of reproductive traits in populations with different histories, we set up a common garden experiment based on the comparison of progenies from two populations of Pacific oyster sampled in France and Denmark and their hybrids. Sex ratio, condition index and microarray gene expression in gonads were analyzed in each progeny (n = 60). Results: A female-biased sex ratio and a higher condition index were observed in the Danish progeny, possibly reflecting an evolutionary reproductive strategy to increase the potential success of natural recruitment in a recently settled population. Using multifarious statistical approaches and accounting for sex differences, we identified several transcripts differentially expressed between the Danish and French progenies, for which an additive genetic basis is suspected (showing intermediate expression levels in hybrids, and therefore additivity). Candidate transcripts included mRNAs coding for sperm quality and insulin metabolism, known to be implicated in the coordinated control and success of reproduction. Conclusions: The observed differences suggest that adaptation of invasive populations might have occurred during expansion, acting on reproductive traits, and in particular on a female-biased sex ratio, gamete quality and fertility.
Trends and Challenges in CMOS Design for Emerging 60 GHz WPAN Applications
The extensive growth of the wireless communications industry is creating a big market opportunity. Wireless operators are currently searching for new solutions that could be implemented into existing wireless communication networks to provide broader bandwidth, better quality and new value-added services. In the last decade, most commercial efforts were focused on the 1-10 GHz spectrum for voice and data applications for mobile phones and portable computers (Niknejad & Hashemi, 2008). Nowadays, interest is growing in applications that use high-rate wireless communications. Multigigabit-per-second communication requires a very large bandwidth. The Ultra-Wide Band (UWB) technology was initially used for this purpose. However, this technology has some shortcomings, including problems with interference and a limited data rate. Furthermore, the 3–5 GHz spectrum is relatively crowded, with many interferers appearing in the WiFi bands (Niknejad & Hashemi, 2008). The use of the millimeter wave frequency band is considered the most promising technology for broadband wireless. In 2001, the Federal Communications Commission (FCC) released a set of rules governing the use of spectrum between 57 and 66 GHz (Baldwin, 2007). Hence, a large bandwidth coupled with high allowable transmit power enables high data rates. Traditionally, the implementation of 60 GHz radio technology required expensive technologies based on III-V compound semiconductors such as InP and GaAs (Smulders et al., 2007). The rapid progress of CMOS technology has enabled its application to millimeter wave applications. Transistors have now become small enough, and consequently fast enough, for this purpose. As a result, CMOS technology has become one of the most attractive choices for implementing 60 GHz radio due to its low cost and high level of integration (Doan et al., 2005). Despite the advantages of CMOS technology, the design of a 60 GHz CMOS transceiver exhibits several challenges and difficulties that designers must overcome. This chapter aims to explore the potential of the 60 GHz band for use in emerging-generation multi-gigabit wireless applications. The chapter presents a quick overview of the state-of-the-art of 60 GHz radio technology and its potential to provide high data rate and short range wireless communications. The chapter is organized as follows. Section 2 presents an overview of the 60 GHz band. The advantages are presented to highlight the performance characteristics of this band. The opportunities of the physical layer of the IEEE
SUN1 Regulates HIV-1 Nuclear Import in a Manner Dependent on the Interaction between the Viral Capsid and Cellular Cyclophilin A.
Human immunodeficiency virus type 1 (HIV-1) can infect nondividing cells via passing through the nuclear pore complex. The nuclear membrane-imbedded protein SUN2 was recently reported to be involved in the nuclear import of HIV-1. Whether SUN1, which shares many functional similarities with SUN2, is involved in this process remained to be explored. Here we report that overexpression of SUN1 specifically inhibited infection by HIV-1 but not that by simian immunodeficiency virus (SIV) or murine leukemia virus (MLV). Overexpression of SUN1 did not affect reverse transcription but led to reduced accumulation of the 2-long-terminal-repeat (2-LTR) circular DNA and integrated viral DNA, suggesting a block in the process of nuclear import. HIV-1 CA was mapped as a determinant for viral sensitivity to SUN1. Treatment of SUN1-expressing cells with cyclosporine (CsA) significantly reduced the sensitivity of the virus to SUN1, and an HIV-1 mutant containing CA-G89A, which does not interact with cyclophilin A (CypA), was resistant to SUN1 overexpression. Downregulation of endogenous SUN1 inhibited the nuclear entry of the wild-type virus but not that of the G89A mutant. These results indicate that SUN1 participates in the HIV-1 nuclear entry process in a manner dependent on the interaction of CA with CypA. IMPORTANCE HIV-1 infects both dividing and nondividing cells. The viral preintegration complex (PIC) can enter the nucleus through the nuclear pore complex. It has been well known that the viral protein CA plays an important role in determining the pathways by which the PIC enters the nucleus. In addition, the interaction between CA and the cellular protein CypA has been reported to be important in the selection of nuclear entry pathways, though the underlying mechanisms are not very clear. Here we show that both SUN1 overexpression and downregulation inhibited HIV-1 nuclear entry. CA played an important role in determining the sensitivity of the virus to SUN1: the regulatory activity of SUN1 toward HIV-1 relied on the interaction between CA and CypA. These results help to explain how SUN1 is involved in the HIV-1 nuclear entry process.
The 2nd Workshop on Arabic Corpora and Processing Tools 2016 Theme:
Speech datasets and corpora are crucial for both developing and evaluating accurate Natural Language Processing systems. While Modern Standard Arabic has received more attention, dialects are drastically under-studied, even though they are the most used in daily life and, recently, on social media. In this paper, we present the methodology for building an Arabic Speech Corpus for Algerian dialects, and the preliminary version of that dataset of dialectal Arabic speech uttered by Algerian native speakers selected from different departments of Algeria. In fact, by means of a direct recording approach, we have taken into account numerous aspects that foster the richness of the corpus and that provide a representation of the phonetic, prosodic and orthographic varieties of Algerian dialects. Among these considerations, we have designed rich speech topics and content. The annotations provided include useful information related to the speakers and time-aligned orthographic word transcription. Many potential uses can be considered, such as speaker/dialect identification and computational linguistics for Algerian sub-dialects. In its preliminary version, our corpus encompasses 17 sub-dialects with 109 speakers and more than 6K utterances.
Glenoid morphology affects the incidence of radiolucent lines around cemented pegged polyethylene glenoid components
Radiolucent lines (RLL) are frequent findings around cemented all-polyethylene glenoid implants. The present study evaluates the frequency, extent and clinical impact of RLL around a cemented two-pegged glenoid implant, with special focus on the influence of preoperative glenoid morphology. Our hypothesis was that glenoid morphology does not affect clinical outcome and RLL in the investigated setting. Between 2003 and 2008, a total of 113 cases of total shoulder arthroplasty (Affinis, Mathys Ltd Bettlach, Switzerland) were performed in three surgical centres using a pegged cemented polyethylene glenoid component. A total of 90 cases could be evaluated clinically and radiographically. Clinical outcome was analysed using the Constant score (CS) and range of motion assessment. Radiographic evaluation was performed in true anterior–posterior and axial views with special focus on loosening and RLL. Further, preoperative glenoid morphology was documented and its correlation with radiolucent lines and clinical outcomes was evaluated. At a mean follow-up of 58.8 months (range 31.2–92.5), the CS improved from 21.5 points preoperatively to 62.3 points postoperatively. Radiolucent lines were found in 76.6 % of cases. If present, RLL were located at the backside of the implant in the majority of cases (74.4 %), not around the pegs (10 %). There was no significant correlation between RLL and clinical outcome or follow-up time. The amount and extent of RLL were correlated with glenoid morphology, with significantly higher values for glenoid types B2 and C according to Walch in comparison to glenoid types A1, A2 and B1. RLL did not affect clinical outcome and did not correlate with the follow-up time. Patients with glenoid morphology types B2 and C showed significantly worse radiographic results. Level IV case series study.
Network-Aware Operator Placement for Stream-Processing Systems
To use their pool of resources efficiently, distributed stream-processing systems push query operators to nodes within the network. Currently, these operators, ranging from simple filters to custom business logic, are placed manually at intermediate nodes along the transmission path to meet application-specific performance goals. Determining placement locations is challenging because network and node conditions change over time and because streams may interact with each other, opening venues for reuse and repositioning of operators. This paper describes a stream-based overlay network (SBON), a layer between a stream-processing system and the physical network that manages operator placement for stream-processing systems. Our design is based on a cost space, an abstract representation of the network and on-going streams, which permits decentralized, large-scale multi-query optimization decisions. We present an evaluation of the SBON approach through simulation, experiments on PlanetLab, and an integration with Borealis, an existing stream-processing engine. Our results show that an SBON consistently improves network utilization, provides low stream latency, and enables dynamic optimization at low engineering cost.
Nonce Generation For The Digital Signature Standard
Digital Signature Algorithm (DSA) is an underlying algorithm to form a signature in the Digital Signature Standard (DSS). DSA uses a new random number (or nonce) each time a signature is generated for a message. In this paper, we present a Linear Congruential Generator (LCG) based approach to generate nonce for DSS. LCG has been shown to be insecure for nonce generation. If two message-signature pairs are known along with the parameters of the LCG used to generate the nonce then the private key in the signature scheme can be found, with high probability, by solving three congruences over different moduli. We use a comparison of the output of two LCGs to generate the nonces and show that our approach is secure. We also show that coupled multiple recursive generators which are similar to LCGs are also safe for nonce generation. Congruences can no longer be set up to solve for the private key. The advantage of LCG based schemes for pseudo-random number generation is their efficiency.
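A minimal sketch of the flavor of construction described: two LCG streams compared bit by bit to assemble a nonce. The generator parameters and bit length below are toy values for illustration only, not a vetted or recommended instantiation for real DSA signing.

```python
def lcg(seed, a, c, m):
    """Standard linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def comparison_nonce(bits, gen1, gen2):
    """Illustrative nonce built bit by bit by comparing two LCG streams,
    in the spirit of the scheme studied in the paper (toy parameters)."""
    k = 0
    for _ in range(bits):
        k = (k << 1) | (1 if next(gen1) > next(gen2) else 0)
    return k

# two independently seeded LCG streams over the same modulus
g1 = lcg(seed=123456789, a=1103515245, c=12345, m=2**31)
g2 = lcg(seed=987654321, a=134775813, c=1, m=2**31)
k = comparison_nonce(160, g1, g2)   # for DSA, k would be reduced mod q and checked != 0
print(hex(k))
```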
Electromagnetic simulation of 3D stacked ICs: Full model vs. S-parameter cascaded based model
Three-dimensional electromagnetic simulation models are often simplified and/or segmented in order to reduce the simulation time and memory requirements without sacrificing the accuracy of the results. This paper investigates the difference between a full model and an S-parameter cascade-based model of 3D stacked ICs in the presence of Through Silicon Vias. It is found that simulation of the full model is required for accurate results; however, a divide-and-conquer (segmentation) approach can be used for preliminary post-layout analysis. Modeling guidelines are discussed and details on the proper choice of ports, boundary conditions, and solver technology are highlighted. A de-embedding methodology is finally explored to improve the accuracy of the cascaded/segmented results.
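The cascaded/segmented model amounts to multiplying per-segment two-port matrices. The sketch below converts S-parameters to ABCD matrices, cascades them, and converts back, at a single frequency and with an assumed 50-ohm reference, using the standard two-port conversion tables. Real 3D-IC segments are multiport and frequency-swept, so this is only a conceptual illustration.

```python
import numpy as np

Z0 = 50.0  # assumed reference impedance

def s_to_abcd(S):
    """Convert a 2x2 S-matrix to an ABCD matrix (standard two-port tables)."""
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    A = ((1 + S11) * (1 - S22) + S12 * S21) / (2 * S21)
    B = Z0 * ((1 + S11) * (1 + S22) - S12 * S21) / (2 * S21)
    C = ((1 - S11) * (1 - S22) - S12 * S21) / (2 * S21) / Z0
    D = ((1 - S11) * (1 + S22) + S12 * S21) / (2 * S21)
    return np.array([[A, B], [C, D]], dtype=complex)

def abcd_to_s(M):
    A, B, C, D = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    den = A + B / Z0 + C * Z0 + D
    return np.array([[(A + B / Z0 - C * Z0 - D) / den, 2 * (A * D - B * C) / den],
                     [2 / den, (-A + B / Z0 - C * Z0 + D) / den]], dtype=complex)

def cascade(*s_blocks):
    """Cascade two-port segments by multiplying their ABCD matrices."""
    M = np.eye(2, dtype=complex)
    for S in s_blocks:
        M = M @ s_to_abcd(S)
    return abcd_to_s(M)

# sanity check: cascading two matched 3 dB attenuators gives about 6 dB of loss
att = np.array([[0.0, 10 ** (-3 / 20)], [10 ** (-3 / 20), 0.0]], dtype=complex)
print(20 * np.log10(abs(cascade(att, att)[1, 0])))   # approximately -6 dB
```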
Fast Image Processing with Fully-Convolutional Networks
We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator's action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphotorealistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 compared to the most accurate prior approximation scheme, while being the fastest. We show that our models generalize across datasets and across resolutions, and investigate a number of extensions of the presented approach.
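The specific architecture identified in the paper aggregates context with dilated convolutions; below is a scaled-down, generic stand-in in PyTorch (the depth, width, and normalization are simplified assumptions), meant only to illustrate how a fully-convolutional stack keeps full resolution while growing its receptive field. It is not the authors' released model.

```python
import torch
import torch.nn as nn

class TinyDilatedFCN(nn.Module):
    """Small fully-convolutional approximator: a stack of 3x3 convolutions
    with exponentially growing dilation, so the receptive field covers a
    large area while the output keeps full input resolution."""

    def __init__(self, channels=24, depth=5):
        super().__init__()
        layers, in_ch = [], 3
        for i in range(depth):
            dilation = 2 ** i
            layers += [nn.Conv2d(in_ch, channels, 3, padding=dilation, dilation=dilation),
                       nn.LeakyReLU(0.2, inplace=True)]
            in_ch = channels
        layers.append(nn.Conv2d(in_ch, 3, 1))   # 1x1 projection back to RGB
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = TinyDilatedFCN()
x = torch.rand(1, 3, 256, 256)      # a full-resolution input image
print(model(x).shape)                # torch.Size([1, 3, 256, 256])
# training would minimize e.g. MSE between model(input) and operator(input)
```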
Fatal asphyxial episodes in the very young: classification and diagnostic issues.
Infants and young children are exposed to a relatively limited range of circumstances that may result in accidental or inflicted asphyxial deaths. These usually involve situations that interfere with oxygen uptake by the blood, or that decrease the amount of circulating oxygen. Typically infants and toddlers asphyxiate in sleeping accidents where they smother when their external airways are covered, hang when clothing is caught on projections inside cots, or wedge when they slip between mattresses and walls. Overlaying may cause asphyxiation due to a combination of airway occlusion and mechanical asphyxia, as may inflicted asphyxia with a pillow. The diagnosis of asphyxiation in infancy is difficult as there are usually no positive findings at autopsy and so differentiating asphyxiation from sudden infant death syndrome (SIDS) based purely on the pathological features will usually not be possible. Similarly, the autopsy findings in inflicted and accidental suffocation will often be identical. Classifications of asphyxia are sometimes confusing as particular types of asphyxiating events may involve several processes and so it may not be possible to precisely compartmentalize a specific incident. For this reason asphyxial events have been classified as being due to: insufficient oxygen availability in the surrounding environment, critical reduction of oxygen transfer from the atmosphere to the blood, impairment of oxygen transport in the circulating blood, or compromise of cellular oxygen uptake. The range of possible findings at the death scene and autopsy are reviewed, and the likelihood of finding markers/indicators of asphyxia is discussed. The conclusion that asphyxiation has occurred often has to be made by integrating aspects of the history, scene, and autopsy, while recognizing that none of these are necessarily pathognomonic, and also by excluding other possibilities. However, even after full investigation a diagnosis of asphyxia may not be possible and a number of issues concerning possible lethal terminal mechanisms may remain unresolved.
Discovering Different Types of Topics: Factored Topic Models
In traditional topic models such as LDA, a word is generated by choosing a topic from a collection. However, existing topic models do not identify different types of topics in a document, such as topics that represent the content and topics that represent the sentiment. In this paper, our goal is to discover such different types of topics, if they exist. We represent our model as several parallel topic models (called topic factors), where each word is generated jointly from topics drawn from these factors. Since the latent membership of the word is now a vector, the learning algorithms become challenging. We show that using a variational approximation still allows us to keep the algorithm tractable. Our experiments over several datasets show that our approach consistently outperforms many classic topic models while also discovering fewer, more meaningful, topics.
Evolving Gomoku solver by genetic algorithm
Gomoku, also known as Gobang or five-in-a-row, is a popular two-player strategic board game. Given a 15×15 board, two players compete to be the first to obtain an unbroken row of five pieces horizontally, vertically or diagonally. Classic methods for solving such games are based on game-tree theory, for example the minimax tree. These methods have a clear disadvantage: the depth of search is a persistent bottleneck. In this paper we propose a genetic algorithm for solving the Gomoku game. We investigated the general framework for applying genetic algorithms to strategy games and designed the fitness function from various game-related aspects. Empirical experimental results showed that the proposed genetic solver can search in greater depth than traditional game-tree-based solvers, resulting in better and more enjoyable solutions, and does so more efficiently.
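A bare-bones genetic loop of the kind described, evolving a real-valued genome that could encode weights of a Gomoku board-evaluation heuristic; the selection scheme, mutation parameters, and the placeholder fitness are illustrative assumptions, since the paper's fitness is built from game-specific criteria.

```python
import random

def evolve(fitness, genome_len, pop_size=40, generations=100,
           mutation_rate=0.1, elite=4, seed=0):
    """Generic GA: truncation selection, uniform crossover, Gaussian mutation.
    Each genome could encode weights of a board-evaluation function
    (open threes, fours, blocked lines, ...)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]
        children = list(scored[:elite])                      # elitism
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [g + rng.gauss(0, 0.2) if rng.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# placeholder fitness; in practice this would play games against a reference opponent
target = [0.9, -0.4, 0.2, 0.7, 0.1]
best = evolve(lambda g: -sum((x - t) ** 2 for x, t in zip(g, target)), genome_len=5)
print([round(x, 2) for x in best])
```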
Profanity in media associated with attitudes and behavior regarding profanity use and aggression.
OBJECTIVE We hypothesized that exposure to profanity in media would be directly related to beliefs and behavior regarding profanity and indirectly to aggressive behavior. METHODS We examined these associations among 223 adolescents attending a large Midwestern middle school. Participants completed a number of questionnaires examining their exposure to media, attitudes and behavior regarding profanity, and aggressive behavior. RESULTS Results revealed a positive association between exposure to profanity in multiple forms of media and beliefs about profanity, profanity use, and engagement in physical and relational aggression. Specifically, attitudes toward profanity use mediated the relationship between exposure to profanity in media and subsequent behavior involving profanity use and aggression. CONCLUSIONS The main hypothesis was confirmed, and implications for the rating industry and research field are discussed.
The impact of hospital mergers on treatment intensity and health outcomes.
OBJECTIVE To analyze the impact of hospital mergers on treatment intensity and health outcomes. DATA Hospital inpatient data from California for 1990 through 2006, encompassing 40 mergers. STUDY DESIGN I used a geographic-based IV approach to determine the effect of a zip code's exposure to a merger. The merged facility's market share represents exposure, instrumented with combined premerge shares. Additional specifications include Herfindahl Index (HHI), instrumented with predicted change in HHI. RESULTS The primary specification results indicate that merger completion is associated with a 3.7 percent increase in the utilization of bypass surgery and angioplasty and a 1.7 percent increase in inpatient mortality above averages in 2000 for the average zip code. Isolating the competition mechanism mutes the treatment intensity result slightly, but it more than doubles the merger exposure effect on inpatient mortality to a 3.9 percent increase. The competition mechanism is associated with a sizeable increase in number of procedures. CONCLUSIONS Unlike previous studies, this analysis finds that hospital mergers are associated with increased treatment intensity and higher inpatient mortality rates among heart disease patients. Access to additional outcome measures such as 30-day mortality and readmission rates might shed additional light on whether the relationship between these outcomes is causal.
The Making of Modern Social Psychology: The Hidden Story of How an International Social Science was Created
List of Figures and Tables. Preface. Acknowledgements. List of Abbreviations. I: The Quest for a Social Psychology of Human Beings. 1. The Birth of a New Science. 2. Two Sources of Modern Social Psychology. II. The West European Experiment. 3. Americans and Europeans. 4. The Transnational Committee: from New York to Rome. 5. The European Map of Social Psychology in the Mid-1960s. 6. The Second Milestone for European Social Psychology. 7. The Louvain Summer School. 8. The Ford Foundation and Fundraising for Europe. III. The East European Experiment. 9. The First Encounter of a Small Science with Big History. 10. A Strange Animal. IV. The Latin American Experiment. 11. Latin American Odyssey. 12. A Second Encounter with History. 13. An 'Invisible College.'. V. Crossing the Atlantic. 14. A Crisis Delayed. 15. Crossing the Atlantic. 16. Pilgrims' Progress. 17. Rays and Shadows above the Transnational Committee. Appendix. Notes. References. Index.
A Simple, Low-Cost Conductive Composite Material for 3D Printing of Electronic Sensors
3D printing technology can produce complex objects directly from computer aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes ('rapid prototyping') before production. In recent years, however, there has been a move to adopt the technology as a full-scale manufacturing solution. The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant a wider user base is now able to have access to desktop manufacturing platforms enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present the formulation of a simple conductive thermoplastic composite we term 'carbomorph' and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices along with printed objects with embedded sensing capability. This advance in low-cost 3D printing will offer a new paradigm in the 3D printing field, with printed sensors and electronics embedded inside 3D printed objects in a single build process without requiring complex or expensive materials incorporating additives such as carbon nanotubes.
ASER: A Large-scale Eventuality Knowledge Graph
Understanding human language requires complex world knowledge. However, existing large-scale knowledge graphs mainly focus on knowledge about entities while ignoring knowledge about activities, states, or events, which are used to describe how entities or things act in the real world. To fill this gap, we develop ASER (activities, states, events, and their relations), a large-scale eventuality knowledge graph extracted from more than 11-billion-token unstructured textual data. ASER contains 15 relation types belonging to five categories, 194 million unique eventualities, and 64 million unique edges among them. Both human and extrinsic evaluations demonstrate the quality and effectiveness of ASER.
Pharmacotherapy for self-injurious behavior: Preliminary tests of the D1 hypothesis
1. The D1 dopamine hypersensitivity model of self-injurious behavior leads to a testable clinical hypothesis: that the mixed D1/D2 dopamine antagonist fluphenazine may improve the symptoms of self-injurious patients. 2. The hypothesis was tested in an open pilot trial in six patients and a partially controlled trial in nine patients. 3. Some degree of clinical improvement was observed in eleven of the fifteen. 4. The trials represent a partial affirmation of the D1 hypothesis. However, it is also clear that conventional methodology for psychopharmacologic research is inappropriate for the proper clinical evaluation of self-injurious patients. The proper method should include the following elements: i) An epidemiologically representative sample ii) A naturalistic study environment iii) A longitudinal design with long-term follow-up iv) Concurrent behavioral ratings using direct observations and a reliable, treatment-sensitive rating scale. 5. Before subjects enter a clinical trial of an experimental medication, a neuropsychiatric differential diagnosis should be applied to limit the diversity of the sample.
Silver nanowire transparent electrodes for liquid crystal-based smart windows
A significant manufacturing cost of polymer-dispersed liquid crystal (PDLC) smart windows results from the use of indium tin oxide (ITO) as the transparent electrode. In this work, films of silver nanowires are proposed as an alternative electrode and are integrated into PDLC smart windows. Both the materials and fabrication costs of the nanowire electrodes are significantly less than those of ITO electrodes. Additionally, nanowire electrodes are shown to exhibit superior electro-optical characteristics. The transmittance of a nanowire electrode-based PDLC smart window is both higher in the on-state and lower in the off-state compared to an equivalent device fabricated using ITO. Furthermore, it is found that a lower external field strength (voltage) is required to actuate the nanowire-based smart window.
SCNN: An accelerator for compressed-sparse convolutional neural networks
Convolutional Neural Networks (CNNs) have emerged as a fundamental technology for machine learning. High performance and extreme energy efficiency are critical for deployments of CNNs, especially in mobile platforms such as autonomous vehicles, cameras, and electronic personal assistants. This paper introduces the Sparse CNN (SCNN) accelerator architecture, which improves performance and energy efficiency by exploiting the zero-valued weights that stem from network pruning during training and zero-valued activations that arise from the common ReLU operator. Specifically, SCNN employs a novel dataflow that enables maintaining the sparse weights and activations in a compressed encoding, which eliminates unnecessary data transfers and reduces storage requirements. Furthermore, the SCNN dataflow facilitates efficient delivery of those weights and activations to a multiplier array, where they are extensively reused; product accumulation is performed in a novel accumulator array. On contemporary neural networks, SCNN can improve both performance and energy by a factor of 2.7x and 2.3x, respectively, over a comparably provisioned dense CNN accelerator.
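The core dataflow idea can be mimicked in a few lines: keep only the nonzero weights and activations together with their coordinates, multiply the two compressed lists, and scatter-accumulate each product into the output position it feeds. The 1-D sketch below is a conceptual analogue of that compressed multiply, not the accelerator's actual multiplier and accumulator arrays.

```python
import numpy as np

def sparse_conv1d(act_vals, act_idx, w_vals, w_idx, out_len):
    """SCNN-flavored sketch: form all products of the compressed (nonzero)
    activation and weight lists and scatter-accumulate each product into
    the output position it contributes to. Zero operands never reach the
    multiply loop."""
    out = np.zeros(out_len)
    for av, ai in zip(act_vals, act_idx):
        for wv, wi in zip(w_vals, w_idx):
            out[ai + wi] += av * wv      # coordinate arithmetic -> accumulator slot
    return out

# dense reference for comparison
act = np.array([0.0, 2.0, 0.0, 0.0, 1.5, 0.0])
w = np.array([0.5, 0.0, -1.0])
nz_a, nz_w = np.nonzero(act)[0], np.nonzero(w)[0]
sparse = sparse_conv1d(act[nz_a], nz_a, w[nz_w], nz_w, out_len=len(act) + len(w) - 1)
dense = np.convolve(act, w)              # full zero-padded convolution
print(np.allclose(sparse, dense))        # True
```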
Theoretical Linear Convergence of Unfolded ISTA and Its Practical Weights and Thresholds
In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery. We introduce a weight structure that is necessary for asymptotic convergence to the true sparse signal. With this structure, unfolded ISTA can attain a linear convergence, which is better than the sublinear convergence of ISTA/FISTA in general cases. Furthermore, we propose to incorporate thresholding in the network to perform support selection, which is easy to implement and able to boost the convergence rate both theoretically and empirically. Extensive simulations, including sparse vector recovery and a compressive sensing experiment on real image data, corroborate our theoretical results and demonstrate their practical usefulness. We have made our codes publicly available.
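For reference, here is the classical ISTA iteration that gets unfolded, in NumPy; LISTA-style networks replace the fixed matrices and threshold below with per-layer learned parameters. The problem sizes and regularization weight are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iters=200):
    """Classical ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Unfolded ISTA (LISTA) turns each iteration into a network layer and
    learns the matrices and thresholds instead of fixing them."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    W = A.T / L                            # applied to the residual
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x + W @ (b - A @ x), lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.normal(size=8)
b = A @ x_true + 0.01 * rng.normal(size=64)
x_hat = ista(A, b, lam=0.05)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # small relative error
```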
Dynamics of barred galaxies
Some 30% of disc galaxies have a pronounced central bar feature in the disc plane and many more have weaker features of a similar kind. Kinematic data indicate that the bar constitutes a major non-axisymmetric component of the mass distribution and that the bar pattern tumbles rapidly about the axis normal to the disc plane. The observed motions are consistent with material within the bar streaming along highly elongated orbits aligned with the rotating major axis. A barred galaxy may also contain a spheroidal bulge at its centre, spirals in the outer disc and, less commonly, other features such as a ring or lens. Mild asymmetries in both the light and kinematics are quite common. We review the main problems presented by these complicated dynamical systems and summarize the effort so far made towards their solution, emphasizing results which appear secure. Bars are probably formed through a global dynamical instability of a rotationally supported galactic disc. Studies of the orbital structure seem to indicate that most stars in the bar follow regular orbits but that a small fraction may be stochastic. Theoretical work on the three-dimensional structure of bars is in its infancy, but first results suggest that bars should be thicker in the third dimension than the disc from which they formed. Gas flow patterns within bars seem to be reasonably well understood, as are the conditions under which straight offset dust lanes are formed. However, no observation so far supports the widely held idea that the spiral arms are the driven response to the bar, while evidence accumulates that the spiral patterns are distinct dynamical features having a different pattern speed. Both the gaseous and stellar distributions are expected to evolve on a time-scale of many bar rotation periods.
Revisiting Simple Neural Networks for Learning Representations of Knowledge Graphs
We address the problem of learning vector representations for entities and relations in Knowledge Graphs (KGs) for Knowledge Base Completion (KBC). This problem has received significant attention in the past few years and multiple methods have been proposed. Most of the existing methods in the literature use a predefined characteristic scoring function for evaluating the correctness of KG triples. These scoring functions distinguish correct triples (high score) from incorrect ones (low score). However, their performance varies across different datasets. In this work, we demonstrate that a simple neural network based score function can consistently achieve near state-of-the-art performance on multiple datasets. We also quantitatively demonstrate biases in standard benchmark datasets, and highlight the need to perform evaluation spanning various datasets.
Co-PACRR: A Context-Aware Neural IR Model for Ad-hoc Retrieval
Neural IR models, such as DRMM and PACRR, have achieved strong results by successfully capturing relevance matching signals. We argue that the context of these matching signals is also important. Intuitively, when extracting, modeling, and combining matching signals, one would like to consider the surrounding text (local context) as well as other signals from the same document that can contribute to the overall relevance score. In this work, we highlight three potential shortcomings caused by not considering context information and propose three neural ingredients to address them: a disambiguation component, cascade k-max pooling, and a shuffling combination layer. Incorporating these components into the PACRR model yields Co-PACRR, a novel context-aware neural IR model. Extensive comparisons with established models on TREC Web Track data confirm that the proposed model can achieve superior search results. In addition, an ablation analysis is conducted to gain insights into the impact of and interactions between different components. We release our code to enable future comparisons.
Poster Abstract: Resource Aware Placement of Data Stream Analytics Operators on Fog Infrastructure for Internet of Things Applications
While cloud computing led the way towards a revolutionary change in modern-day computing, further developments gave way to the Internet of Things and its own range of highly interactive applications. While such a paradigm is more distributed in reach, it also brings forth its own set of challenges in the form of latency-sensitive applications, where a quick response contributes strongly to efficient usage and QoS (Quality of Service). Fog computing, which aims to address such challenges, is rapidly changing the distributed computing landscape by extending the cloud computing paradigm to include widespread resources located at the network edge. While the fog paradigm makes use of edge-ward devices capable of computing, networking and storage, one of the key impending challenges is to determine where to place the data analytics operators so as to maximize efficiency and minimize cost for the network and its traffic; in this work in progress, we seek to propose an efficient algorithmic solution to this placement problem.
A next-best-view system for autonomous 3-D object reconstruction
The focus of this paper is to design and implement a system capable of automatically reconstructing a prototype three-dimensional model from a minimum number of range images of an object. Given an ideal 3-D object model, the system iteratively renders range and intensity images of the model from a specified position, assimilates the range information into a prototype model, and determines the sensor pose (position and orientation) from which an optimal amount of previously unrecorded information may be acquired. Reconstruction is terminated when the model meets a given threshold of accuracy. Such a system has applications in the context of robot navigation, manufacturing, or hazardous materials handling. The system has been tested successfully on several synthetic data models, and each set of results was found to be reasonably consistent with an intuitive human search. The number of views necessary to reconstruct an adequate 3-D prototype depends on the complexity of the object or scene and the initial data collected. The prototype models which the system recovers compare well with the ideal models.
LSTM vs. BM25 for Open-domain QA: A Hands-on Comparison of Effectiveness and Efficiency
Recent advances in neural networks, along with the growth of rich and diverse community question answering (cQA) data, have enabled researchers to construct robust open-domain question answering (QA) systems. It is often claimed that such state-of-the-art QA systems far outperform traditional IR baselines such as BM25. However, most such studies rely on relatively small data sets, e.g., those extracted from the old TREC QA tracks. Given massive training data plus a separate corpus of Q&A pairs as the target knowledge source, how well would such a system really perform? How fast would it respond? In this demonstration, we provide the attendees of SIGIR 2017 an opportunity to experience a live comparison of two open-domain QA systems, one based on a long short-term memory (LSTM) architecture with over 11 million Yahoo! Chiebukuro (i.e., Japanese Yahoo! Answers) questions and over 27.4 million answers for training, and the other based on BM25. Both systems use the same Q&A knowledge source for answer retrieval. Our core demonstration system is a pair of Japanese monolingual QA systems, but we leverage machine translation for letting the SIGIR attendees enter English questions and compare the Japanese responses from the two systems after translating them into English.
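As a reference point for the comparison described above, here is a minimal sketch of the BM25 scoring function used by the traditional baseline (one common smoothed-idf variant; parameter values, tokenization, and the toy corpus are illustrative, not those of the demonstrated system).

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each document (a list of tokens) against the query with BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))   # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["who", "wrote", "hamlet"],
        ["hamlet", "is", "a", "play", "by", "shakespeare"]]
print(bm25_scores(["hamlet", "shakespeare"], docs))
```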
Enhanced antibacterial properties, biocompatibility, and corrosion resistance of degradable Mg-Nd-Zn-Zr alloy.
Magnesium (Mg), a potential biodegradable material, has recently received increasing attention due to its unique antibacterial property. However, rapid corrosion in the physiological environment and potential toxicity limit clinical applications. In order to improve the corrosion resistance meanwhile not compromise the antibacterial activity, a novel Mg alloy, Mg-Nd-Zn-Zr (Hereafter, denoted as JDBM), is fabricated by alloying with neodymium (Nd), zinc (Zn), zirconium (Zr). pH value, Mg ion concentration, corrosion rate and electrochemical test show that the corrosion resistance of JDBM is enhanced. A systematic investigation of the in vitro and in vivo antibacterial capability of JDBM is performed. The results of microbiological counting, CLSM, SEM in vitro, and microbiological cultures, histopathology in vivo consistently show JDBM enhanced the antibacterial activity. In addition, the significantly improved cytocompatibility is observed from JDBM. The results suggest that JDBM effectively enhances the corrosion resistance, biocompatibility and antimicrobial properties of Mg by alloying with the proper amount of Zn, Zr and Nd.
SOME BEAUTIFUL EQUATIONS OF MATHEMATICAL PHYSICS
The basic ideas and the important role of gauge principles in modern elementary particle physics are outlined. There are three theoretically consistent gauge principles in quantum field theory: the spin-1 gauge principle of electromagnetism and the standard model, the spin-2 gauge principle of general relativity, and the spin-3/2 gauge principle of supergravity.
(Meta-) Evaluation of Machine Translation
This paper evaluates the translation quality of machine translation systems for 8 language pairs: translating French, German, Spanish, and Czech to English and back. We carried out an extensive human evaluation which allowed us not only to rank the different MT systems, but also to perform higher-level analysis of the evaluation process. We measured timing and intra- and inter-annotator agreement for three types of subjective evaluation. We measured the correlation of automatic evaluation metrics with human judgments. This meta-evaluation reveals surprising facts about the most commonly used methodologies.
On the convergence properties of a K-step averaging stochastic gradient descent algorithm for nonconvex optimization
We adopt and analyze a synchronous K-step averaging stochastic gradient descent algorithm which we call K-AVG for solving large scale machine learning problems. We establish the convergence results of K-AVG for nonconvex objectives. Our analysis of K-AVG applies to many existing variants of synchronous SGD. We explain why the K-step delay is necessary and leads to better performance than traditional parallel stochastic gradient descent, which is equivalent to K-AVG with K = 1. We also show that K-AVG scales better with the number of learners than asynchronous stochastic gradient descent (ASGD). Another advantage of K-AVG over ASGD is that it allows larger stepsizes and facilitates faster convergence. On a cluster of 128 GPUs, K-AVG is faster than ASGD implementations and achieves better accuracies and faster convergence for training with the CIFAR-10 dataset.
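A minimal sketch of the K-AVG scheme as we read it: each of several learners takes K local SGD steps from the shared parameters, after which the parameters are replaced by the average of the local copies; K = 1 recovers ordinary synchronous parallel SGD. The toy gradient oracle and all names are illustrative.

```python
import numpy as np

def k_avg_sgd(grad_fn, x0, n_learners=4, k=8, n_rounds=50, lr=0.1, rng=None):
    """Synchronous K-step averaging SGD (K-AVG), sketched serially.

    Each learner takes k local stochastic gradient steps from the
    current global parameters; the global parameters are then replaced
    by the average of the learners' local copies.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_rounds):
        local_copies = []
        for _ in range(n_learners):
            xl = x.copy()
            for _ in range(k):
                xl -= lr * grad_fn(xl, rng)   # local SGD step
            local_copies.append(xl)
        x = np.mean(local_copies, axis=0)     # synchronous averaging
    return x

# Toy problem: noisy gradients of f(x) = 0.5 * ||x||^2.
grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
x_final = k_avg_sgd(grad, x0=np.ones(10))
```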
A social network-aware top-N recommender system using GPU
A book recommender system is very useful for a digital library. Good book recommender systems can effectively help users find interesting and relevant books from the massive resources, by providing an individual book recommendation list for each end-user. By now, a variety of collaborative filtering algorithms have been invented, which are the cores of most recommender systems. However, because of the explosion of information, especially on the Internet, improving the efficiency of the collaborative filtering (CF) algorithm becomes more and more important. In this paper, we first propose a parallel Top-N recommendation algorithm in CUDA (Compute Unified Device Architecture) which combines collaborative filtering and a trust-based approach to deal with the cold-start user problem. Then, based on this algorithm, we present a parallel book recommender system on a GPU (Graphics Processing Unit) for the CADAL digital library platform. Our experimental results show that our algorithm is very efficient at processing large-scale datasets with good accuracy, and we report the impact of different parameter values on the recommendation performance.
Customer loyalty in e-commerce : an exploration of its antecedents and consequences
This paper investigates the antecedents and consequences of customer loyalty in an online business-to-consumer (B2C) context. We identify eight factors (the 8Cs—customization, contact interactivity, care, community, convenience, cultivation, choice, and character) that potentially impact e-loyalty and develop scales to measure these factors. Data collected from 1,211 online customers demonstrate that all these factors, except convenience, impact e-loyalty. The data also reveal that e-loyalty has an impact on two customer-related outcomes: word-of-mouth promotion and willingness to pay more.
Mediator Effects of Positive Emotions on Social Support and Depression among Adolescents Suffering from Mobile Phone Addiction.
BACKGROUND Depression is a common mental disorder that is widely seen among adolescents suffering from mobile phone addiction. While it is well known that both positive emotions and social support can have a positive impact by helping individuals to maintain a positive attitude, the correlation between positive emotions, social support, and depression among these adolescents remains to be investigated. This study examined the mediator effects of positive emotions on the relationship between social support and depression among adolescents suffering from mobile phone addiction. SUBJECTS AND METHODS For this study, conducted in 2016, we selected 1,346 adolescent students from three middle schools (ranging from Junior Grade One to Senior Grade Three) in Hunan Province of China, to participate in the survey. Participants were selected using the stratified cluster random sampling method, and all participants remained anonymous throughout the study. Each participant completed the Self-made General Situation Questionnaire, the Social Support Rating Scale, the Positive and Negative Affect Schedule, the Center for Epidemiological Studies Depression Scale, and the Mobile Phone Addiction Tendency Scale. RESULTS There was significant positive correlation between positive emotions and social support. Both positive emotions and social support demonstrated significant negative correlation with depression. Positive emotions had partial mediator effects on the relationship between social support and depression (P<0.01). CONCLUSIONS Both social support and positive emotions can lower levels of depression among adolescents suffering from mobile phone addiction. Social support contributes to positive emotions in adolescents with mobile phone addiction, thereby reducing their levels of depression. These findings suggest that more support and care should be given to this particular adolescent population.
Development of the Therapist Empathy Scale.
BACKGROUND Few measures exist to examine therapist empathy as it occurs in session. AIMS A 9-item observer rating scale, called the Therapist Empathy Scale (TES), was developed based on Watson's (1999) work to assess affective, cognitive, attitudinal, and attunement aspects of therapist empathy. The aim of this study was to evaluate the inter-rater reliability, internal consistency, and construct and criterion validity of the TES. METHOD Raters evaluated therapist empathy in 315 client sessions conducted by 91 therapists, using data from a multi-site therapist training trial (Martino et al., 2010) in Motivational Interviewing (MI). RESULTS Inter-rater reliability (ICC = .87 to .91) and internal consistency (Cronbach's alpha = .94) were high. Confirmatory factor analyses indicated some support for single-factor fit. Convergent validity was supported by correlations between TES scores and MI fundamental adherence (r range .50 to .67) and competence scores (r range .56 to .69). Discriminant validity was indicated by negative or nonsignificant correlations between TES and MI-inconsistent behavior (r range .05 to -.33). CONCLUSIONS The TES demonstrates excellent inter-rater reliability and internal consistency. Results indicate some support for a single-factor solution and convergent and discriminant validity. Future studies should examine the use of the TES to evaluate therapist empathy in different psychotherapy approaches and to determine the impact of therapist empathy on client outcome.
Online Learning Rate Adaptation with Hypergradient Descent
We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice. We demonstrate the effectiveness of the method in a range of optimization problems by applying it to stochastic gradient descent, stochastic gradient descent with Nesterov momentum, and Adam, showing that it significantly reduces the need for the manual tuning of the initial learning rate for these commonly used algorithms. Our method works by dynamically updating the learning rate during optimization using the gradient with respect to the learning rate of the update rule itself. Computing this “hypergradient” needs little additional computation, requires only one extra copy of the original gradient to be stored in memory, and relies upon nothing more than what is provided by reverse-mode automatic differentiation.
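A minimal sketch of the rule for plain SGD: the hypergradient of the loss with respect to the learning rate reduces to (minus) the dot product of the current and previous gradients, so the learning rate itself takes a gradient step before each parameter update. The toy objective and all parameter names are illustrative.

```python
import numpy as np

def sgd_hd(grad_fn, theta0, alpha0=0.01, beta=1e-4, n_steps=200, rng=None):
    """SGD with hypergradient descent on the learning rate (sketch).

    The learning rate alpha is updated by gradient descent on the loss,
    using the fact that d(loss)/d(alpha) at step t equals minus the dot
    product of the current gradient with the previous one.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    theta = np.array(theta0, dtype=float)
    alpha = alpha0
    g_prev = np.zeros_like(theta)
    for _ in range(n_steps):
        g = grad_fn(theta, rng)
        alpha += beta * np.dot(g, g_prev)   # hypergradient update of alpha
        theta -= alpha * g                  # usual SGD update
        g_prev = g
    return theta, alpha

# Toy quadratic with noisy gradients.
grad = lambda th, rng: th + 0.05 * rng.standard_normal(th.shape)
theta, alpha = sgd_hd(grad, theta0=np.ones(5))
```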
Factors influencing the adoption of internet banking: An integration of TAM and TPB with perceived risk and perceived benefit
Online banking (Internet banking) has emerged as one of the most profitable e-commerce applications over the last decade. Although several prior research projects have focused on the factors that impact on the adoption of information technology or Internet, there is limited empirical work which simultaneously captures the success factors (positive factors) and resistance factors (negative factors) that help customers to adopt online banking. This paper explores and integrates the various advantages of online banking to form a positive factor named perceived benefit. In addition, drawing from perceived risk theory, five specific risk facets – financial, security/privacy, performance, social and time risk – are synthesized with perceived benefit as well as integrated with the technology acceptance model (TAM) and theory of planned behavior (TPB) model to propose a theoretical model to explain customers’ intention to use online banking. The results indicated that the intention to use online banking is adversely affected mainly by the security/privacy risk, as well as financial risk and is positively affected mainly by perceived benefit, attitude and perceived usefulness. The implications of integrating perceived benefit and perceived risk into the proposed online banking adoption model are discussed.
DISRUPTIVE INNOVATION and CATALYTIC CHANGE in Higher Education
The downfall of many successful and seemingly invincible companies has been precipitated by a disruptive innovation—that is, an innovation that makes a complicated and expensive product simpler and cheaper and thereby attracts a new set of customers. Clayton Christensen, Professor of Business Administration at the Harvard Business School, describes how disruptive companies establish a foothold in the market, expand that market dramatically, and then inexorably migrate up the quality chain. Ultimately they pin the original leaders in the highest tiers of the market, where there simply is not enough volume to sustain them all. Christensen extends his theory from the business realm to higher education. Online business courses, for example, now offer lower-end and more convenient access to courses that can improve students' credentials or help them switch careers—which is often precisely what the student customers want to accomplish by enrolling. Traditional colleges and universities don't consider themselves in competition with these new entrants, but in the process of retreating from them they risk becoming more and more out of touch with the mainstream and, therefore, increasingly irrelevant. Harvard Business School is being disrupted at the bottom of its core market by corporations that are setting up their own universities for their best employees. HBS still holds the advantage in providing networking, connections, and brand. No institutional mission focuses on these jobs, however. Colleges and universities are being disrupted in many ways similar to Harvard Business School. The nation's top institutions—the original leaders in the higher education market—are moving up the quality chain and losing touch with the mainstream. It is time that they completely rethink their model.
Stressed out? Make some modifications!
Stress granules and processing bodies are related mRNA-containing granules implicated in controlling mRNA translation and decay. A genomic screen identifies numerous factors affecting granule formation, including proteins involved in O-GlcNAc modifications. These results highlight the importance of post-translational modifications in translational control and mRNP granule formation.
Adversarial nets with perceptual losses for text-to-image synthesis
Recent approaches in generative adversarial networks (GANs) can automatically synthesize realistic images from descriptive text. Despite the overall fair quality, the generated images often expose visible flaws that lack structural definition for an object of interest. In this paper, we aim to extend the state of the art for GAN-based text-to-image synthesis by improving the perceptual quality of generated images. Differentiated from previous work, our synthetic image generator optimizes on perceptual loss functions that measure pixel, feature activation, and texture differences against a natural image. We present visually more compelling synthetic images of birds and flowers generated from text descriptions in comparison to some of the most prominent existing work.
Radiologic mimics of cirrhosis.
OBJECTIVE The objective of this article is to provide a practical review of the conditions other than cirrhosis that can result in diffuse surface nodularity of the liver or portal hypertension. CONCLUSION Conditions that can mimic cirrhosis on imaging include pseudocirrhosis of treated breast cancer metastases to the liver, fulminant hepatic failure, miliary metastases, sarcoidosis, schistosomiasis, congenital hepatic fibrosis, idiopathic portal hypertension, early primary biliary cirrhosis, chronic Budd-Chiari syndrome, chronic portal vein thrombosis, and nodular regenerative hyperplasia.
Baseline clinical status as a predictor of methylprednisolone response in multiple sclerosis relapses.
BACKGROUND To date, there are no available factors to predict the outcome after multiple sclerosis relapse. AIM To investigate factors that may be useful for predicting response to methylprednisolone treatment, following a relapse of multiple sclerosis (MS). METHODS The study included 48 MS patients enrolled in a double-blind multicenter trial to receive intravenous versus oral high-dose methylprednisolone treatment. Associations were sought between the disability status prior to relapse and the relapse severity, determined by changes in the Expanded Disability Status Scale (EDSS) score, as well as the improvements after treatment. We also analyzed the relationships between the number of magnetic resonance imaging (MRI) gadolinium-enhancing lesions (Gd+) and improvement. RESULTS A higher EDSS score before relapse was associated with more severe relapses (p = 0.04) and less marked improvement (odds ratio (OR) 1.8; 95% CI (1.2-2.2); p = 0.05) after methylprednisolone treatment. Relapse severity (p = 0.29) and the number of Gd+ lesions at relapse (p = 0.41) were not related with improvement. CONCLUSIONS Clinical baseline status prior to MS relapse is a predictor of response to methylprednisolone treatment.
PAC-learning in the presence of evasion adversaries
The existence of evasion attacks during the test phase of machine learning algorithms represents a significant challenge to both their deployment and understanding. These attacks can be carried out by adding imperceptible perturbations to inputs to generate adversarial examples and finding effective defenses and detectors has proven to be difficult. In this paper, we step away from the attack-defense arms race and seek to understand the limits of what can be learned in the presence of an evasion adversary. In particular, we extend the Probably Approximately Correct (PAC)-learning framework to account for the presence of an adversary. We first define corrupted hypothesis classes which arise from standard binary hypothesis classes in the presence of an evasion adversary and derive the Vapnik-Chervonenkis (VC)-dimension for these, denoted as the adversarial VC-dimension. We then show that sample complexity upper bounds from the Fundamental Theorem of Statistical learning can be extended to the case of evasion adversaries, where the sample complexity is controlled by the adversarial VC-dimension. We then explicitly derive the adversarial VC-dimension for halfspace classifiers in the presence of a sample-wise norm-constrained adversary of the type commonly studied for evasion attacks and show that it is the same as the standard VC-dimension, closing an open question. Finally, we prove that the adversarial VC-dimension can be either larger or smaller than the standard VC-dimension depending on the hypothesis class and adversary, making it an interesting object of study in its own right.
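To make "controlled by the adversarial VC-dimension" concrete, substituting the adversarial VC-dimension into the usual agnostic sample-complexity bound of the fundamental theorem gives, up to constants, a bound of the following shape; this is our paraphrase of the abstract's claim, not the paper's exact statement.

```latex
m(\epsilon,\delta) \;=\; O\!\left(\frac{\mathrm{AVC}(\mathcal{H},\mathcal{R}) + \log(1/\delta)}{\epsilon^{2}}\right),
```

where $\mathrm{AVC}(\mathcal{H},\mathcal{R})$ denotes the adversarial VC-dimension of the hypothesis class $\mathcal{H}$ under the adversary's constraint relation $\mathcal{R}$, $\epsilon$ the excess-risk target, and $\delta$ the failure probability.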
A Probabilistic Model for Semantic Word Vectors
Vector representations of words capture relationships in words’ functions and meanings. Many existing techniques for inducing such representations from data use a pipeline of hand-coded processing techniques. Neural language models offer principled techniques to learn word vectors using a probabilistic modeling approach. However, learning word vectors via language modeling produces representations with a syntactic focus, where word similarity is based upon how words are used in sentences. In this work we wish to learn word representations to encode word meaning – semantics. We introduce a model which learns semantically focused word vectors using a probabilistic model of documents. We evaluate the model’s word vectors in two tasks of sentiment analysis.
Vectorized Bloom filters for advanced SIMD processors
Analytics are at the core of many business intelligence tasks. Efficient query execution is facilitated by advanced hardware features, such as multi-core parallelism, shared-nothing low-latency caches, and SIMD vector instructions. Only recently, the SIMD capabilities of mainstream hardware have been augmented with wider vectors and non-contiguous loads termed gathers. While analytical DBMSs minimize the use of indexes in favor of scans based on sequential memory accesses, some data structures remain crucial. The Bloom filter, one such example, is the most efficient structure for filtering tuples based on their existence in a set and its performance is critical when joining tables with vastly different cardinalities. We introduce a vectorized implementation for probing Bloom filters based on gathers that eliminates conditional control flow and is independent of the SIMD length. Our techniques are generic and can be reused for accelerating other database operations. Our evaluation indicates a significant performance improvement over scalar code that can exceed 3X when the Bloom filter is cache-resident.
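A rough software analogue of the idea, with NumPy fancy indexing standing in for the SIMD gather: all probe keys are hashed and their filter bits fetched in bulk, with no per-key conditional control flow. The multiplicative hash, filter size, and data below are toy choices for illustration only, not the paper's implementation.

```python
import numpy as np

def make_bloom(keys, m_bits=1 << 16, k=4, seed=7):
    """Build a toy Bloom filter: k multiplicative hashes over an m_bits bit array."""
    rng = np.random.default_rng(seed)
    salts = rng.integers(1, 2**31 - 1, size=k, dtype=np.uint64)
    m = np.uint64(m_bits)
    bits = np.zeros(m_bits, dtype=np.uint8)
    for s in salts:
        bits[(keys.astype(np.uint64) * s) % m] = 1
    return bits, salts

def probe_bloom_vectorized(bits, salts, probe_keys):
    """Branch-free batch probe: fancy indexing plays the role of a SIMD gather."""
    m = np.uint64(bits.size)
    hit = np.ones(probe_keys.size, dtype=np.uint8)
    for s in salts:
        hit &= bits[(probe_keys.astype(np.uint64) * s) % m]   # gather + AND, no branches
    return hit.astype(bool)

keys = np.arange(0, 1000, 2, dtype=np.uint64)     # insert the even keys
bits, salts = make_bloom(keys)
maybe_present = probe_bloom_vectorized(bits, salts, np.arange(1000, dtype=np.uint64))
assert maybe_present[::2].all()                   # Bloom filters never give false negatives
```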
Recent status scores for version 6 of the Addiction Severity Index (ASI-6).
AIMS To describe the derivation of recent status scores (RSSs) for version 6 of the Addiction Severity Index (ASI-6). DESIGN 118 ASI-6 recent status items were subjected to nonparametric item response theory (NIRT) analyses followed by confirmatory factor analysis (CFA). Generalizability and concurrent validity of the derived scores were determined. SETTING AND PARTICIPANTS A total of 607 recent admissions to a variety of substance abuse treatment programs constituted the derivation sample; a subset (n = 252) comprised the validity sample. MEASUREMENTS The ASI-6 interview and a validity battery of primarily self-report questionnaires that included at least one measure corresponding to each of the seven ASI domains were administered. FINDINGS Nine summary scales describing recent status that achieved or approached both high scalability and reliability were derived; one scale for each of six areas (medical, employment/finances, alcohol, drug, legal, psychiatric) and three scales for the family/social area. Intercorrelations among the RSSs also supported the multi-dimensionality of the ASI-6. Concurrent validity analyses yielded strong evidence supporting the validity of six of the RSSs (medical, alcohol, drug, employment, family/social problems, psychiatric). Evidence was weaker for the legal, family/social support and child problems RSSs. Generalizability analyses of the scales to males versus females and whites versus blacks supported the comparability of the findings, with slight exceptions. CONCLUSIONS The psychometric analyses to derive Addiction Severity Index version 6 recent status scores support the multi-dimensionality of the Addiction Severity Index version 6 (i.e. the relative independence of different life functioning areas), consistent with research on earlier editions of the instrument. In general, the Addiction Severity Index version 6 scales demonstrate acceptable scalability, reliability and concurrent validity. While questions remain about the generalizability of some scales to population subgroups, the overall findings coupled with updated and more extensive content in the Addiction Severity Index version 6 support its use in clinical practice and research.
A Survey of Colormaps in Visualization
Colormaps are a vital method for users to gain insights into data in a visualization. With a good choice of colormaps, users are able to acquire information in the data more effectively and efficiently. In this survey, we attempt to provide readers with a comprehensive review of colormap generation techniques and provide readers a taxonomy which is helpful for finding appropriate techniques to use for their data and applications. Specifically, we first briefly introduce the basics of color spaces including color appearance models. In the core of our paper, we survey colormap generation techniques, including the latest advances in the field by grouping these techniques into four classes: procedural methods, user-study based methods, rule-based methods, and data-driven methods; we also include a section on methods that are beyond pure data comprehension purposes. We then classify colormapping techniques into a taxonomy for readers to quickly identify the appropriate techniques they might use. Furthermore, a representative set of visualization techniques that explicitly discuss the use of colormaps is reviewed and classified based on the nature of the data in these applications. Our paper is also intended to be a reference of colormap choices for readers when they are faced with similar data and/or tasks.
Peristomal moisture-associated skin damage in adults with fecal ostomies: a comprehensive review and consensus.
Approximately 1 million persons living in North America have an ostomy, and approximately 70% will experience stomal or peristomal complications. The most prevalent of these complications is peristomal skin damage, and the most common form of peristomal skin damage occurs when the skin is exposed to effluent from the ostomy, resulting in inflammation and erosion of the skin. Despite its prevalence, research-based evidence related to the assessment, prevention, and management of peristomal moisture-associated skin damage is sparse, and current practice is largely based on expert opinion. In order to address current gaps in clinical evidence and knowledge of this condition, a group of WOC and enterostomal therapy nurses with expertise in ostomy care was convened in 2012. This article summarizes results from the panel's literature review and summarizes consensus-based statements outlining best practices for the assessment, prevention, and management of peristomal moisture-associated dermatitis among patients with fecal ostomies.
Multi-Perspective Context Matching for Machine Comprehension
Previous machine comprehension (MC) datasets are either too small to train end-to-end deep learning models, or not difficult enough to evaluate the ability of current MC techniques. The newly released SQuAD dataset alleviates these limitations, and gives us a chance to develop more realistic MC models. Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM) model, which is an end-to-end system that directly predicts the answer beginning and ending points in a passage. Our model first adjusts each word-embedding vector in the passage by multiplying a relevancy weight computed against the question. Then, we encode the question and weighted passage by using bi-directional LSTMs. For each point in the passage, our model matches the context of this point against the encoded question from multiple perspectives and produces a matching vector. Given those matched vectors, we employ another bi-directional LSTM to aggregate all the information and predict the beginning and ending points. Experimental results on the test set of SQuAD show that our model achieves a competitive result on the leaderboard.
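A hedged sketch of the relevancy-weighting step as we read it: each passage word vector is scaled by its best cosine similarity to any question word before the bi-directional LSTM encoding. The exact weighting function in the paper may differ; the names and dimensions here are illustrative.

```python
import numpy as np

def relevancy_weighted_passage(passage_emb, question_emb):
    """Scale each passage word embedding by its maximum cosine similarity
    to any question word, down-weighting words unrelated to the question."""
    p = passage_emb / (np.linalg.norm(passage_emb, axis=1, keepdims=True) + 1e-8)
    q = question_emb / (np.linalg.norm(question_emb, axis=1, keepdims=True) + 1e-8)
    sim = p @ q.T                          # (passage_len, question_len) cosine matrix
    weights = sim.max(axis=1)              # best match per passage word
    return passage_emb * weights[:, None]

passage = np.random.randn(30, 64)          # 30 passage words, 64-dim embeddings
question = np.random.randn(8, 64)          # 8 question words
weighted = relevancy_weighted_passage(passage, question)
```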
Phishing through social bots on Twitter
This work investigates how social bots can phish employees of organizations, and thus endanger corporate network security. Current literature mostly focuses on traditional phishing methods (through e-mail, phone calls, and USB sticks). We address the serious organizational threats and security risks caused by phishing through online social media, specifically through Twitter. This paper first provides a review of current work. It then describes our experimental development, in which we created and deployed eight social bots on Twitter, each associated with one specific subject. For a period of four weeks, each bot published tweets about its subject and followed people with similar interests. In the final two weeks, our experiment showed that 437 unique users could have been phished, 33 of which visited our website through the network of an organization. Without revealing any sensitive or real data, the paper analyses some findings of this experiment and addresses further plans for research in this area.
Fingerprint Recognition Using Minutia Score Matching
The popular biometric used to authenticate a person is the fingerprint, which is unique and permanent throughout a person’s life. Minutia matching is widely used for fingerprint recognition; minutiae can be classified as ridge endings and ridge bifurcations. In this paper we propose Fingerprint Recognition using the Minutia Score Matching method (FRMSM). For fingerprint thinning, the Block Filter is used, which scans the image at the boundary to preserve the quality of the image, and the minutiae are extracted from the thinned image. The false matching ratio is better compared to the existing algorithm.
Intraoperative heparin in addition to postoperative low-molecular-weight heparin for thromboprophylaxis in total knee replacement.
The administration of heparin during operation has been reported to enhance the efficacy of thromboprophylaxis in patients undergoing total hip replacement. We have performed a small pilot study in which intraoperative doses of heparin were given in addition to the usual postoperative thromboprophylaxis with enoxaparin in 32 patients undergoing total knee replacement. The primary endpoint was deep-vein thrombosis (DVT) as demonstrated by bilateral venography on 6 +/- 2 days after operation. Sixteen patients developed DVT; in two the thrombosis was proximal as well as distal and in one the occurrence was bilateral. There was one major haemorrhage. These results are similar to those obtained with the use of postoperative thromboprophylaxis with enoxaparin alone. They do not provide support for the initiation of a larger randomised trial of this approach to management.
Solutions Strategies for Die Shift Problem in Wafer Level Compression Molding
Die shift problem that arises during the wafer molding process in embedded micro wafer level package fabrication was systematically analyzed and solution strategies were developed. A methodology to measure die shift was developed and applied to create maps of die shift on an 8 inch wafer. A total of 256 dies were embedded in an 8 inch mold compound wafer using compression molding. Thermal and cure shrinkages of mold compound are determined to be the primary reasons for die shift in wafer molding. Die shift value increases as the distance from the center of the wafer increases. Pre-compensation of die shift during pick and place is demonstrated as an effective method to control die shift. Applying the pre-compensation method, 99% of dies can be achieved to have die shift values of less than 40 μm. Usage of a carrier wafer during wafer molding reduces the maximum die shift in a wafer from 633 μm to 79 μm. Die area/package area ratio has a strong influence on the die shift values. Die area/package area ratios of 0.81, 0.49, and 0.25 lead to maximum die shift values of 26, 76, and 97 μm, respectively. Wafer molding using low coefficient of thermal expansion (7 × 10^-6/°C) and low cure shrinkage (0.094%) mold compounds is demonstrated to yield a maximum die shift value of 28 μm over the whole 8 inch wafer area.
Generalized Support Vector Machines
By setting apart the two functions of a support vector machine: separation of points by a nonlinear surface in the original space of patterns, and maximizing the distance between separating planes in a higher dimensional space, we are able to define indefinite, possibly discontinuous, kernels, not necessarily inner product ones, that generate highly nonlinear separating surfaces. Maximizing the distance between the separating planes in the higher dimensional space is surrogated by support vector suppression, which is achieved by minimizing any desired norm of support vector multipliers. The norm may be one induced by the separation kernel if it happens to be positive definite, or a Euclidean or a polyhedral norm. The latter norm leads to a linear program whereas the former norms lead to convex quadratic programs, all with an arbitrary separation kernel. A standard support vector machine can be recovered by using the same kernel for separation and support vector suppression. On a simple test example, all models perform equally well when a positive definite kernel is used. When a negative definite kernel is used, we are unable to solve the nonconvex quadratic program associated with a conventional support vector machine, while all other proposed models remain convex and easily generate a surface that separates all given points.
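The construction the abstract describes can be written compactly. A hedged sketch of the generalized program (our paraphrase, with $u$ the support vector multipliers, $f$ a convex function encoding the chosen norm, $D$ the diagonal matrix of $\pm 1$ labels, $K(A,A^{\top})$ an arbitrary separation kernel, $\gamma$ the threshold, $y$ the slack variables, and $\nu>0$ a trade-off parameter):

```latex
\min_{u,\,\gamma,\,y}\; f(u) + \nu\, e^{\top} y
\quad \text{subject to} \quad
D\bigl(K(A, A^{\top})\, D u - e\gamma\bigr) + y \ge e, \qquad y \ge 0 .
```

Choosing $f(u)=\tfrac{1}{2}u^{\top}u$ gives a convex quadratic program, while a polyhedral norm such as $f(u)=\lVert u\rVert_{1}$ gives a linear program, matching the cases mentioned in the abstract.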
Predicting target vessel revascularization in older patients undergoing percutaneous coronary intervention in the drug-eluting stent era.
BACKGROUND The contemporary need for repeat revascularization in older patients after percutaneous coronary intervention (PCI) has not been well studied. Understanding repeat revascularization risk in this population may inform treatment decisions. METHODS We analyzed patients ≥65 years old undergoing native-vessel PCI of de novo lesions from 2005 to 2009 discharged alive using linked CathPCI Registry and Medicare data. Repeat PCIs within 1 year of index procedure were identified by claims data and linked back to CathPCI Registry to identify target vessel revascularization (TVR). Surgical revascularization and PCIs not back linked to CathPCI Registry were excluded from main analyses but included in sensitivity analyses. Independent predictors of TVR after drug-eluting stent (DES) or bare-metal stent (BMS) implantation were identified by multivariable logistic regression. RESULTS Among 343,173 PCI procedures, DES was used in 76.5% (n = 262,496). One-year TVR ranged from 3.3% (overall) to 7.1% (sensitivity analysis). Precatheterization and additional procedure-related TVR risk models were developed in BMS (c-indices 0.54, 0.60) and DES (c-indices 0.57, 0.60) populations. Models were well calibrated and performed similarly in important patient subgroups (female, diabetic, and older [≥75 years]). The use of DES reduced predicted TVR rates in high-risk older patients by 35.5% relative to BMS (from 6.2% to 4.0%). Among low-risk patients, the number needed to treat with DES to prevent 1 TVR was 63-112; among high-risk patients, this dropped to 28-46. CONCLUSIONS In contemporary clinical practice, native-vessel TVR among older patients occurs infrequently. Our prediction model identifies patients at low versus high TVR risk and may inform clinical decision making.
Simulation and evaluation of urban bus-networks using a multiagent approach
Evolution of public road transportation systems requires analysis and planning tools to improve service quality. A wide range of road transportation simulation tools exist, with a variety of applications in planning, training and demonstration. However, few simulation models take into account traveler behaviors and vehicle operation specific to public transportation. We present in this paper a bus network simulation tool which includes these specificities and allows a bus network to be analyzed and evaluated at diverse space and time scales. We adopt a multiagent approach to describe the global system operation as behaviors of numerous autonomous entities such as buses and travelers.
The effects of consuming a high protein diet (4.4 g/kg/d) on body composition in resistance-trained individuals
BACKGROUND The consumption of dietary protein is important for resistance-trained individuals. It has been posited that intakes of 1.4 to 2.0 g/kg/day are needed for physically active individuals. Thus, the purpose of this investigation was to determine the effects of a very high protein diet (4.4 g/kg/d) on body composition in resistance-trained men and women. METHODS Thirty healthy resistance-trained individuals participated in this study (mean ± SD; age: 24.1 ± 5.6 yr; height: 171.4 ± 8.8 cm; weight: 73.3 ± 11.5 kg). Subjects were randomly assigned to one of the following groups: Control (CON) or high protein (HP). The CON group was instructed to maintain the same training and dietary habits over the course of the 8 week study. The HP group was instructed to consume 4.4 grams of protein per kg body weight daily. They were also instructed to maintain the same training and dietary habits (e.g. maintain the same fat and carbohydrate intake). Body composition (Bod Pod®), training volume (i.e. volume load), and food intake were determined at baseline and over the 8 week treatment period. RESULTS The HP group consumed significantly more protein and calories pre vs post (p < 0.05). Furthermore, the HP group consumed significantly more protein and calories than the CON (p < 0.05). The HP group consumed on average 307 ± 69 grams of protein compared to 138 ± 42 in the CON. When expressed per unit body weight, the HP group consumed 4.4 ± 0.8 g/kg/d of protein versus 1.8 ± 0.4 g/kg/d in the CON. There were no changes in training volume for either group. Moreover, there were no significant changes over time or between groups for body weight, fat mass, fat free mass, or percent body fat. CONCLUSIONS Consuming 5.5 times the recommended daily allowance of protein has no effect on body composition in resistance-trained individuals who otherwise maintain the same training regimen. This is the first interventional study to demonstrate that consuming a hypercaloric high protein diet does not result in an increase in body fat.
GPU-based parallel collision detection for fast motion planning
We present parallel algorithms to accelerate collision queries for sample-based motion planning. Our approach is designed for current many-core GPUs and exploits data-parallelism and multithreaded capabilities. In order to take advantage of high numbers of cores, we present a clustering scheme and collision-packet traversal to perform efficient collision queries on multiple configurations simultaneously. Furthermore, we present a hierarchical traversal scheme that performs workload balancing for high parallel efficiency. We have implemented our algorithms on commodity NVIDIA GPUs using CUDA and can perform 500,000 collision queries per second on our benchmarks, which is 10X faster than prior GPU-based techniques. Moreover, we can compute collision-free paths for rigid and articulated models in less than 100 milliseconds for many benchmarks, almost 50-100X faster than current CPU-based PRM planners.
Graph analytics using vertica relational database
Graph analytics is becoming increasingly popular, with a number of new applications and systems developed in the past few years. In this paper, we study Vertica relational database as a platform for graph analytics. We show that vertex-centric graph analysis can be translated to SQL queries, typically involving table scans and joins, and that modern column-oriented databases are very well suited to running such queries. Furthermore, we show how developers can trade memory footprint for significantly reduced I/O costs in Vertica. We present an experimental evaluation of the Vertica relational database system on a variety of graph analytics, including iterative analysis, a combination of graph and relational analyses, and more complex 1-hop neighborhood graph analytics, showing that it is competitive to two popular vertex-centric graph analytics systems, namely Giraph and GraphLab.
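To illustrate the kind of translation the paper relies on, here is a toy example of ours (using SQLite from Python rather than Vertica) that expresses one PageRank iteration purely as table scans and joins; the schema, damping factor, and data are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE edges (src INTEGER, dst INTEGER);
    CREATE TABLE ranks (node INTEGER PRIMARY KEY, pr REAL);
    INSERT INTO edges VALUES (1,2),(1,3),(2,3),(3,1),(4,3);
    INSERT INTO ranks VALUES (1,0.25),(2,0.25),(3,0.25),(4,0.25);
""")

one_iteration = """
WITH outdeg AS (
    SELECT src, COUNT(*) AS d FROM edges GROUP BY src
),
contrib AS (                      -- each vertex sends pr/outdegree along its edges
    SELECT e.dst AS node, r.pr / o.d AS c
    FROM edges e
    JOIN ranks r ON r.node = e.src
    JOIN outdeg o ON o.src = e.src
)
SELECT r.node,
       0.15 / (SELECT COUNT(*) FROM ranks)
       + 0.85 * COALESCE(SUM(c.c), 0.0) AS new_pr
FROM ranks r LEFT JOIN contrib c ON c.node = r.node
GROUP BY r.node;
"""
for node, new_pr in cur.execute(one_iteration):
    print(node, round(new_pr, 4))
```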
Globalization, Urbanization and Nutritional Change in the Developing World
Urbanization and globalization may enhance access to non traditional foods as a result of changing prices and production practices, as well as trade and marketing practices. These forces have influenced dietary patterns throughout the developing world. Longitudinal case study data from China indicate that consumption patterns closely reflect changes in availability, and that potentially obesogenic dietary patterns are emerging, with especially large changes in rural areas with high levels of urban infrastructure and resources. Recent data on women from 36 developing countries illustrate that these dietary shifts may have implications for overweight/obesity in urban and rural settings. These data emphasize the importance of developing country policies that include preventive measures to minimize further adverse shifts in diet and activity, and risk of continued rises in overweight.
An Improved Pulse Width Modulation Method for Chopper-Cell-Based Modular Multilevel Converters
In this paper, an improved pulse width modulation (PWM) method for chopper-cell (or half-bridge) based modular multilevel converters (MMCs) is proposed. This method can generate an output voltage with maximally 2N+1 levels (where N is the number of submodules in the upper or lower arm of the MMC), which is as great as that of the carrier-phase-shifted PWM (CPSPWM) method. However, no phase-shifted carrier is needed. Compared with the existing submodule unified pulse width modulation (SUPWM) method, the level number of the output voltage is almost doubled and the height of the step in the staircase voltage is reduced by 50%. Meanwhile, the equivalent switching frequency in the output voltage is twice that of the conventional SUPWM method. All these features lead to much reduced harmonic content in the output voltage. What is more, the voltages of the submodule capacitors can be well balanced without any closed-loop voltage balancing controllers, which are mandatory in CPSPWM schemes. Simulation and experimental results on an MMC-based inverter show the validity of the proposed method.
High-fidelity joint drive system by torque feedback control using high precision linear encoder
When robots cooperate with humans, it is necessary for robots to move safely upon sudden impact. Joint torque sensing is vital for robots to realize safe behavior and enhance physical performance. Firstly, this paper describes a new torque sensor with linear encoders which demonstrates electromagnetic noise immunity and is unaffected by temperature changes. Secondly, we propose a friction compensation method using a disturbance observer to improve the positioning accuracy. In addition, we describe a torque feedback control method which scales down the motor inertia and enhances the joint flexibility. Experimental results of the proposed controller are presented.
Genetic algorithms and VRP : the behaviour of a crossover operator
In the paper, we investigate the crossover operators for a vehicle routing problem where only feasible solutions are taken into account. New crossover operators are proposed that are based on the common sequence in the parent solutions. Random insertion heuristic is used as a reconstruction method in a crossover operator to preserve stochastic characteristics of the genetic algorithm. The genetic algorithm together with the new crossover operators can be applied to different VRP problems or other problems that can be expressed as a graph and depend on a sequence of elements. The proposed crossover operators are compared with other crossovers that deal with feasible solutions and insertion heuristics.
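A hedged sketch of the general idea, not the paper's exact operators: the offspring preserves arcs (customer successions) common to both parent routes and rebuilds the rest of the route by randomly inserting the remaining customers, in the spirit of the random insertion heuristic mentioned above. All function and variable names are ours.

```python
import random

def common_sequence_crossover(parent_a, parent_b, rng=random.Random(0)):
    """Offspring keeps arcs shared by both parent routes and re-inserts
    the remaining customers at random positions (toy illustration)."""
    succ_b = {parent_b[i]: parent_b[i + 1] for i in range(len(parent_b) - 1)}
    kept = []
    for i, c in enumerate(parent_a[:-1]):
        if succ_b.get(c) == parent_a[i + 1]:        # arc present in both parents
            if not kept or kept[-1] != c:
                kept.append(c)
            kept.append(parent_a[i + 1])
    kept = list(dict.fromkeys(kept))                # dedupe, keep order
    remaining = [c for c in parent_a if c not in kept]
    rng.shuffle(remaining)
    child = kept[:]
    for c in remaining:                             # random insertion of the rest
        child.insert(rng.randrange(len(child) + 1), c)
    return child

child = common_sequence_crossover([1, 2, 3, 4, 5, 6], [2, 3, 1, 4, 5, 6])
print(child)   # a valid permutation of the customers
```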
Transforming Cooling Optimization for Green Data Center via Deep Reinforcement Learning
Cooling systems play a critical role in a modern data center (DC). Developing an optimal control policy for a DC cooling system is a challenging task. The prevailing approaches often rely on approximate system models that are built upon knowledge of mechanical cooling and electrical and thermal management, which are difficult to design and may lead to suboptimal or unstable performance. In this paper, we propose utilizing the large amount of monitoring data in a DC to optimize the control policy. To do so, we cast the cooling control policy design into an energy cost minimization problem with temperature constraints, and tap it into the emerging deep reinforcement learning (DRL) framework. Specifically, we propose an end-to-end cooling control algorithm (CCA) that is based on the actor-critic framework and an off-policy offline version of the deep deterministic policy gradient (DDPG) algorithm. In the proposed CCA, an evaluation network is trained to predict the energy cost, penalized by the cooling status of the DC room, and a policy network is trained to predict optimized control settings when given the current load and weather information. The proposed algorithm is evaluated on the EnergyPlus simulation platform and on a real data trace collected from the National Super Computing Centre (NSCC) of Singapore. Our results show that the proposed CCA can achieve about 11% cooling cost saving on the simulation platform compared with a manually configured baseline control algorithm. In the trace-based study, we propose a de-underestimation (DUE) validation mechanism, as we cannot directly test the algorithm on a real DC. Even though with DUE the results are conservative, we can still achieve about 15% cooling energy saving on the NSCC data trace if we set the inlet temperature threshold at 26.6 degrees Celsius.
The First International Workshop on Management and Economics of Software Product Lines (MESPUL07)
The first international workshop on management and economics of software product lines will bring together researchers and practitioners from academia, industry and governments to report and discuss the challenges and opportunities of adopting and managing software product lines from managerial, organizational, and economics point of view.
Physiological and proteomic approaches to address the active role of ozone in kiwifruit post-harvest ripening
Post-harvest ozone application has recently been shown to inhibit the onset of senescence symptoms on fleshy fruit and vegetables; however, the exact mechanism of action is yet unknown. To characterize the impact of ozone on the post-harvest performance of kiwifruit (Actinidia deliciosa cv. 'Hayward'), fruits were cold stored (0 °C, 95% relative humidity) in a commercial ethylene-free room for 1, 3, or 5 months in the absence (control) or presence of ozone (0.3 μl l^-1) and subsequently were allowed to ripen at a higher temperature (20 °C), herein defined as the shelf-life period, for up to 12 days. Ozone blocked ethylene production, delayed ripening, and stimulated antioxidant and anti-radical activities of fruits. Proteomic analysis using 1D-SDS-PAGE and mass spectrometry identified 102 kiwifruit proteins during ripening, which are mainly involved in energy, protein metabolism, defence, and cell structure. Ripening induced protein carbonylation in kiwifruit but this effect was depressed by ozone. A set of candidate kiwifruit proteins that are sensitive to carbonylation was also discovered. Overall, the present data indicate that ozone improved kiwifruit post-harvest behaviour, thus providing a first step towards understanding the active role of this molecule in fruit ripening.
Optimizing R VM: Allocation Removal and Path Length Reduction via Interpreter-level Specialization
The performance of R, a popular data analysis language, was never properly understood. Some claimed their R codes ran as efficiently as any native code; others quoted orders of magnitude slowdown of R codes with respect to equivalent C implementations. We found both claims to be true, depending on how an R code is written. This paper introduces a first classification of R programming styles into Type I (looping over data), Type II (vector programming), and Type III (glue codes). The most serious overheads of R are mostly manifested in Type I R codes, whereas many Type III R codes can be quite fast. This paper focuses on improving the performance of Type I R codes. We propose the ORBIT VM, an extension of the GNU R VM, to perform aggressive removal of allocated objects and reduction of instruction path lengths in the GNU R VM via profile-driven specialization techniques. The ORBIT VM is fully compatible with the R language and is purely based on interpreted execution. It is a specialization JIT and runtime focusing on data representation specialization and operation specialization. For our benchmarks of Type I R codes, ORBIT is able to achieve an average of 3.5X speedup over the current release of the GNU R VM and outperforms most other R optimization projects that are currently available.
State-based decoding of hand and finger kinematics using neuronal ensemble and LFP activity during dexterous reach-to-grasp movements.
The performance of brain-machine interfaces (BMIs) that continuously control upper limb neuroprostheses may benefit from distinguishing periods of posture and movement so as to prevent inappropriate movement of the prosthesis. Few studies, however, have investigated how decoding behavioral states and detecting the transitions between posture and movement could be used autonomously to trigger a kinematic decoder. We recorded simultaneous neuronal ensemble and local field potential (LFP) activity from microelectrode arrays in primary motor cortex (M1) and dorsal (PMd) and ventral (PMv) premotor areas of two male rhesus monkeys performing a center-out reach-and-grasp task, while upper limb kinematics were tracked with a motion capture system with markers on the dorsal aspect of the forearm, hand, and fingers. A state decoder was trained to distinguish four behavioral states (baseline, reaction, movement, hold), while a kinematic decoder was trained to continuously decode hand end point position and 18 joint angles of the wrist and fingers. LFP amplitude most accurately predicted transition into the reaction (62%) and movement (73%) states, while spikes most accurately decoded arm, hand, and finger kinematics during movement. Using an LFP-based state decoder to trigger a spike-based kinematic decoder [r = 0.72, root mean squared error (RMSE) = 0.15] significantly improved decoding of reach-to-grasp movements from baseline to final hold, compared with either a spike-based state decoder combined with a spike-based kinematic decoder (r = 0.70, RMSE = 0.17) or a spike-based kinematic decoder alone (r = 0.67, RMSE = 0.17). Combining LFP-based state decoding with spike-based kinematic decoding may be a valuable step toward the realization of BMI control of a multifingered neuroprosthesis performing dexterous manipulation.
Fast Exact Inference with a Factored Model for Natural Language Parsing
We present a novel generative model for natural language tree structures in which semantic (lexical dependency) and syntactic structures are scored with separate models. This factorization provides conceptual simplicity, straightforward opportunities for separately improving the component models, and a level of performance already close to that of similar, non-factored models. Most importantly, unlike other modern parsing models, the factored model admits an extremely effective A* parsing algorithm, which makes efficient, exact inference feasible.
Regular polygon detection
This paper describes a new robust regular polygon detector. The regular polygon transform is posed as a mixture of regular polygons in a five dimensional space. Given the edge structure of an image, we derive the a posteriori probability for a mixture of regular polygons, and thus the probability density function for the appearance of a mixture of regular polygons. Likely regular polygons can be isolated quickly by discretising and collapsing the search space into three dimensions. The remaining dimensions may be efficiently recovered subsequently using maximum likelihood at the locations of the most likely polygons in the subspace. This leads to an efficient algorithm. Also the a posteriori formulation facilitates inclusion of additional a priori information leading to real-time application to road sign detection. The use of gradient information also reduces noise compared to existing approaches such as the generalised Hough transform. Results are presented for images with noise to show stability. The detector is also applied to two separate applications: real-time road sign detection for on-line driver assistance; and feature detection, recovering stable features in rectilinear environments.
The association between academic engagement and achievement in health sciences students
BACKGROUND Educational institutions play an important role in encouraging student engagement, so it is necessary to know how engaged students are at university and whether this factor is involved in student success. The aim of this study was to explore the association between academic engagement and achievement. METHODS Cross-sectional study. The sample consisted of 304 Health Sciences students, who were asked to fill out an online questionnaire. Academic achievement was calculated using three types of measurement. RESULTS Positive correlations were found in all cases. Grade point average was the academic measure most strongly associated with the engagement dimensions, and this association differed between male and female students. The independent variables explained between 18.9 and 23.9% of the variance (p < 0.05) in the population of university students analyzed. CONCLUSIONS Engagement has been shown to be one of the many factors that are positively involved in the academic achievement of college students.
PipeTron series - Robots for pipe inspection
Pipes are present in most of the infrastructure around us - in refineries, chemical plants, and power plants, not to mention sewer, gas, and water distribution networks. Inspection of these pipes is extremely important, as failures may result in catastrophic accidents with loss of lives. However, inspection of small pipes (from 3 to 6 inches) is usually neglected or performed only partially due to the lack of satisfactory tools. This paper introduces a new series of robots named PipeTron, developed especially for the inspection of pipes in refineries and power plants. The mobility concept and design of each version will be described, followed by results of field deployments and considerations for future improvements.
Reservoir Computing: Quo Vadis?
Reservoir Computing (RC) is an umbrella term for adaptive computational paradigms that rely on an excitable dynamical system, also called the "reservoir." The paradigms have been shown to be particularly promising for temporal signal processing. RC was also explored as a potential candidate for emerging nanoscale architectures. In this article we reflect on the current state of RC and muse about its future. In particular, we propose a set of open problems that we think need to be addressed in order to make RC more mainstream.
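As a concrete illustration of the paradigm, here is a minimal echo state network, the best-known RC instance; the sizes, scalings, and toy task below are arbitrary choices, not taken from the article. A fixed random recurrent reservoir is driven by the input, and only a linear readout is trained.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res = 1, 200
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # scale spectral radius below 1 (echo state property)

    def run_reservoir(inputs):
        """Drive the fixed random reservoir and collect its states."""
        x = np.zeros(n_res)
        states = []
        for u in inputs:
            x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
            states.append(x.copy())
        return np.array(states)

    # Only the linear readout is trained (ridge regression); the reservoir stays fixed.
    u_train = np.sin(np.linspace(0, 20, 500))
    X = run_reservoir(u_train[:-1])
    y = u_train[1:]                              # one-step-ahead prediction target
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
    y_pred = X @ W_out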
A Tool to Assess the Comfort of Wearable Computers
Wearable computer comfort can be affected by numerous factors, making its assessment based on one value with one scale inappropriate. This paper presents a tool that measures wearable comfort across six dimensions: emotion, attachment, harm, perceived change, movement, and anxiety. The dimensions for these comfort rating scales were specifically developed for wearable equipment assessment by applying multidimensional scaling to a comfort term association matrix developed using the results of groupings of wearable computer comfort terms. Testing the scales on four different types of wearable computer showed that the scales can be used to highlight differences in comfort between different types of technology for different aspects of comfort. An intraclass correlation of .872 suggested that the scales were used with a high level of reliability. A second study showed that modifications made to a wearable computer resulted in improvements in comfort, although they were not significant (p > .05). A potential application for this research is as an aid to designers and researchers for assessing the wearability, in terms of comfort, of wearable computer devices and to determine the effectiveness of any modifications made to the design of a wearable device.
F-Score Driven Max Margin Neural Network for Named Entity Recognition in Chinese Social Media
We focus on named entity recognition (NER) for Chinese social media. With massive unlabeled text and a quite limited labeled corpus, we propose a semi-supervised learning model based on a BLSTM neural network. To take advantage of traditional NER methods such as CRF, we combine transition probabilities with deep learning in our model. To bridge the gap between label accuracy and the F-score of NER, we construct a model that can be trained directly on F-score. Considering the instability of the F-score driven method and the meaningful information provided by label accuracy, we propose an integrated method that trains on both F-score and label accuracy. Our integrated model yields a substantial improvement over the previous state-of-the-art result.
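A minimal sketch of one way to train on F-score alongside label accuracy, assuming token-level binary entity indicators; the soft-F1 surrogate and blending weight below are illustrative choices, not the paper's max-margin formulation.

    import torch

    def soft_f1_loss(probs, targets, eps=1e-8):
        """Differentiable surrogate for (1 - F1) over entity tokens."""
        tp = (probs * targets).sum()
        fp = (probs * (1 - targets)).sum()
        fn = ((1 - probs) * targets).sum()
        f1 = 2 * tp / (2 * tp + fp + fn + eps)
        return 1 - f1

    def integrated_loss(probs, targets, alpha=0.5):
        # Blend the F-score surrogate with token-level cross-entropy
        # (a stand-in for "label accuracy"), rather than using either alone.
        t = targets.float()
        ce = torch.nn.functional.binary_cross_entropy(probs, t)
        return alpha * soft_f1_loss(probs, t) + (1 - alpha) * ce

    # Example: probabilities from a (hypothetical) BLSTM tagger head
    probs = torch.tensor([0.9, 0.2, 0.7, 0.1])
    gold = torch.tensor([1, 0, 1, 0])
    loss = integrated_loss(probs, gold)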
Drosophila sex combs as a model of evolutionary innovations.
The diversity of animal and plant forms is shaped by nested evolutionary innovations. Understanding the genetic and molecular changes responsible for these innovations is therefore one of the key goals of evolutionary biology. From the genetic point of view, the origin of novel traits implies the origin of new regulatory pathways to control their development. To understand how these new pathways are assembled in the course of evolution, we need model systems that combine relatively recent innovations with a powerful set of genetic and molecular tools. One such model is provided by the Drosophila sex comb-a male-specific morphological structure that evolved in a relatively small lineage related to the model species D. melanogaster. Our extensive knowledge of sex comb development in D. melanogaster provides the basis for investigating the genetic changes responsible for sex comb origin and diversification. At the same time, sex combs can change on microevolutionary timescales and differ spectacularly among closely related species, providing opportunities for direct genetic analysis and for integrating developmental and population-genetic approaches. Sex comb evolution is associated with the origin of novel interactions between Hox and sex determination genes. Activity of the sex determination pathway was brought under the control of the Hox code to become segment-specific, while Hox gene expression became sexually dimorphic. At the same time, both Hox and sex determination genes were integrated into the intrasegmental spatial patterning network, and acquired new joint downstream targets. Phylogenetic analysis shows that similar sex comb morphologies evolved independently in different lineages. Convergent evolution at the phenotypic level reflects convergent changes in the expression of Hox and sex determination genes, involving both independent gains and losses of regulatory interactions. However, the downstream cell-differentiation programs have diverged between species, and in some lineages, similar adult morphologies are produced by different morphogenetic mechanisms. These features make the sex comb an excellent model for examining not only the genetic changes responsible for its evolution, but also the cellular processes that translate DNA sequence changes into morphological diversity. The origin and diversification of sex combs provides insights into the roles of modularity, cooption, and regulatory changes in evolutionary innovations, and can serve as a model for understanding the origin of the more drastic novelties that define higher order taxa.
Switched by input: Power efficient structure for RRAM-based convolutional neural network
Convolutional Neural Networks (CNNs) are a powerful technique widely used in computer vision, but they demand far more computation and memory than traditional solutions. The emerging metal-oxide resistive random-access memory (RRAM) and RRAM crossbars have shown great potential for neuromorphic applications with high energy efficiency. However, the interfaces between analog RRAM crossbars and digital peripheral functions, namely analog-to-digital converters (ADCs) and digital-to-analog converters (DACs), consume most of the area and energy of an RRAM-based CNN design due to the large amount of intermediate data in CNNs. In this paper, we propose an energy-efficient structure for RRAM-based CNNs. Based on an analysis of the data distribution, a quantization method is proposed that reduces the intermediate data to 1 bit and eliminates the DACs. An energy-efficient structure that uses the input data as selection signals is proposed to reduce the ADC cost of merging results from multiple crossbars. Experimental results show that the proposed method and structure save 80% of the area and more than 95% of the energy while maintaining the same or comparable CNN classification accuracy on MNIST.
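A minimal sketch of the input-binarization idea, under the assumption that simple thresholding stands in for the paper's distribution-based quantization: reducing intermediate activations to 1 bit means each crossbar input line is either driven or not, so input-side DACs can be dropped.

    import numpy as np

    def binarize_activations(feature_map, threshold=None):
        """Quantize intermediate CNN activations to {0, 1}.

        With 1-bit inputs, a crossbar row is simply driven or left idle, so the
        input-side digital-to-analog conversion becomes unnecessary.
        """
        if threshold is None:
            threshold = np.median(feature_map)   # data-distribution-derived cut point (assumed)
        return (feature_map > threshold).astype(np.uint8)

    # Example: a ReLU output from some convolutional layer
    acts = np.maximum(np.random.randn(8, 8, 32), 0)
    bits = binarize_activations(acts)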
Use of platelet-rich fibrin membrane following treatment of gingival recession: a randomized clinical trial.
This 6-month randomized controlled clinical study primarily aimed to compare the results achieved by the use of a platelet-rich fibrin (PRF) membrane or connective tissue graft (CTG) in the treatment of gingival recession and to evaluate the clinical impact of PRF on early wound healing and subjective patient discomfort. Use of a PRF membrane in gingival recession treatment provided acceptable clinical results, followed by enhanced wound healing and decreased subjective patient discomfort compared to CTG-treated gingival recessions. No difference could be found between PRF and CTG procedures in gingival recession therapy, except for a greater gain in keratinized tissue width obtained in the CTG group and enhanced wound healing associated with the PRF group.
Bringing the cloud to the edge
Edge services become increasingly important as the Internet transforms into an Internet of Things (IoT). Edge services require bounded latency, bandwidth reduction between the edge and the core, service resiliency with graceful degradation, and access to resources visible only inside the NATed and secured edge networks. While the data center based cloud excels at providing general purpose computation/storage at scale, it is not suitable for edge services. We present a new model for cloud computing, which we call the Edge Cloud, that addresses edge computing specific issues by augmenting the traditional data center cloud model with service nodes placed at the network edges. We describe the architecture of the Edge Cloud and its implementation as an overlay hybrid cloud using the industry standard OpenStack cloud management framework. We demonstrate the advantages garnered by two new classes of applications enabled by the Edge Cloud - a highly accurate indoor localization that saves on latency, and a scalable and resilient video monitoring that saves on bandwidth.