title | abstract |
---|---|
Automatic Exam Seating & Teacher Duty Allocation System | Examinations are the most crucial part of any educational system. They are intended to measure students' knowledge, skills and aptitude. At any institute, a great deal of manual effort is required to plan and arrange examinations, including making the seating arrangement for students as well as the supervision duty chart for invigilators. Many institutes perform this task manually using Excel sheets, which results in excessive wastage of time and manpower. Automating the entire system can solve the stated problem efficiently, saving a lot of time. This paper presents an automatic exam seating allocation system. It works in two modules: Students Seating Arrangement (SSA) and Supervision Duties Allocation (SDA). It assigns the classrooms and the duties to the teachers in any institution. Input-output data are obtained from the real system, in which the seating arrangement and supervision duties are worked out manually by the organizers. The results obtained using the real system and these two modules are compared. The application shows that the modules are highly efficient, low-cost, and can be widely used in various colleges and universities. |
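The abstract stops short of the allocation logic itself; a minimal sketch of what a two-module allocator of this shape could look like (round-robin seating plus a rotating duty roster; all names, rules and data below are assumptions for illustration, not the paper's method):

```python
# Hypothetical sketch of the two modules described above (SSA + SDA).
# The paper's actual rules are not given in the abstract; this is a plain
# round-robin seating fill and a load-balanced invigilation rota.
from collections import defaultdict
from itertools import cycle

def seat_students(students, rooms):
    """SSA: fill rooms seat by seat; students is a list of roll numbers,
    rooms maps room name -> capacity."""
    plan = defaultdict(list)
    it = iter(students)
    for room, capacity in rooms.items():
        for _ in range(capacity):
            try:
                plan[room].append(next(it))
            except StopIteration:
                return dict(plan)
    return dict(plan)

def assign_duties(teachers, rooms, sessions):
    """SDA: rotate invigilators over room/session slots so duty counts
    stay balanced."""
    pool = cycle(teachers)
    return [(session, room, next(pool))
            for session in sessions for room in rooms]

students = [f"S{i:03d}" for i in range(1, 61)]
rooms = {"R101": 30, "R102": 30}
print(seat_students(students, rooms)["R101"][:3])
print(assign_duties(["Ada", "Bob", "Cy"], rooms, ["Mon-AM", "Mon-PM"]))
```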
A Novel Symmetric Double-Slot Structure for Antipodal Vivaldi Antenna to Lower Cross-Polarization Level | An antipodal Vivaldi antenna (AVA) with a novel symmetric two-layer double-slot structure is proposed. When excited with equal amplitude and opposite phase, the two slots have the sum vector of their E-field vectors parallel to the antenna's plane, which is consistent with the E-field vector in the slot of a balanced AVA with a three-layer structure. Compared with a typical AVA of the same size, the proposed antenna has better impedance characteristics because of the amelioration introduced by the coupling between the two slots, as well as more symmetric radiation patterns and a remarkably lowered cross-polarization level at the endfire direction. To validate the analysis, a UWB balun based on the double-sided parallel stripline is designed to realize the excitation, and a sample of the proposed antenna is fabricated. The measured results reveal that the proposed antenna has an operating frequency range from 2.8 to 15 GHz, in which the cross-polarization level is less than −24.8 dB. Besides, the group delay of two face-to-face samples has a variation of less than 0.62 ns, which demonstrates the ability of the novel structure to transfer pulse signals with high fidelity. The simple two-layer structure, together with the improvement in both impedance and radiation characteristics, makes the proposed antenna highly desirable for UWB applications. |
Examining the Critical Success Factors of Mobile Website Adoption | Purpose – The purpose of this research is to examine the critical success factors of mobile web site adoption. Design/methodology/approach – Based on the valid responses collected from a questionnaire survey, the structural equation modelling technique was employed to examine the research model. Findings – The results indicate that system quality is the main factor affecting perceived ease of use, whereas information quality is the main factor affecting perceived usefulness. Service quality has significant effects on trust and perceived ease of use. Perceived usefulness, perceived ease of use and trust determine user satisfaction. Practical implications – Mobile service providers need to improve the system quality, information quality and service quality of mobile web sites to enhance user satisfaction. Originality/value – Previous research has mainly focused on e-commerce web site success and seldom examined the factors affecting mobile web site success. This research fills the gap. The research draws on information systems success theory, the technology acceptance model and trust theory as the theoretical bases. |
Individual differences in children's materialism: the role of peer relations. | Associations between materialism and peer relations are likely to exist in elementary school children but have not been studied previously. The first two studies introduce a new Perceived Peer Group Pressures (PPGP) Scale suitable for this age group, demonstrating that perceived pressure regarding peer culture (norms for behavioral, attitudinal, and material characteristics) can be reliably measured and that it is connected to children's responses to hypothetical peer pressure vignettes. Studies 3 and 4 evaluate the main theoretical model of associations between peer relations and materialism. Study 3 supports the hypothesis that peer rejection is related to higher perceived peer culture pressure, which in turn is associated with greater materialism. Study 4 confirms that the endorsement of social motives for materialism mediates the relationship between perceived peer pressure and materialism. |
A graph-theory algorithm for rapid protein side-chain prediction. | Fast and accurate side-chain conformation prediction is important for homology modeling, ab initio protein structure prediction, and protein design applications. Many methods have been presented, although only a few computer programs are publicly available. The SCWRL program is one such method and is widely used because of its speed, accuracy, and ease of use. A new algorithm for SCWRL is presented that uses results from graph theory to solve the combinatorial problem encountered in the side-chain prediction problem. In this method, side chains are represented as vertices in an undirected graph. Any two residues that have rotamers with nonzero interaction energies are considered to have an edge in the graph. The resulting graph can be partitioned into connected subgraphs with no edges between them. These subgraphs can in turn be broken into biconnected components, which are graphs that cannot be disconnected by removal of a single vertex. The combinatorial problem is reduced to finding the minimum energy of these small biconnected components and combining the results to identify the global minimum energy conformation. This algorithm is able to complete predictions on a set of 180 proteins with 34342 side chains in <7 min of computer time. The total χ1 and χ1+2 dihedral angle accuracies are 82.6% and 73.7% using a simple energy function based on the backbone-dependent rotamer library and a linear repulsive steric energy. The new algorithm will allow for use of SCWRL in more demanding applications such as sequence design and ab initio structure prediction, as well as addition of a more complex energy function and conformational flexibility, leading to increased accuracy. |
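A toy illustration of the decomposition idea (residues as vertices, edges between residues with interacting rotamers, independent minimization per connected subgraph); the energies and rotamer counts are random stand-ins, and the paper's further split into biconnected components is only noted in the comments:

```python
# Toy version of the graph decomposition used for side-chain prediction:
# residues are vertices, an edge joins residues whose rotamers interact,
# and each connected component is minimized independently. (SCWRL goes
# further and splits components into biconnected components; brute force
# over whole components is enough to show the idea.)
import itertools
import random
import networkx as nx

random.seed(0)
n_residues, n_rotamers = 8, 3
# pair_energy[(i, j)][(ri, rj)] = interaction energy; sparse at random
pair_energy = {}
for i, j in itertools.combinations(range(n_residues), 2):
    if random.random() < 0.3:
        pair_energy[(i, j)] = {(ri, rj): random.uniform(-1, 1)
                               for ri in range(n_rotamers)
                               for rj in range(n_rotamers)}

G = nx.Graph()
G.add_nodes_from(range(n_residues))
G.add_edges_from(pair_energy)

best = {}
for comp in nx.connected_components(G):
    comp = sorted(comp)
    best_e, best_assign = float("inf"), None
    for assign in itertools.product(range(n_rotamers), repeat=len(comp)):
        choice = dict(zip(comp, assign))
        e = sum(tbl[(choice[i], choice[j])]
                for (i, j), tbl in pair_energy.items()
                if i in choice and j in choice)
        if e < best_e:
            best_e, best_assign = e, choice
    best.update(best_assign)

print("global minimum-energy rotamer assignment:", best)
```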
Distinctive abnormalities of motor axonal strength-duration properties in multifocal motor neuropathy and in motor neurone disease. | The strength-duration function is a classic measure of neural excitability. When studied on peripheral motor axons it reflects the intrinsic nodal membrane properties, and its time-constant (τSD or chronaxie) predominantly depends on non-voltage-gated, resting Na+ inward conductances. We assessed the strength-duration curve of ulnar motor axons in 22 nerves of healthy controls, in 18 nerves of patients with multifocal motor neuropathy with conduction blocks (MMN), and in 19 nerves of patients with motor neurone disease (MND). The compound muscle action potential (CMAP) was smaller in nerves of both groups of patients than in controls (P < 0.05). The rheobasic current (rh50%) [mean ± standard deviation (SD)] was higher in patients with MMN than in controls (13.3 ± 16.3 mA; controls 4.7 ± 1.7 mA, P < 0.05). The τSD was differentially abnormal in the nerves of the two groups of patients: it was prolonged in the nerves of patients with MND for ≥40 years (227.2 ± 34.5 μs; controls 190.9 ± 51.0 μs, P < 0.05), but it was shortened in the nerves of patients with MMN (146.5 ± 55.4 μs; controls 208.6 ± 51.2 μs, P < 0.05) who had not been treated recently with high-dose intravenous immunoglobulin (IVIg). Nerves of patients with recently treated MMN (<6 weeks) who were under the therapeutic effect of IVIg had a normal τSD. Our results suggest that, probably due to an immuno-mediated resting Na+ channel dysfunction, Na+ conductances are reduced in MMN. This abnormality is a function of the time after the last IVIg treatment and also involves the axonal membrane outside the conduction block. Conversely, in MND, possibly owing to the ionic leakage of degenerating membrane, resting Na+ conductances are increased. Measuring the strength-duration curve of the ulnar motor axons might be useful in the differential diagnosis between de novo MMN and MND. |
Efficacy of fluticasone propionate/formoterol fumarate in the treatment of asthma: a pooled analysis. | BACKGROUND
Fluticasone propionate and formoterol fumarate have been combined in a single inhaler (fluticasone/formoterol; flutiform®) for the maintenance treatment of asthma. This pooled analysis assessed the efficacy of fluticasone/formoterol versus fluticasone in patients who previously received inhaled corticosteroids.
METHODS
Data were pooled from five randomised studies in patients with asthma (aged ≥12 years) treated for 8 or 12 weeks with fluticasone/formoterol (100/10, 250/10 or 500/20 μg b.i.d.; n = 528 delivered via pMDI) or fluticasone alone (100, 250 or 500 μg b.i.d.; n = 527).
RESULTS
Fluticasone/formoterol provided significantly greater increases than fluticasone alone in mean morning forced expiratory volume in 1 second (FEV1) from pre-dose at baseline to 2 hours post-dose at study end (least-squares mean [LSM] treatment difference: 0.146 L; p < 0.001) and in pre-dose FEV1 from baseline to study end (LSM treatment difference: 0.048 L; p = 0.043). Compared with fluticasone, fluticasone/formoterol provided greater increases in the percentage of asthma control days (no symptoms, no rescue medication use and no sleep disturbance due to asthma) from baseline to study end (LSM treatment difference: 8.6%; p < 0.001), and was associated with a lower annualised rate of exacerbations (rate ratio: 0.71; p = 0.014).
CONCLUSIONS
In summary, fluticasone/formoterol provides clinically significant improvements in lung function and asthma control measures, with a lower incidence of exacerbations than fluticasone alone. |
Psychiatric morbidity after screening for breast cancer. | One hundred and thirty two women with normal breast screening results were interviewed six months after their attendance at the Edinburgh Breast Screening Clinic. Eight percent of women said screening had made them more anxious about developing breast cancer. Thirty eight percent said they were more aware of the disease since screening but they regarded this as advantageous. Seventy percent of the women were still practising breast self-examination. There was no difference in the psychiatric morbidity of the screened sample when compared with a matched random sample community control group. Neither was there any difference in the General Health Questionnaire case rates before and after screening. Screening does not appear to increase the prevalence of psychiatric morbidity. Twenty nine percent of the interview sample were examining their breasts more than once a month--21% once a week or more. However, these frequent self-examiners did not have a greater prevalence of psychiatric morbidity than their matched controls. |
Steerable interfaces for pervasive computing spaces | This paper introduces a new class of interactive interfaces that can be moved around to appear on ordinary objects and surfaces anywhere in a space. By dynamically adapting the form, function, and location of an interface to suit the context of the user, such steerable interfaces have the potential to offer radically new and powerful styles of interaction in intelligent pervasive computing spaces. We propose defining characteristics of steerable interfaces and present the first steerable interface system that combines projection, gesture recognition, user tracking, environment modeling and geometric reasoning components within a system architecture. Our work suggests that there is great promise and rich potential for further research on steerable interfaces. |
Comparison of Approximate Methods for Handling Hyperparameters | I examine two approximate methods for computational implementation of Bayesian hierarchical models, that is, models that include unknown hyperparameters such as regularization constants and noise levels. In the evidence framework, the model parameters are integrated over, and the resulting evidence is maximized over the hyperparameters. The optimized hyperparameters are used to define a gaussian approximation to the posterior distribution. In the alternative MAP method, the true posterior probability is found by integrating over the hyperparameters. The true posterior is then maximized over the model parameters, and a gaussian approximation is made. The similarities of the two approaches and their relative merits are discussed, and comparisons are made with the ideal hierarchical Bayesian solution. In moderately ill-posed problems, integration over hyperparameters yields a probability distribution with a skew peak, which causes significant biases to arise in the MAP method. In contrast, the evidence framework is shown to introduce negligible predictive error under straightforward conditions. General lessons are drawn concerning inference in many dimensions. |
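For reference, the two procedures compared can be written compactly (standard notation with w the model parameters, α the hyperparameters and D the data; this is a paraphrase of the setup, not an excerpt from the paper):

```latex
% Evidence framework: integrate out w, then maximize the evidence over alpha
\begin{align}
  p(D \mid \alpha) &= \int p(D \mid w)\, p(w \mid \alpha)\, dw,
  & \hat{\alpha} &= \arg\max_{\alpha} p(D \mid \alpha),
\end{align}
% followed by a Gaussian approximation to p(w | D, \hat{\alpha}).
% MAP method: integrate out alpha, then maximize the true posterior over w
\begin{align}
  p(w \mid D) &= \int p(w \mid \alpha, D)\, p(\alpha \mid D)\, d\alpha,
  & \hat{w} &= \arg\max_{w} p(w \mid D),
\end{align}
% again followed by a Gaussian approximation around the mode.
```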
CGRP and migraine: Could PACAP play a role too? | Migraine is a debilitating neurological disorder that affects about 12% of the population. In the past decade, the role of the neuropeptide calcitonin gene-related peptide (CGRP) in migraine has been firmly established by clinical studies. CGRP administration can trigger migraines, and CGRP receptor antagonists ameliorate migraine. In this review, we will describe multifunctional activities of CGRP that could potentially contribute to migraine. These include roles in light aversion, neurogenic inflammation, peripheral and central sensitization of nociceptive pathways, cortical spreading depression, and regulation of nitric oxide production. Yet clearly there will be many other contributing genes that could act in concert with CGRP. One candidate is pituitary adenylate cyclase-activating peptide (PACAP), which shares some of the same actions as CGRP, including the ability to induce migraine in migraineurs and light aversive behavior in rodents. Interestingly, both CGRP and PACAP act on receptors that share an accessory subunit called receptor activity modifying protein-1 (RAMP1). Thus, comparisons between the actions of these two migraine-inducing neuropeptides, CGRP and PACAP, may provide new insights into migraine pathophysiology. |
The Tree-to-Tree Correction Problem | The tree-to-tree correction problem is to determine, for two labeled ordered trees T and T', the distance from T to T' as measured by the minimum cost sequence of edit operations needed to transform T into T'. The edit operations investigated allow changing one node of a tree into another node, deleting one node from a tree, or inserting a node into a tree. An algorithm is presented which solves this problem in time O(V·V'·L²·L'²), where V and V' are the numbers of nodes respectively of T and T', and L and L' are the maximum depths respectively of T and T'. Possible applications are to the problems of measuring the similarity between trees, automatic error recovery and correction for programming languages, and determining the largest common substructure of two trees |
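A memoized version of the underlying forest edit-distance recursion, with unit costs; this naive formulation is far slower than the O(V·V'·L²·L'²) algorithm of the paper but computes the same distance on small trees:

```python
# Naive memoized tree-to-tree edit distance with unit costs.
# Trees are (label, (child, child, ...)) tuples; a forest is a tuple of trees.
# This recursion defines the same distance as the paper's algorithm,
# without its dynamic-programming speedups.
from functools import lru_cache

def size(forest):
    return sum(1 + size(children) for _, children in forest)

@lru_cache(maxsize=None)
def fdist(f1, f2):
    if not f1 and not f2:
        return 0
    if not f1:
        return size(f2)          # insert everything remaining
    if not f2:
        return size(f1)          # delete everything remaining
    (l1, c1), (l2, c2) = f1[-1], f2[-1]
    return min(
        fdist(f1[:-1] + c1, f2) + 1,             # delete rightmost root of f1
        fdist(f1, f2[:-1] + c2) + 1,             # insert rightmost root of f2
        fdist(f1[:-1], f2[:-1]) + fdist(c1, c2)  # match the two rightmost subtrees
        + (0 if l1 == l2 else 1),                # relabel if labels differ
    )

def tree_dist(t1, t2):
    return fdist((t1,), (t2,))

a = ("f", (("d", (("a", ()), ("c", (("b", ()),)))), ("e", ())))
b = ("f", (("c", (("d", (("a", ()), ("b", ()))),)), ("e", ())))
print(tree_dist(a, b))
```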
Returning mail-order goods: analyzing the relationship between the rate of returns and the associated costs | Abstract Mail-ordering, particularly on the internet, has continually grown in importance over the last few years. This trend is expected to continue with no apparent end in sight. Liberal return policies have significantly contributed to this development by strengthening trust in both the individual retailers and the sales channel in general, but they do come at a price. This article is the first to systematically analyze the relation between the rate of returns and the associated costs. A circular model for the sales and returns process reveals a disproportionate relation between the two, which is further amplified once depreciation is considered. The model may serve decision-makers as an easy-to-use tool to systematically evaluate preventive returns management measures such as avoidance and gatekeeping. |
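The disproportionate relation can be illustrated with a back-of-the-envelope version of such a circular sales/returns model (this arithmetic illustrates the general point only; it is not the article's exact model):

```python
# Back-of-the-envelope circular model: to achieve one net (kept) sale with
# return rate r, a retailer ships 1/(1-r) parcels and handles r/(1-r)
# returns, so handling effort grows disproportionately in r.
def shipments_per_net_sale(r):
    return 1.0 / (1.0 - r)

def returns_per_net_sale(r):
    return r / (1.0 - r)

for r in (0.10, 0.25, 0.50, 0.75):
    print(f"return rate {r:.0%}: {shipments_per_net_sale(r):.2f} shipments, "
          f"{returns_per_net_sale(r):.2f} returns per net sale")
```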
GISSI trials in acute myocardial infarction. Rationale, design, and results. | The first Gruppo Italiano per lo Studio della Streptochinasi nell'Infarto (GISSI) study showed striking evidence of the effectiveness and safety of intravenous thrombolytic treatment in acute myocardial infarction (MI). Since publication in The Lancet, the original report has become a reference work for every paper which deals with thrombolysis. In addition to GISSI's scientific value, these studies applied formal research to routine clinical practice outside of referral centers. Nearly all Italian CCUs took part in the GISSI projects, so that the results provide a profile of the patient who seeks care for acute MI in Italy. This wide data base allowed GISSI investigators to look into some relevant clinical events, eg, primary ventricular fibrillation, stroke, and in-hospital reinfarction. The GISSI-2 trial followed the GISSI-1 philosophy. The package of treatments recommended after extensive discussion with all the investigators (beta-blocker, aspirin, nitrates) was widely adopted. Now, only five years after the start of the GISSI-1, the overall mortality of Italian patients with acute MI has decreased from 13.0 percent to about 9 percent, and the number of patients with acute MI arriving in hospital within 1 h of the onset of symptoms has increased 50 percent. It is the wish of the GISSI investigators that this approach to treating acute MI will be regarded and acknowledged as their major contribution to the problem. |
Semi-supervised Learning with Regularized Laplacian | We study a semi-supervised learning method based on the similarity graph and Regularized Laplacian. We give a convenient optimization formulation of the Regularized Laplacian method and establish its various properties. In particular, we show that the kernel of the method can be interpreted in terms of discrete and continuous time random walks and possesses several important properties of proximity measures. Both optimization and linear algebra methods can be used for efficient computation of the classification functions. We demonstrate on numerical examples that the Regularized Laplacian method is competitive with respect to the other state of the art semi-supervised learning methods. Key-words: Semi-supervised learning, Graph-based learning, Regularized Laplacian, Proximity measure, Wikipedia article classification |
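The classifier itself is compact; a minimal numpy sketch of the Regularized Laplacian method (kernel (I + βL)⁻¹ applied to label indicator vectors; the standardization variants discussed in the report are omitted here):

```python
# Minimal Regularized Laplacian semi-supervised classifier:
# F = (I + beta * L)^{-1} Y, then label each node by argmax over columns.
import numpy as np

W = np.array([[0, 1, 1, 0, 0, 0],      # symmetric similarity graph
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W          # combinatorial Laplacian
beta = 1.0

Y = np.zeros((6, 2))                    # one column per class
Y[0, 0] = 1.0                           # node 0 labeled class 0
Y[5, 1] = 1.0                           # node 5 labeled class 1

F = np.linalg.solve(np.eye(6) + beta * L, Y)
print(F.argmax(axis=1))                 # -> predicted labels for all six nodes
```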
Haskell 98: Declarations and Bindings | 4.1 Overview of Types and Classes; 4.2 User-Defined Datatypes; 4.3 Type Classes and Overloading; 4.4 Nested Declarations; 4.5 Static Semantics of Function and Pattern Bindings; 4.6 Kind Inference |
A Comparative Study of Energy Minimization Methods for Markov Random Fields with Smoothness-Based Priors | Among the most exciting advances in early vision has been the development of efficient energy minimization algorithms for pixel-labeling tasks such as depth or texture computation. It has been known for decades that such problems can be elegantly expressed as Markov random fields, yet the resulting energy minimization problems have been widely viewed as intractable. Algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: For example, such methods form the basis for almost all the top-performing stereo methods. However, the trade-offs among different energy minimization algorithms are still not well understood. In this paper, we describe a set of energy minimization benchmarks and use them to compare the solution quality and runtime of several common energy minimization algorithms. We investigate three promising methods-graph cuts, LBP, and tree-reweighted message passing-in addition to the well-known older iterated conditional mode (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, interactive segmentation, and denoising. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods. The benchmarks, code, images, and results are available at http://vision.middlebury.edu/MRF/. |
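Of the methods compared, ICM is simple enough to state in a few lines; a toy binary-denoising version with a unary data term and a Potts smoothness term, updated greedily pixel by pixel:

```python
# Iterated conditional modes (ICM) on a toy binary denoising MRF:
# E(x) = sum_p |x_p - y_p| + lam * sum_{p~q} [x_p != x_q]  (Potts prior).
import numpy as np

rng = np.random.default_rng(1)
truth = np.zeros((20, 20), dtype=int)
truth[5:15, 5:15] = 1
y = np.where(rng.random(truth.shape) < 0.2, 1 - truth, truth)  # noisy input

lam, x = 1.0, y.copy()
for _ in range(10):                       # sweep until (near) convergence
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            costs = []
            for label in (0, 1):
                data = abs(label - y[i, j])
                smooth = sum(lam * (label != x[a, b])
                             for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                             if 0 <= a < x.shape[0] and 0 <= b < x.shape[1])
                costs.append(data + smooth)
            x[i, j] = int(np.argmin(costs))
print("pixels still wrong:", int((x != truth).sum()))
```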
MORPH: a longitudinal image database of normal adult age-progression | This paper details MORPH, a longitudinal face database developed for researchers investigating all facets of adult age-progression, e.g. face modeling, photo-realistic animation, face recognition, etc. This database contributes to several active research areas, most notably face recognition, by providing: the largest set of publicly available longitudinal images; longitudinal spans from a few months to over twenty years; and the inclusion of key physical parameters that affect aging appearance. The direct contribution of this data corpus for face recognition is highlighted in the evaluation of a standard face recognition algorithm, which illustrates the impact that age-progression has on recognition rates. Assessment of the efficacy of this algorithm is evaluated against the variables of gender and racial origin. This work further concludes that the problem of age-progression on face recognition (FR) is not unique to the algorithm used in this work. |
Improved inception-residual convolutional neural network for object recognition | Machine learning and computer vision have driven many of the greatest advances in the modeling of Deep Convolutional Neural Networks (DCNNs). Nowadays, most of the research has been focused on improving recognition accuracy with better DCNN models and learning approaches. The recurrent convolutional approach is not applied very much, other than in a few DCNN architectures. On the other hand, Inception-v4 and Residual networks have promptly become popular among the computer vision community. In this paper, we introduce a new DCNN model called the Inception Recurrent Residual Convolutional Neural Network (IRRCNN), which utilizes the power of the Recurrent Convolutional Neural Network (RCNN), the Inception network, and the Residual network. This approach improves the recognition accuracy of the Inception-residual network with the same number of network parameters. In addition, this proposed architecture generalizes the Inception network, the RCNN, and the Residual network with significantly improved training accuracy. We have empirically evaluated the performance of the IRRCNN model on different benchmarks including CIFAR-10, CIFAR-100, TinyImageNet-200, and CU3D-100. The experimental results show higher recognition accuracy against most of the popular DCNN models including the RCNN. We have also investigated the performance of the IRRCNN approach against the Equivalent Inception Network (EIN) and the Equivalent Inception Residual Network (EIRN) counterpart on the CIFAR-100 dataset. We report around 4.53, 4.49 and 3.56% improvement in classification accuracy compared with the RCNN, EIN, and EIRN on the CIFAR-100 dataset respectively. Furthermore, the experiment has been conducted on the TinyImageNet-200 and CU3D-100 datasets where the IRRCNN provides better testing accuracy compared to the Inception Recurrent CNN, the EIN, the EIRN, Inception-v3, and Wide Residual Networks. |
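The recurrent-convolution ingredient can be sketched independently of the full IRRCNN; a minimal PyTorch recurrent convolutional layer with a residual shortcut (unrolling depth and channel counts are illustrative choices, not the paper's exact architecture):

```python
# Minimal recurrent convolutional layer (RCL) with a residual shortcut,
# two of the ingredients the IRRCNN combines with inception-style branches.
# Unrolling depth and channel counts are illustrative, not the paper's.
import torch
import torch.nn as nn

class RecurrentConvBlock(nn.Module):
    def __init__(self, channels, steps=3):
        super().__init__()
        self.ff = nn.Conv2d(channels, channels, 3, padding=1)   # feed-forward
        self.rec = nn.Conv2d(channels, channels, 3, padding=1)  # recurrent
        self.bn = nn.BatchNorm2d(channels)
        self.steps = steps

    def forward(self, x):
        state = torch.relu(self.bn(self.ff(x)))
        for _ in range(self.steps):          # unrolled recurrent updates
            state = torch.relu(self.bn(self.ff(x) + self.rec(state)))
        return x + state                     # residual shortcut

block = RecurrentConvBlock(16)
out = block(torch.randn(2, 16, 32, 32))
print(out.shape)                             # torch.Size([2, 16, 32, 32])
```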
A Mnemonic for Pharmacists to Ensure Optimal Monitoring and Safety of Total Parenteral Nutrition: I AM FULL. | OBJECTIVE
To present a guideline-derived mnemonic that provides a systematic monitoring process to increase pharmacists' confidence in total parenteral nutrition (TPN) monitoring and improve safety and efficacy of TPN use.
DATA SOURCES
The American Society for Parenteral and Enteral Nutrition (ASPEN) guidelines were reviewed. Additional resources included a literature search of PubMed (1980 to May 2016) using the search terms: total parenteral nutrition, mnemonic, indications, allergy, macronutrients, micronutrients, fluid, comorbidities, labs, peripheral line, and central line. Articles (English-language only) were evaluated for content, and additional references were identified from a review of literature citations.
STUDY SELECTION AND DATA EXTRACTION
All English-language observational studies, review articles, meta-analyses, guidelines, and randomized trials assessing monitoring parameters of TPN were evaluated.
DATA SYNTHESIS
The ASPEN guidelines were referenced to develop key components of the mnemonic. Review articles, observational trials, meta-analyses, and randomized trials were reviewed in cases where guidelines did not adequately address these components.
CONCLUSIONS
A guideline-derived mnemonic was developed to systematically and safely manage TPN therapy. The mnemonic combines 7 essential components of TPN use and monitoring: Indications, Allergies, Macro/Micro nutrients, Fluid, Underlying comorbidities, Labs, and Line type. |
The maternal brain and its plasticity in humans | This article is part of a Special Issue "Parental Care". Early mother-infant relationships play important roles in infants' optimal development. New mothers undergo neurobiological changes that support developing mother-infant relationships regardless of great individual differences in those relationships. In this article, we review the neural plasticity in human mothers' brains based on functional magnetic resonance imaging (fMRI) studies. First, we review the neural circuits that are involved in establishing and maintaining mother-infant relationships. Second, we discuss early postpartum factors (e.g., birth and feeding methods, hormones, and parental sensitivity) that are associated with individual differences in maternal brain neuroplasticity. Third, we discuss abnormal changes in the maternal brain related to psychopathology (i.e., postpartum depression, posttraumatic stress disorder, substance abuse) and potential brain remodeling associated with interventions. Last, we highlight potentially important future research directions to better understand normative changes in the maternal brain and risks for abnormal changes that may disrupt early mother-infant relationships. |
Arterial stiffness in hypertensives in relation to expression of angiopoietin-1 and 2 genes in peripheral monocytes | Angiopoietins (Angs) are important angiogenic and endothelial cell growth factors with many functions, including influence on the vascular wall. Pulse-wave velocity (pwv) is an independent marker of cardiovascular adverse outcome in hypertensives, although all the pathophysiological mechanisms that affect it have not yet been determined. We investigated the relationship between arterial stiffness and Ang-1 and Ang-2 gene expression in the peripheral blood monocytes of hypertensive patients. We studied 53 patients who had untreated grade-1 or grade-2 essential hypertension and no indications of other organic heart disease. Carotid–femoral (c–f) and carotid–radial (c–r) artery waveforms were measured and pwv was determined. The monocytes were isolated using anti-CD14+ antibodies and mRNAs were estimated by real-time quantitative reverse transcription–PCR. Ang-1 gene expression was strongly correlated with both c–f-pwv (r=0.952, P<0.001) and c–r-pwv (r=0.898, P<0.001). Similarly, Ang-2 gene expression was significantly correlated with both c–f-pwv (r=0.471, P=0.002) and c–r-pwv (r=0.437, P=0.003). Our data provide important evidence that Ang-1 and Ang-2 gene expression levels in peripheral monocytes are closely related with pwv in patients with essential hypertension. This positive correlation may suggest a link between angiogenesis and arterial stiffness in those patients. |
Increased bone formation in osteocalcin-deficient mice | VERTEBRATES constantly remodel bone. The resorption of preexisting bone by osteoclasts and the formation of new bone by osteoblasts is strictly coordinated to maintain bone mass within defined limits. A few molecular determinants of bone remodelling that affect osteoclast activity1–3 have been characterized, but the molecular determinants of osteoblast activity are unknown. To investigate the role of osteocalcin, the most abundant osteoblast-specific non-collagenous protein4, we have generated osteocalcin-deficient mice. These mice develop a phenotype marked by higher bone mass and bones of improved functional quality. Histomorphometric studies done before and after ovariectomy showed that the absence of osteocalcin leads to an increase in bone formation without impairing bone resorption. To our knowledge, this study provides the first evidence that osteocalcin is a determinant of bone formation. |
Guidelines are only half of the story: accessibility problems encountered by blind users on the web | This paper describes an empirical study of the problems encountered by 32 blind users on the Web. Task-based user evaluations were undertaken on 16 websites, yielding 1383 instances of user problems. The results showed that only 50.4% of the problems encountered by users were covered by Success Criteria in the Web Content Accessibility Guidelines 2.0 (WCAG 2.0). For user problems that were covered by WCAG 2.0, 16.7% of websites implemented techniques recommended in WCAG 2.0 but the techniques did not solve the problems. These results show that few developers are implementing the current version of WCAG, and even when the guidelines are implemented on websites there is little indication that people with disabilities will encounter fewer problems. The paper closes by discussing the implications of this study for future research and practice. In particular, it discusses the need to move away from a problem-based approach towards a design principle approach for web accessibility. |
Financing renewable energy: who is financing what and why it matters | Accelerating innovation in renewable energy (RE) requires not just more finance, but finance servicing the entire innovation landscape. Given that finance is not ‘neutral’, more information is required on the quality of finance that meets technology and innovation stage-specific financing needs for the commercialization of RE technologies. We investigate the relationship between different financial actors with investment in different RE technologies. We construct a new deal-level dataset of global RE asset finance from 2004 to 2014 based on Bloomberg New Energy Finance data, that distinguishes 10 investor types (e.g. private banks, public banks, utilities) and 11 RE technologies into which they invest. We also construct a heuristic investment risk measure that varies with technology, time and country of investment. We find that particular investor types have preferences for particular risk levels, and hence particular types of RE. Some investor types invested into far riskier portfolios than others, and financing of individual high-risk technologies depended on investment by specific investor types. After the 2008 financial crisis, state-owned or controlled companies and banks emerged as the high-risk taking locomotives of RE asset finance. We use these preliminary results to formulate new questions for future RE policy, and encourage further research. |
Prediction of flank wear by using back propagation neural network modeling when cutting hardened H-13 steel with chamfered and honed CBN tools | Productivity and quality in the finish turning of hardened steels can be improved by utilizing predicted performance of the cutting tools. This paper combines a predictive machining approach with neural network modeling of tool flank wear in order to estimate the performance of chamfered and honed Cubic Boron Nitride (CBN) tools for a variety of cutting conditions. Experimental work has been performed in orthogonal cutting of hardened H-13 type tool steel using CBN tools. At the selected cutting conditions the forces have been measured using a piezoelectric dynamometer and data acquisition system. Simultaneously, flank wear at the cutting edge has been monitored by using a tool maker's microscope. The experimental force and wear data were utilized to train the developed simulation environment based on back propagation neural network modeling. A trained neural network system was used in predicting flank wear for various cutting conditions. The developed prediction system was found to be capable of accurate tool wear classification for the range over which it had been trained. |
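A present-day equivalent of such a back-propagation wear model takes a few lines with scikit-learn; the inputs and targets below are synthetic stand-ins for the measured forces and flank wear (the paper's own data and network topology are not reproduced here):

```python
# Sketch of the wear-prediction setup: a small back-propagation network
# mapping cutting conditions and measured forces to flank wear.
# All data here are synthetic placeholders, not the paper's measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# features: [cutting speed m/min, feed mm/rev, cutting force N, thrust force N]
X = rng.uniform([100, 0.05, 200, 150], [300, 0.25, 800, 600], size=(200, 4))
wear = 1e-4 * X[:, 0] + 0.3 * X[:, 1] + 2e-4 * X[:, 2] + rng.normal(0, 0.01, 200)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), wear)

new_condition = [[250, 0.15, 500, 400]]
print("predicted flank wear (mm):", model.predict(scaler.transform(new_condition)))
```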
Are the Drivers and Role of Online Trust the Same for All Web Sites and Consumers? A Large-Scale Exploratory Empirical Study | The authors develop a conceptual model that links Web site and consumer characteristics, online trust, and behavioral intent. They estimate this model on data from 6831 consumers across 25 sites from eight Web site categories, using structural equation analysis with a priori and post hoc segmentation. The results show that the influences of the determinants of online trust are different across site categories and consumers. Privacy and order fulfillment are the most influential determinants of trust for sites in which both information risk and involvement are high, such as travel sites. Navigation is strongest for information-intensive sites, such as sports, portal, and community sites. Brand strength is critical for high-involvement categories, such as automobile and financial services sites. Online trust partially mediates the relationships between Web site and consumer characteristics and behavioral intent, and this mediation is strongest (weakest) for sites oriented toward infrequently (frequently) purchased, high-involvement items, such as computers (financial services). |
A Note on the Asymptotic Probabilities of Existential Second-Order Minimal Gödel Sentences with Equality | The minimal Gödel class is the class of first-order prenex sentences whose quantifier prefix consists of two universal quantifiers followed by just one existential quantifier. We prove that asymptotic probabilities of existential second-order sentences, whose first-order part is in the minimal Gödel class, form a dense subset of the unit interval. |
Learning in Mean Field Games: the Fictitious Play | Mean Field Game systems describe equilibrium configurations in differential games with infinitely many infinitesimal interacting agents. We introduce a learning procedure (similar to the Fictitious Play) for these games and show its convergence when the Mean Field Game is potential. |
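The learning rule has a one-line description: at each stage, agents best-respond to the time average of the previously observed population distributions. Schematically (a paraphrase of the procedure, not the paper's exact statement):

```latex
% Fictitious play for a mean field game: m_k is the population distribution
% induced at stage k, and \bar m_n its running average.
\begin{align}
  m_{n+1} &= \mathrm{BR}(\bar m_n)
    && \text{(best response of a representative agent to } \bar m_n\text{)} \\
  \bar m_{n+1} &= \frac{n}{n+1}\,\bar m_n + \frac{1}{n+1}\, m_{n+1}
    && \text{(update of the empirical average)}
\end{align}
% Convergence of \bar m_n is established when the Mean Field Game is potential.
```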
Pythagorean Membership Grades in Multicriteria Decision Making | We first look at some nonstandard fuzzy sets, intuitionistic, and interval-valued fuzzy sets. We note both of these allow a degree of commitment of less than one in assigning membership. We look at the formulation of the negation for these sets and show its expression in terms of the standard complement with respect to the degree of commitment. We then consider the complement operation. We describe its properties and look at alternative definitions of complement operations. We then focus on the Pythagorean complement. Using this complement, we introduce a class of nonstandard Pythagorean fuzzy subsets whose membership grades are pairs, (a, b) satisfying the requirement a2 + b2 ≤ 1. We introduce a variety of aggregation operations for these Pythagorean fuzzy subsets. We then look at multicriteria decision making in the case where the criteria satisfaction are expressed using Pythagorean membership grades. The issue of having to choose a best alternative in multicriteria decision making leads us to consider the problem of comparing Pythagorean membership grades. |
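A few lines make the membership constraint and the Pythagorean complement concrete; the score function used for the final comparison below is one common choice from the follow-on literature, shown purely as an illustration:

```python
# Pythagorean membership grades: pairs (a, b) with a^2 + b^2 <= 1, where a
# is support for membership and b support against. The complement swaps the
# two components. The score a^2 - b^2 is one common ranking rule (an
# assumption here, not necessarily the paper's comparison method).
def is_pythagorean(a, b):
    return a * a + b * b <= 1.0

def complement(grade):
    a, b = grade
    return (b, a)

def score(grade):
    a, b = grade
    return a * a - b * b

g = (0.8, 0.5)                       # invalid as intuitionistic (0.8 + 0.5 > 1)
print(is_pythagorean(*g))            # but valid Pythagorean: 0.64 + 0.25 <= 1
print(complement(g), score(g))

# ranking two alternatives whose criteria satisfaction is given as grades
alts = {"A": (0.8, 0.5), "B": (0.7, 0.3)}
print(max(alts, key=lambda k: score(alts[k])))
```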
Effect of constraint-induced therapy on upper limb functions: a randomized control trial. | AIMS
Children with congenital hemiparesis have unilateral upper extremity involvement, limiting their ability in unilateral or bilateral manual tasks, thus negatively influencing their participation in daily activities. Constraint-induced movement therapy (CIMT) has been shown to be promising for improving upper-limb functions in children with cerebral palsy. Clinical assessments may be needed to quantify and qualify changes in children's performance following its application.
METHODS
This study investigated the effectiveness of a child-friendly form of CIMT to improve upper extremity functional performance. Thirty congenitally hemiparetic children aged 4-8 years were randomly assigned to receive either a CIMT program (study group) or a conventional non-structured therapy program (control group). The programs were applied for both groups for six hours daily, five days weekly for four successive weeks. The Pediatric Arm Function Test, Quality of Upper Extremity Skills Test, and isokinetic muscular performances of shoulder flexors, extensors, and abductors expressed as peak torque were used to evaluate immediate and long-lasting efficacy of CIMT.
RESULTS
The results showed improvement in the involved upper extremity performances in different evaluated tasks immediately post-CIMT program application compared with the control group. These improvements continued three months later.
CONCLUSION
Pediatric CIMT with shaping produced considerable and sustained improvement in the involved upper extremity movements and functions in children with congenital hemiparesis. |
Sex, clinical presentation, and outcome in patients with acute coronary syndromes. Global Use of Strategies to Open Occluded Coronary Arteries in Acute Coronary Syndromes IIb Investigators. | BACKGROUND
Studies have reported that women with acute myocardial infarction have in-hospital and long-term outcomes that are worse than those of men.
METHODS
To assess sex-based differences in presentation and outcome, we examined data from the Global Use of Strategies to Open Occluded Coronary Arteries in Acute Coronary Syndromes IIb study, which enrolled 12,142 patients (3662 women and 8480 men) with acute coronary syndromes, including infarction with ST-segment elevation, infarction with no ST-segment elevation, and unstable angina.
RESULTS
Overall, the women were older than the men, and had significantly higher rates of diabetes, hypertension, and prior congestive heart failure. They had significantly lower rates of prior myocardial infarction and were less likely ever to have smoked. A smaller percentage of women than men had infarction with ST elevation (27.2 percent vs. 37.0 percent, P<0.001), and of the patients who presented with no ST elevation (those with myocardial infarction or unstable angina), fewer women than men had myocardial infarction (36.6 percent vs. 47.6 percent, P<0.001). Women had more complications than men during hospitalization and a higher mortality rate at 30 days (6.0 percent vs. 4.0 percent, P<0.001) but had similar rates of reinfarction at 30 days after presentation. However, there was a significant interaction between sex and the type of coronary syndrome at presentation (P=0.001). After stratification according to coronary syndrome and adjustment for base-line variables, there was a nonsignificant trend toward an increased risk of death or reinfarction among women as compared with men only in the group with infarction and ST elevation (odds ratio, 1.27; 95 percent confidence interval, 0.98 to 1.63; P=0.07). Among patients with unstable angina, female sex was associated with an independent protective effect (odds ratio for infarction or death, 0.65; 95 percent confidence interval, 0.49 to 0.87; P=0.003).
CONCLUSIONS
Women and men with acute coronary syndromes had different clinical profiles, presentation, and outcomes. These differences could not be entirely accounted for by differences in base-line characteristics and may reflect pathophysiologic and anatomical differences between men and women. |
Reproducible Experiments for Comparing Apache Flink and Apache Spark on Public Clouds | Big data processing is a hot topic in today's computer science world. There is a significant demand for analysing big data to satisfy the many requirements of many industries. The emergence of the Kappa architecture created a strong requirement for a highly capable and efficient data processing engine. Therefore, data processing engines such as Apache Flink and Apache Spark emerged in the open source world to fulfill that efficient and high-performing data processing requirement. There are many available benchmarks to evaluate those two data processing engines, but complex deployment patterns and dependencies make those benchmarks very difficult to reproduce on our own. This project has two main goals: to make a few of the community-accepted benchmarks easily reproducible on the cloud, and to validate the performance claimed by those studies. Keywords– Data Processing, Apache Flink, Apache Spark, Batch processing, Stream processing, Reproducible experiments, Cloud |
Torque and slip behavior of single-phase induction motors driven from variable frequency supplies | Adjustable-frequency drives have not been widely used with single-phase induction motors. Computations show that, unlike the three-phase induction motor, the single-phase induction motor's slip is not constant with changes in frequency at a constant load torque. A constant volts-per-hertz law is found to provide nearly rated torque over a portion of the upper speed range, but the maximum available torque decays rapidly below about 50% of the base frequency. The behavior of the single-phase induction motor under variable-frequency operation is studied, providing insights to possible scalar control laws for optimizing performance at all speeds. Several possible open-loop control strategies are examined using computer simulations on a 0.5-hp single-phase induction motor. Experimental results show excellent agreement with the analysis and simulation. These experiments provide proof that an adjustable-frequency power supply can be used for speed control of the single-phase induction motor if the motor's unique operating characteristics are accounted for. |
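The constant volts-per-hertz law, together with the low-speed boost that the torque decay below half base frequency suggests, is easy to state; a sketch of such a scalar control law (ratings and boost level are illustrative numbers, not the paper's):

```python
# Constant volts-per-hertz command with a low-frequency voltage boost,
# the kind of scalar law motivated by torque decay below ~50% of base
# frequency. Ratings and boost level are illustrative placeholders.
V_BASE, F_BASE = 230.0, 60.0      # rated voltage (V) and frequency (Hz)
V_BOOST = 15.0                    # extra voltage at zero speed (assumed)

def vhz_command(f):
    """Voltage command for stator frequency f (Hz)."""
    if f >= F_BASE:
        return V_BASE                              # above base speed: clamp
    return V_BOOST + (V_BASE - V_BOOST) * f / F_BASE

for f in (6, 15, 30, 45, 60):
    print(f"{f:>2} Hz -> {vhz_command(f):6.1f} V")
```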
Automatic Music Genre Classification Based on Modulation Spectral Analysis of Spectral and Cepstral Features | In this paper, we will propose an automatic music genre classification approach based on long-term modulation spectral analysis of spectral (OSC and MPEG-7 NASE) as well as cepstral (MFCC) features. Modulation spectral analysis of every feature value will generate a corresponding modulation spectrum and all the modulation spectra can be collected to form a modulation spectrogram which exhibits the time-varying or rhythmic information of music signals. Each modulation spectrum is then decomposed into several logarithmically-spaced modulation subbands. The modulation spectral contrast (MSC) and modulation spectral valley (MSV) are then computed from each modulation subband. Effective and compact features are generated from statistical aggregations of the MSCs and MSVs of all modulation subbands. An information fusion approach which integrates both feature level fusion method and decision level combination method is employed to improve the classification accuracy. Experiments conducted on two different music datasets have shown that our proposed approach can achieve higher classification accuracy than other approaches with the same experimental setup. |
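The modulation-spectral step can be sketched in numpy: take each feature's trajectory over frames, Fourier-transform it to obtain a modulation spectrum, then summarize log-spaced modulation subbands by their peak-minus-valley contrast (MSC-like) and valley (MSV-like) values. The subband edges and exact contrast definition below are assumptions, not the paper's settings:

```python
# Sketch of modulation spectral analysis of a feature trajectory matrix
# (n_frames x n_features, e.g. per-frame MFCC/OSC values). Subband edges
# and the peak/valley summary are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
frame_rate = 100.0                         # feature frames per second
traj = rng.standard_normal((512, 20))      # stand-in for MFCC trajectories

spec = np.abs(np.fft.rfft(traj - traj.mean(axis=0), axis=0))  # modulation spectrum
freqs = np.fft.rfftfreq(traj.shape[0], d=1.0 / frame_rate)    # modulation freqs (Hz)

edges = [0.5, 1, 2, 4, 8, 16, 32]          # logarithmically spaced subbands
msc, msv = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    band = spec[(freqs >= lo) & (freqs < hi)]
    peak, valley = band.max(axis=0), band.min(axis=0)
    msc.append(peak - valley)              # contrast within the subband
    msv.append(valley)

features = np.concatenate([np.ravel(msc), np.ravel(msv)])
print(features.shape)                      # compact per-clip feature vector
```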
Second Life in higher education: Assessing the potential for and the barriers to deploying virtual worlds in learning and teaching | Second Life (SL) is currently the most mature and popular multi-user virtual world platform being used in education. Through an in-depth examination of SL, this article explores its potential and the barriers that multi-user virtual environments present to educators wanting to use immersive 3-D spaces in their teaching. The context is set by tracing the history of virtual worlds back to early multi-user online computer gaming environments and describing the current trends in the development of 3-D immersive spaces. A typology for virtual worlds is developed and the key features that have made unstructured 3-D spaces so attractive to educators are described. The popularity in use of SL is examined through three critical components of the virtual environment experience: technical, immersive and social. From here, the paper discusses the affordances that SL offers for educational activities and the types of teaching approaches that are being explored by institutions. The work concludes with a critical analysis of the barriers to successful implementation of SL as an educational tool and maps a number of developments that are underway to address these issues across virtual worlds more broadly. Introduction The story of virtual worlds is one that cannot be separated from technological change. As we witness increasing maturity and convergence in broadband, wireless computing, video and audio technologies, we see virtual immersive environments becoming more practical and useable. In this article, I review the present socio-technical environment of virtual worlds, and draw on an analysis of Second Life (SL) to outline the potential for and the barriers to successful implementation of 3-D immersive spaces in education. Virtual worlds have existed in some form since the early 1980s, but their absolute definition remains contested. This reflects the general nature of a term that draws on multiple writings of the virtual and the difficulties in attempting to fix descriptions in an area that is undergoing persistent technological development. The numerous contextual descriptions that have appeared, from the perspectives of writers, academics, industry professionals and the media, have further complicated agreement on a common understanding of virtual worlds. Bell (2008) has approached this problem by suggesting a combined definition based on the work of Bartle (2004), Castronova (2004) and Koster (2004), drawing the work together using key terms that relate to: synchronicity, persistence, network of people, avatar representation and facilitation of the experience by networked computers. But perhaps the most satisfying and simplest insight comes from Schroeder (1996, 2008) who has consistently argued that virtual environments and virtual reality technologies should be defined as: A computer-generated display that allows or compels the user (or users) to have a sense of being present in an environment other than the one they are actually in, and to interact with that environment (Schroeder, 1996, p. 25). In other words, a virtual world provides an experience set within a technological environment that gives the user a strong sense of being there. The multi-user virtual environments (MUVEs) of today share common features that reflect their roots in the gaming worlds of multi-user dungeons and massively multiplayer online games (MMOs), made more popular in recent times through titles such as NeverWinter Nights and World of Warcraft, both based on the Dungeons and Dragons genre of role-playing game. Virtual worlds may appear in different forms yet they possess a number of recurrent features that include: • persistence of the in-world environment • a shared space allowing multiple users to participate simultaneously • virtual embodiment in the form of an avatar (a personisable 3-D representation of the self) • interactions that occur between users and objects in a 3-D environment • an immediacy of action such that interactions occur in real time • similarities to the real world such as topography, movement and physics that provide the illusion of being there. (Smart, Cascio & Paffendof, 2007) These are features compelling enough to attract more than 300 million registered users to spend part of their time within commercial social and gaming virtual worlds (Hays, 2008). From MMOs and MUVEs to SL What marks a significant difference between MUVEs and MMOs is the lack of a predetermined narrative or plot-driven storyline. In the worlds exemplified by SL, there is no natural purpose unless one is created or built. Here, social interaction exists not as a precursor to goal-oriented action but rather, it occurs within an open-ended system that offers a number of freedoms to the player, such as: the creation and ownership of objects; the creation of interpersonal networks; and monetary transactions that occur within a tangible economic structure (Castronova, 2004; Ludlow & Wallace, 2007). It is primarily this open-endedness, combined with the ability to create content and shape the virtual environment in an almost infinite number of ways, which has attracted educators to the possibilities afforded by immersive 3-D spaces. A typology of virtual worlds Within the broad panorama of virtual environments, we can find offerings from both open source projects and proprietary vendors. These include the worlds of OpenSim, Croquet Consortium, ActiveWorlds, Project Wonderland, There, Olive and Twinity. We can identify a number of approaches to platform development and delivery each defined by their perceived target audience. For example, Olive specifically markets itself towards large institutions and enterprise-level productivity. MUVEs, therefore, can be categorised in a number of ways. In the typology shown in Table 1, a number of extant 3-D virtual worlds are grouped by their narrative approach and 3-D representational system. There are several alternative categorisations that have been suggested. Messinger, Stroulia and Lyons (2008) builds their typology on Porter's (2004) original typology of virtual communities where the five key elements of purpose, place, platform, population and profit models are identified. Messenger uses this alternative typology productively to help identify the historic antecedents of virtual worlds, their future applications and topics for future research.
What both these typologies demonstrate is that there is a range of virtual worlds, which offer distinctly different settings in which to site educational interventions. Within the typology outlined in Table 1, concrete educational activity can be identified in all four of the virtual world categories listed. The boundaries between these categories are soft and reflect the flexibility of some virtual worlds to provide more than one form of use. This is particularly true of SL, and has contributed to this platform's high profile in comparison to other contemporary MUVEs. Although often defined as a 3-D social networking space, SL also supports role-playing game communities and some degree of cooperative workflow through the in-world tools and devices that have been built by residents. SL as the platform of choice for education SL represents the most mature of the social virtual world platforms, and the high usage figures compared with other competing platforms reflects this dominance within the educational world. The regular Eduserv virtual worlds survey conducted among UK tertiary educators has identified SL as the most popular educational MUVE: [Table 1: A typology of 3-D virtual worlds (adapted from McKeown, 2007). Categories: flexible narrative (games (MMPORGs) and serious games, e.g. World of Warcraft, NeverWinter Nights); social world (social platforms, 3-D chat rooms and virtual world generators); simulation (simulations or reflections of the 'real'); workspace (3-D realisation of CSCWs).] |
Understanding the Great Gatsby Curve | The Great Gatsby Curve, the observation that for OECD countries, greater cross-sectional income inequality is associated with lower mobility, has become a prominent part of scholarly and policy discussions because of its implications for the relationship between inequality of outcomes and inequality of opportunities. We explore this relationship by focusing on evidence and interpretation of an intertemporal Gatsby Curve for the United States. We consider inequality/mobility relationships that are derived from nonlinearities in the transmission process of income from parents to children and the relationship that is derived from the effects of inequality of socioeconomic segregation, which then affects children. Empirical evidence for the mechanisms we identify is strong. We find modest reduced form evidence and structural evidence of an intertemporal Gatsby Curve for the US as mediated by social influences. |
Packaging a W-Band Integrated Module With an Optimized Flip-Chip Interconnect on an Organic Substrate | This paper, for the first time, presents successful integration of a W-band antenna with an organically flip-chip packaged silicon-germanium (SiGe) low-noise amplifier (LNA). The successful integration requires an optimized flip-chip interconnect. The interconnect performance was optimized by modeling and characterizing the flip-chip transition on a low-loss liquid crystal polymer organic substrate. When the loss of coplanar waveguide (CPW) lines is included, an insertion loss of 0.6 dB per flip-chip-interconnect is measured. If the loss of CPW lines is de-embedded, 0.25 dB of insertion loss is observed. This kind of low-loss flip-chip interconnect is essential for good performance of W-band modules. The module, which we present in this paper, consists of an end-fire Yagi-Uda antenna integrated with an SiGe BiCMOS LNA. The module is 3 mm × 1.9 mm and consumes only 19.2 mW of dc power. We present passive and active E- and H-plane radiation pattern measurements at 87, 90, and 94 GHz. Passive and active antennas both showed a 10-dB bandwidth of 10 GHz. The peak gain of passive and active antennas was 5.2 dBi at 90 GHz and 21.2 dBi at 93 GHz, respectively. The measurements match well with the simulated results. |
Power Management Strategy for Solar Stand-alone Hybrid Energy System | This paper presents a simulation and mathematical model of a stand-alone solar-wind-diesel based hybrid energy system (HES). A power management system is designed for multiple energy resources in a stand-alone hybrid energy system. Both the solar photovoltaic and wind energy conversion systems consist of maximum power point tracking (MPPT), voltage regulation, and basic power electronic interfaces. An additional diesel generator is included to support and improve the reliability of the stand-alone system when renewable energy sources are not available. A power management strategy is introduced to distribute the generated power among resistive load banks. The frequency regulation is developed with a conventional phase locked loop (PLL) system. The power management algorithm was applied in Matlab to verify the results. Keywords—Solar photovoltaic, wind energy, hybrid energy system, power management, frequency and voltage regulation. |
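The management strategy described (renewables first, diesel backup, surplus dumped to resistive load banks) reduces to a simple dispatch rule; a sketch of such a rule (thresholds and ratings are illustrative placeholders):

```python
# Sketch of the dispatch rule implied above: serve the load from PV + wind,
# start the diesel generator on a deficit, and dump any surplus into the
# resistive load banks. Ratings are illustrative placeholders.
DIESEL_RATED_KW = 10.0

def dispatch(pv_kw, wind_kw, load_kw):
    renewable = pv_kw + wind_kw
    if renewable >= load_kw:
        return {"diesel_kw": 0.0, "dump_kw": renewable - load_kw}
    deficit = load_kw - renewable
    return {"diesel_kw": min(deficit, DIESEL_RATED_KW), "dump_kw": 0.0}

print(dispatch(pv_kw=4.0, wind_kw=3.0, load_kw=5.0))   # surplus -> load banks
print(dispatch(pv_kw=1.0, wind_kw=0.5, load_kw=6.0))   # diesel covers deficit
```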
State estimation and path following on curved and flat vertical surfaces with Omniclimber robots: Kinematics and control | Omnidirectional wheels used on the Omniclimber inspection robot and in other robots enable a holonomic drive and good maneuverability. On the other hand, they have poor wheel traction and suffer from vertical and horizontal vibration, which decreases the trajectory following accuracy of the robot. In this study, we address this problem by integrating an accelerometer-based orientation estimation and correction algorithm into the Omniclimber control. Moreover, since the Omniclimber chassis adapts to curved structures, the kinematics of the robot change when moving on a curved surface. We integrated an additional algorithm which corrects the robot's kinematics based on the curvature diameter and the current robot orientation. By integrating these two algorithms we achieved remarkable improvements in the path following accuracy of the Omniclimber on flat and curved structures.
PROCESS MINING WITH SEQUENCE CLUSTERING |
The Internet of Things: Challenges & security issues | Propelled by large-scale advances in wireless technologies, sensing technologies and communication technologies, the transformation of the Internet into an integrated network of things, termed the Internet of Things, is rapidly unfolding. The Internet of Things, enabled by Wireless Sensor Networks (WSN) and RFID sensors, finds a plethora of applications in almost all fields, such as health, education, transportation and agriculture. This paper outlines the idea of the Internet of Things (IoT) and the challenges to its future growth. It also describes the general layered architecture of IoT along with its constituent elements. Further, the paper proposes a secure construction of the IoT architecture by tackling security issues at each layer of the architecture. The paper concludes by mentioning the potential applications of IoT technologies in fields ranging from intelligent transportation to smart homes to e-health care and green agriculture.
Desert Island Column: Marooned on Mars: Mind-Spinning Books for Software Engineers | by William J. Clancey, Chief Scientist, Human-Centered Computing, NASA/Ames Research Center, Moffett Field, CA 94035 (*). (*) On leave from the Institute for Human-Machine Cognition, University of West Florida, Pensacola. To appear in the Automated Software Engineering Journal, published by Kluwer Academic Publishers, in Volume 7, year 2000. I've just arrived on Mars, with 500 days before the next return shuttle. Fortunately, we have email and internet access to Earth (the line is fast, but there's a twenty minute delay on average, almost like time-sharing in the 70s). My powerbook fits fine on my lap, but slouching on the habitat's long couch, I prefer holding a book in my hands, settling down with gleeful anticipation with a warm drink nearby (it's -70C today). So I brought along an armload of books, some for reference, some to read and study again, and others to share with the next generation, as we build our colony. All are mind-spinning, just what we need for opening a new world with new ways of thinking. To start, I brought along Burrough's (1998) Dragonfly: NASA and the Crisis Aboard MIR (New York: HarperCollins Publishers), the story of the Russian-American misadventures on MIR. An exposé with almost embarrassing detail about the inner workings of Johnson Space Center in Houston, this book is best read with the JSC organization chart in hand. Here's the real world of engineering and life in extreme environments. It makes most other accounts of "requirements analysis" appear glib and simplistic. The book vividly portrays the sometimes harrowing experiences of the American astronauts in the web of Russian interpersonal relations and literally in the web of MIR's wiring. Burrough's exposition reveals how handling bureaucratic procedures and bulky facilities is as much a matter of moxie and goodwill as technical capability. Lessons from MIR showed NASA that getting to Mars required a different view of knowledge and improvisation--long-duration missions are not at
Missing data analyses: a hybrid multiple imputation algorithm using Gray System Theory and entropy based on clustering | Researchers and practitioners who use databases usually find knowledge discovery and application development cumbersome due to the issue of missing data. Though some approaches can work with a certain rate of incomplete data, a large portion of them demands high data quality and completeness. Therefore, a great number of strategies have been designed to handle missingness, particularly by way of imputation. Single imputation methods initially succeeded in predicting the missing values for specific types of distributions. Yet, multiple imputation algorithms have remained prevalent because they further promote validity by minimizing bias iteratively and require less prior knowledge of the distributions. This article carefully reviews the state of the art and proposes a hybrid missing data completion method named Multiple Imputation using Gray-system-theory and Entropy based on Clustering (MIGEC). Firstly, the non-missing data instances are separated into several clusters. Then, the imputed value is obtained after multiple calculations by utilizing the information entropy of the proximal category for each incomplete instance in terms of the similarity metric based on Gray System Theory (GST). Experimental results on University of California Irvine (UCI) datasets illustrate the superiority of MIGEC over other current achievements in accuracy for either numeric or categorical attributes under different missing mechanisms. Further discussion on real aerospace datasets shows that MIGEC is also applicable to this specific area, with both more precise inference and faster convergence than other multiple imputation methods in general.
Kinematics Simulation and Analysis of 2-DOF Parallel Manipulator with Highly Redundant Actuation | In this paper, the mathematical model of the posture inverse kinematics is established. According to the structure of the 2-DOF parallel manipulator, a simulation model of the mechanism is built using Matlab/SimMechanics. The kinematics simulation of the parallel manipulator is carried out and confirmed to be correct. With the Virtual Reality Toolbox, the virtual reality of the parallel manipulator is realized, and a motion animation is obtained during the simulation. This indicates that Matlab/Simulink can greatly reduce the designer’s work and provide a powerful and convenient tool for the simulation and analysis of parallel manipulators. Keywords—parallel manipulator; posture inverse kinematics; simulation; SimMechanics; virtual reality.
Improved Laplacian Smoothing of Noisy Surface Meshes | This paper presents a technique for smoothing polygonal surface meshes that avoids the well-known problem of deformation and shrinkage caused by many smoothing methods, such as the Laplacian algorithm. The basic idea is to push the vertices of the smoothed mesh back towards their previous locations. This technique can also be used to smooth unstructured point sets, by reconstructing a surface mesh to which the smoothing technique is applied. The key observation is that a surface mesh which is not necessarily topologically correct, but which can efficiently be reconstructed, is sufficient for that purpose.
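Below is a minimal Python sketch of the push-back idea this abstract describes: a plain Laplacian pass followed by moving each vertex partway back toward its pre-pass location. The mesh representation, the damping factor alpha, and the fixed iteration count are illustrative assumptions, not the paper's exact operator.

```python
import numpy as np

def smooth_with_pushback(V, neighbors, iterations=10, alpha=0.5):
    """Laplacian smoothing with a push-back step that counters shrinkage.

    V         : (n, 3) array of vertex positions
    neighbors : list of index lists; neighbors[i] holds vertex i's adjacent ids
    alpha     : fraction of each smoothing displacement that is pushed back
    """
    P = V.astype(float).copy()
    for _ in range(iterations):
        Q = P.copy()                          # previous locations
        for i, nbrs in enumerate(neighbors):  # plain Laplacian step
            if nbrs:
                P[i] = Q[nbrs].mean(axis=0)
        P += alpha * (Q - P)                  # push back toward previous locations
    return P
```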
Combining Over-Sampling and Under-Sampling Techniques for Imbalance Dataset | An important problem in medical data analysis is the imbalanced dataset. This problem is a cause of diagnostic mistakes, and the results of diagnosis affect the lives of patients: if a doctor fails to diagnose a patient who has a disease, the patient cannot be treated in a timely manner. The problem can be mitigated by adding or removing data to bring the dataset closer to balance and thus improve diagnostic performance. This paper proposes a solution that adjusts an imbalanced dataset by combining the Neighbor Cleaning Rule (NCL) and Synthetic Minority Over-Sampling Technique (SMOTE) techniques. The NCL technique removes sample data that are outliers in the majority class, and the SMOTE technique increases the sample data in the minority class to closely balance the dataset. After that, the balanced medical dataset is classified by the Naive Bayes, SMO and KNN algorithms. The experimental results show that the recall rate is improved for the models created from the balanced dataset.
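As an illustration of the NCL-then-SMOTE pipeline described above, the sketch below uses the imbalanced-learn package, which provides implementations of both techniques; the random dataset and the Naive Bayes classifier stand in for the paper's medical data and are purely placeholders.

```python
import numpy as np
from imblearn.under_sampling import NeighbourhoodCleaningRule
from imblearn.over_sampling import SMOTE
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# placeholder imbalanced dataset: ~10% minority class
X = np.random.rand(1000, 8)
y = (np.random.rand(1000) > 0.9).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: NCL removes noisy/outlier majority-class samples
X_cl, y_cl = NeighbourhoodCleaningRule().fit_resample(X_tr, y_tr)
# Step 2: SMOTE synthesizes minority-class samples toward balance
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_cl, y_cl)

clf = GaussianNB().fit(X_bal, y_bal)
print("minority recall:", recall_score(y_te, clf.predict(X_te)))
```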
An LCD column driver using a switch capacitor DAC | LCD column drivers have traditionally used nonlinear R-string style digital-to-analog converters (DAC). This paper describes an architecture that uses 840 linear charge redistribution 10/12-bit DACs to implement a 420-output column driver. Each DAC performs its conversion in less than 15 μs and draws less than 5 μA. This architecture allows 10-bit independent color control in a 17 mm² die for the LCD television market.
Compassion: an evolutionary analysis and empirical review. | What is compassion? And how did it evolve? In this review, we integrate 3 evolutionary arguments that converge on the hypothesis that compassion evolved as a distinct affective experience whose primary function is to facilitate cooperation and protection of the weak and those who suffer. Our empirical review reveals compassion to have distinct appraisal processes attuned to undeserved suffering; distinct signaling behavior related to caregiving patterns of touch, posture, and vocalization; and a phenomenological experience and physiological response that orients the individual to social approach. This response profile of compassion differs from those of distress, sadness, and love, suggesting that compassion is indeed a distinct emotion. We conclude by considering how compassion shapes moral judgment and action, how it varies across different cultures, and how it may engage specific patterns of neural activation, as well as emerging directions of research. |
Global Structure Optimization of Quadrilateral Meshes | We introduce a fully automatic algorithm which optimizes the high-level structure of a given quadrilateral mesh to achieve a coarser quadrangular base complex. Such a topological optimization is highly desirable, since state-of-the-art quadrangulation techniques lead to meshes which have an appropriate singularity distribution and an anisotropic element alignment, but usually they are still far away from the high-level structure which is typical for carefully designed meshes manually created by specialists and used e.g. in animation or simulation. In this paper we show that the quality of the high-level structure is negatively affected by helical configurations within the quadrilateral mesh. Consequently we present an algorithm which detects helices and is able to remove most of them by applying a novel grid preserving simplification operator (GP-operator) which is guaranteed to maintain an all-quadrilateral mesh. Additionally it preserves the given singularity distribution and in particular does not introduce new singularities. For each helix we construct a directed graph in which cycles through the start vertex encode operations to remove the corresponding helix. Therefore a simple graph search algorithm can be performed iteratively to remove as many helices as possible and thus improve the high-level structure in a greedy fashion. We demonstrate the usefulness of our automatic structure optimization technique by showing several examples with varying complexity.
3D Object Representations for Fine-Grained Categorization | While 3D object representations are being revived in the context of multi-view object class detection and scene understanding, they have not yet attained wide-spread use in fine-grained categorization. State-of-the-art approaches achieve remarkable performance when training data is plentiful, but they are typically tied to flat, 2D representations that model objects as a collection of unconnected views, limiting their ability to generalize across viewpoints. In this paper, we therefore lift two state-of-the-art 2D object representations to 3D, on the level of both local feature appearance and location. In extensive experiments on existing and newly proposed datasets, we show our 3D object representations outperform their state-of-the-art 2D counterparts for fine-grained categorization and demonstrate their efficacy for estimating 3D geometry from images via ultra-wide baseline matching and 3D reconstruction. |
Chinese Preposition Selection for Grammatical Error Diagnosis | Misuse of Chinese prepositions is one of the common word usage errors in grammatical error diagnosis. In this paper, we adopt the Chinese Gigaword corpus and the HSK corpus as the L1 and L2 corpora, respectively. We explore a gated recurrent neural network model (GRU), and an ensemble of the GRU model and a maximum entropy language model (GRU-ME), to select the best preposition from 43 candidates for each test sentence. The experimental results show the advantage of the GRU models over simple RNN and n-gram models. We further analyze the effectiveness of linguistic information such as word boundaries and part-of-speech tags in this task.
Image Enhancement Using Adaptive Filtering | In this paper, we develop an image enhancement algorithm that modifies the local luminance mean of an image and controls the local contrast as a function of the local luminance mean. The algorithm first separates an image into its lows (low-pass filtered) and highs (high-pass filtered) components. The lows component then controls the amplitude of the highs component to increase the local contrast. The lows component is then subjected to a nonlinearity to modify the local luminance mean of the image and is combined with the processed highs component. The performance of this algorithm when applied to enhance typical undegraded images, images with large shaded areas, and images degraded by cloud cover is illustrated by way of examples. Keywords—Image enhancement, Low-pass filter, Contrast, Nonlinearity, Adaptive filter.
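A minimal sketch of the lows/highs decomposition this abstract describes, assuming a Gaussian low-pass filter; the specific gain schedule and the power-law nonlinearity below are illustrative choices, not the paper's exact functions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(image, sigma=5.0, k=2.0, gamma=0.7):
    """Boost local contrast as a function of the local luminance mean."""
    img = image.astype(float) / 255.0
    lows = gaussian_filter(img, sigma)      # local luminance mean (lows)
    highs = img - lows                      # local contrast component (highs)
    gain = 1.0 + k * (1.0 - lows)           # lows controls the highs amplitude
    lows_mod = lows ** gamma                # nonlinearity modifies the local mean
    return np.clip(lows_mod + gain * highs, 0.0, 1.0)
```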
TALL score for prediction of oncological outcomes after radical nephroureterectomy for high-grade upper tract urothelial carcinoma | We created a prognostic tool for the prediction of oncologic outcomes after radical nephroureterectomy (RNU) for high-grade non-metastatic upper tract urothelial carcinoma (UTUC). The UTUC collaboration was utilized to include 586 patients who underwent RNU for non-metastatic high-grade UTUC. Survival outcomes were compared according to a score defined as the sum of the independent prognostic variables. The study included 382 males with a median age of 70 years (range 28–97). Independent prognostic factors included: T (t stage), A (architecture), LVI (lympho-vascular invasion) and L (lymphadenectomy). The TALL score (1–7) was the sum of T (≤T1 = 1, T2 = 2, T3 = 3 and T4 = 4), A (papillary = 0 and sessile = 1), LVI (absent = 0 and present = 1) and L (lymphadenectomy = 0 and no lymphadenectomy = 1). Five-year disease-free survival (DFS) and cancer-specific survival (CSS) were stratified into four risk categories according to the TALL score: low (TALL 0–2; 86 % DFS and 90 % CSS), intermediate (TALL = 3; 71 % DFS and 75 % CSS), high (TALL = 4; 57 % DFS and 58 % CSS) and very high risk (TALL ≥ 5; 34 % DFS and 38 % CSS) using Kaplan–Meier survival analyses. The TALL score was externally validated in a single-center cohort of 85 UTUC patients. We developed a multivariable prognostic tool for the prediction of oncological outcomes after RNU for high-grade UTUC. The score can be used for patient counseling, selection for adjuvant systemic therapies and design of clinical trials.
Hierarchical Reinforcement Learning with Deep Nested Agents | Deep hierarchical reinforcement learning has gained a lot of attention in recent years due to its ability to produce state-of-the-art results in challenging environments where non-hierarchical frameworks fail to learn useful policies. However, as problem domains become more complex, deep hierarchical reinforcement learning can become inefficient, leading to longer convergence times and poor performance. We introduce the Deep Nested Agent framework, which is a variant of deep hierarchical reinforcement learning where information from the main agent is propagated to the low level nested agent by incorporating this information into the nested agent’s state. We demonstrate the effectiveness and performance of the Deep Nested Agent framework by applying it to three scenarios in Minecraft with comparisons to a deep non-hierarchical single agent framework, as well as, a deep hierarchical framework. |
MegaFace: A Million Faces for Recognition at Scale | Recent face recognition experiments on the LFW [13] benchmark show that face recognition is performing stunningly well, surpassing human recognition rates. In this paper, we study face recognition at scale. Specifically, we have collected from Flickr a million faces and evaluated state of the art face recognition algorithms on this dataset. We found that the performance of algorithms varies–while all perform great on LFW, once evaluated at scale recognition rates drop drastically for most algorithms. Interestingly, the deep learning based approach by [23] performs much better, but still becomes less robust at scale. We consider both verification and identification problems, and evaluate how pose affects recognition at scale. Moreover, we ran an extensive human study on Mechanical Turk to evaluate human recognition at scale, and report results. All the photos are creative commons photos and are released for research and further experiments on http://megaface.cs.washington.edu.
IP-Geolocation Mapping for Moderately Connected Internet Regions | Most IP-geolocation mapping schemes [14], [16], [17], [18] take a delay-measurement approach, based on the assumption of a strong correlation between networking delay and the geographical distance between the targeted client and the landmarks. In this paper, however, we investigate a large region of moderately connected Internet and find the delay-distance correlation is weak. But we discover a more probable rule: with high probability the shortest delay comes from the closest distance. Based on this closest-shortest rule, we develop a simple and novel IP-geolocation mapping scheme for moderately connected Internet regions, called GeoGet. In GeoGet, we take a large number of web servers as passive landmarks and map a targeted client to the geolocation of the landmark that has the shortest delay. We further use JavaScript at targeted clients to generate HTTP/Get probing for delay measurement. To control the measurement cost, we adopt a multistep probing method to refine the geolocation of a targeted client, finally to city level. The evaluation results show that when probing about 100 landmarks, GeoGet correctly maps 35.4 percent of clients to city level, which outperforms current schemes such as GeoLim [16] and GeoPing [14] by 270 and 239 percent, respectively, and the median error distance in GeoGet is around 120 km, outperforming GeoLim and GeoPing by 37 and 70 percent, respectively.
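The closest-shortest rule lends itself to a very small sketch: probe a set of landmarks with known locations and map the target to the location of the landmark with the smallest delay. The hostnames below are hypothetical, and a TCP connect stands in for the paper's JavaScript HTTP/Get probing.

```python
import time, socket

# hypothetical landmark web servers with known (lat, lon) locations
landmarks = {
    "server-a.example": (39.90, 116.40),
    "server-b.example": (31.23, 121.47),
}

def probe_delay(host, port=80, timeout=2.0):
    """Rough RTT estimate via a TCP connect (stand-in for HTTP/Get probing)."""
    t0 = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.monotonic() - t0

def geolocate():
    delays = {h: probe_delay(h) for h in landmarks}
    best = min(delays, key=delays.get)   # shortest delay -> closest landmark
    return landmarks[best]
```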
Differences in the Serum Nonesterified Fatty Acid Profile of Young Women Associated with a Recent History of Gestational Diabetes and Overweight/Obesity | BACKGROUND
Nonesterified fatty acids (NEFA) play pathophysiological roles in metabolic syndrome and type 2 diabetes (T2D). In this study, we analyzed the fasting NEFA profiles of normoglycemic individuals at risk for T2D (women with a recent history of gestational diabetes (GDM)) in comparison to controls (women after a normoglycemic pregnancy). We also examined the associations of NEFA species with overweight/obesity, body fat distribution and insulin sensitivity.
SUBJECTS AND METHODS
Using LC-MS/MS, we analyzed 41 NEFA species in the fasting sera of 111 women (62 post-GDM, 49 controls). Clinical characterization included a five-point oral glucose tolerance test (OGTT), biomarkers and anthropometrics, magnetic resonance imaging (n = 62) and a food frequency questionnaire. Nonparametric tests with Bonferroni correction, binary logistic regression analyses and rank correlations were used for statistical analysis.
RESULTS
Women after GDM had a lower molar percentage of total saturated fatty acids (SFA; 38.55% vs. 40.32%, p = 0.0002) than controls. At an explorative level of significance several NEFA species were associated with post-GDM status (with and without adjustment for body mass index (BMI) and HbA1c): The molar percentages of 14:0, 16:0, 18:0 and 18:4 were reduced, whereas those of 18:1, 18:2, 20:2, 24:4, monounsaturated fatty acids (MUFA), polyunsaturated fatty acids (PUFA) and total n-6 NEFA were increased. BMI and the amount of body fat correlated inversely with several SFA and MUFA and positively with various PUFA species over the whole study cohort (abs(ρ)≥0.3 for all). 14:0 was inversely and BMI-independently associated with abdominal visceral adiposity. We saw no correlations of NEFA species with insulin sensitivity and the total NEFA concentration was similar in the post-GDM and the control group.
CONCLUSION
In conclusion, we found alterations in the fasting NEFA profile associated with a recent history of gestational diabetes, a risk marker for T2D. NEFA composition also varied with overweight/obesity and with body fat distribution, but not with insulin sensitivity. |
How predictable is technological progress? | Recently it has become clear that many technologies follow a generalized version of Moore’s law, i.e. costs tend to drop exponentially, at different rates that depend on the technology. Here we formulate Moore’s law as a correlated geometric random walk with drift, and apply it to historical data on 53 technologies. We derive a closed form expression approximating the distribution of forecast errors as a function of time. Based on hind-casting experiments we show that this works well, making it possible to collapse the forecast errors for many different technologies at different time horizons onto the same universal distribution. This is valuable because it allows us to make forecasts for any given technology with a clear understanding of the quality of the forecasts. As a practical demonstration we make distributional forecasts at different time horizons for solar photovoltaic modules, and show how our method can be used to estimate the probability that a given technology will outperform another technology at a given point in the future.
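A minimal sketch of the forecasting model named in the abstract: the log of unit cost follows a correlated random walk with drift, and Monte Carlo rollouts yield a distributional forecast. The drift, noise scale, and autocorrelation values are illustrative, not fitted to any of the 53 technologies.

```python
import numpy as np

mu, sigma, rho = -0.10, 0.15, 0.3     # drift, noise scale, autocorrelation
horizon, n_paths = 20, 10000
rng = np.random.default_rng(0)

log_cost = np.zeros(n_paths)          # all paths start at cost 1.0
eps_prev = np.zeros(n_paths)
for _ in range(horizon):
    eps = rho * eps_prev + rng.normal(0.0, sigma, n_paths)  # correlated noise
    log_cost += mu + eps
    eps_prev = eps

forecast = np.exp(log_cost)
print("median cost after 20 steps:", np.median(forecast))
print("90% interval:", np.percentile(forecast, [5, 95]))
```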
Wigner distributions and how they relate to the light field | In wave optics, the Wigner distribution and its Fourier dual, the ambiguity function, are important tools in optical system simulation and analysis. The light field fulfills a similar role in the computer graphics community. In this paper, we establish that the light field as it is used in computer graphics is equivalent to a smoothed Wigner distribution and that these are equivalent to the raw Wigner distribution under a geometric optics approximation. Using this insight, we then explore two recent contributions: Fourier slice photography in computer graphics and wavefront coding in optics, and we examine the similarity between explanations of them using Wigner distributions and explanations of them using light fields. Understanding this long-suspected equivalence may lead to additional insights and the productive exchange of ideas between the two fields. |
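For reference, a standard one-dimensional form of the Wigner distribution of a scalar field E(x), the object the paper relates to the light field; this is the common textbook convention (position x, spatial frequency u), with the smoothing kernel of the stated equivalence omitted.

```latex
% Wigner distribution of a scalar field E(x):
W(x, u) = \int E\!\left(x + \tfrac{x'}{2}\right)
          E^{*}\!\left(x - \tfrac{x'}{2}\right) e^{-i 2\pi u x'} \, dx'
```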
Genetic, hormonal and metabolic aspects of PCOS: an update | Polycystic ovary syndrome (PCOS) is a complex endocrine disorder affecting 5-10 % of women of reproductive age. It generally manifests with oligo/anovulatory cycles, hirsutism and polycystic ovaries, together with a considerable prevalence of insulin resistance. Although the aetiology of the syndrome is not completely understood yet, PCOS is considered a multifactorial disorder with various genetic, endocrine and environmental abnormalities. Moreover, PCOS patients have a higher risk of metabolic and cardiovascular diseases and their related morbidity, if compared to the general population. |
Policing cyber-neighbourhoods: tension monitoring and social media networks | We propose that late modern policing practices, which rely on neighbourhood intelligence, the monitoring of tensions, surveillance and policing by accommodation, need to be augmented in light of emerging ‘cyber-neighbourhoods’, namely social media networks. The 2011 riots in England were the first to evidence the widespread use of social media platforms to organise and respond to disorder. The police were ill-equipped to make use of the intelligence emerging from these non-terrestrial networks and were found to be at a disadvantage to the more tech-savvy rioters and the general public. In this paper, we outline the development of the ‘tension engine’ component of the Cardiff Online Social Media ObServatory (COSMOS). This engine affords users the ability to monitor social media data streams for signs of high tension, which can be analysed in order to identify deviations from the ‘norm’ (levels of cohesion/low tension). This analysis can be overlaid onto a palimpsest of curated data, such as official statistics about neighbourhood crime, deprivation and demography, to provide a multidimensional picture of the ‘terrestrial’ and ‘cyber’ streets. As a consequence, this ‘neighbourhood informatics’ enables a means of questioning official constructions of civil unrest through reference to the user-generated accounts of social media and their relationship to other, curated, social and economic data.
Inductive Representation Learning on Large Graphs | Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node’s local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions. |
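A minimal numpy sketch of one sample-and-aggregate layer with a mean aggregator, in the spirit of the framework described above; the weight shapes, ReLU, and normalization are illustrative simplifications of the published architecture.

```python
import numpy as np

def sage_layer(H, adj, W_self, W_neigh, n_samples=10, seed=0):
    """One sample-and-aggregate layer with a mean aggregator."""
    rng = np.random.default_rng(seed)
    out = []
    for v in range(H.shape[0]):
        nbrs = np.asarray(adj[v])
        sample = rng.choice(nbrs, size=min(n_samples, len(nbrs)), replace=False)
        agg = H[sample].mean(axis=0)                   # aggregate sampled neighbors
        h = np.concatenate([H[v] @ W_self, agg @ W_neigh])
        out.append(np.maximum(h, 0.0))                 # ReLU nonlinearity
    H_new = np.stack(out)
    return H_new / np.linalg.norm(H_new, axis=1, keepdims=True)  # l2-normalize

# toy usage: 5 nodes with 8-dim features on a ring graph
H = np.random.rand(5, 8)
adj = [[(i - 1) % 5, (i + 1) % 5] for i in range(5)]
W_self, W_neigh = np.random.rand(8, 4), np.random.rand(8, 4)
Z = sage_layer(H, adj, W_self, W_neigh)   # (5, 8) embeddings, even for unseen nodes
```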
Parallel vision for perception and understanding of complex scenes: methods, framework, and perspectives | In the study of image and vision computing, the generalization capability of an algorithm often determines whether it is able to work well in complex scenes. The goal of this review article is to survey the use of photorealistic image synthesis methods in addressing the problems of visual perception and understanding. Currently, the ACP Methodology comprising artificial systems, computational experiments, and parallel execution is playing an essential role in modeling and control of complex systems. This paper extends the ACP Methodology into the computer vision field, by proposing the concept and basic framework of Parallel Vision. In this paper, we first review previous works related to Parallel Vision, in terms of synthetic data generation and utilization. We detail the utility of synthetic data for feature analysis, object analysis, scene analysis, and other analyses. Then we propose the basic framework of Parallel Vision, which is composed of an ACP trilogy (artificial scenes, computational experiments, and parallel execution). We also present some in-depth thoughts and perspectives on Parallel Vision. This paper emphasizes the significance of synthetic data to vision system design and suggests a novel research methodology for perception and understanding of complex scenes. |
Tooling Framework for Instantiating Natural Language Querying System | Recent times have seen a growing demand for natural language querying (NLQ) interfaces to retrieve information from structured data sources such as knowledge bases. Using this interface, business users can directly interact with a database without knowledge of the query language or the data schema. Our earlier work describes a natural language query engine called ATHENA, which has several shortcomings around ease of use and compatibility with data stores, formats and flows. In this demonstration paper, we present a tooling framework to address these challenges so that one can instantiate an NLQ system with utmost ease. Our framework makes it easy and practically applicable to all NLIDB scenarios involving different sources of structured data, file formats, and ontologies to enable natural language querying on top of them with minimal human configuration. We present the tool design and the solution to the challenges towards building such a system and demonstrate its applicability in the medical domain. PVLDB Reference Format: Manasa Jammi, Jaydeep Sen, Ashish Mittal, Sagar Verma, Vardaan Pahuja, Rema Ananthanarayanan, Pranay Lohia, Hima Karanam, Diptikalyan Saha, Karthik Sankaranarayanan. Tooling Framework for Instantiating Natural Language Querying System. PVLDB, 11 (12): 2014-2017, 2018. DOI: https://doi.org/10.14778/3229863.3236248
Current ablation techniques for persistent atrial fibrillation: results of the European Heart Rhythm Association Survey. | The aim of this survey was to provide insight into current practice regarding ablation of persistent atrial fibrillation (AF) among members of the European Heart Rhythm Association electrophysiology research network. Thirty centres responded to the survey. The main ablation technique for first-time ablation was stand-alone pulmonary vein isolation (PVI): in 67% of the centres for persistent but not long-standing AF and in 37% of the centres for long-standing persistent AF as well. Other applied techniques were ablation of fractionated electrograms, placement of linear lesions, stepwise approach until AF termination, and substrate mapping and isolation of low-voltage areas. However, the percentage of centres applying these techniques during first ablation did not exceed 25% for any technique. When stand-alone PVI was performed in patients with persistent but not long-standing AF, the majority (80%) of the centres used an irrigated radiofrequency ablation catheter whereas 20% of the respondents used the cryoballoon. Similar results were reported for ablation of long-standing persistent AF (radiofrequency 90%, cryoballoon 10%). Neither rotor mapping nor one-shot ablation tools were used as the main first-time ablation methods. Systematic search for non-pulmonary vein triggers was performed only in 10% of the centres. Most common 1-year success rate off antiarrhythmic drugs was 50-60%. Only 27% of the centres knew their 5-year results. In conclusion, patients with persistent AF represent a significant proportion of AF patients undergoing ablation. There is a shift towards stand-alone PVI being the primary choice in many centres for first-time ablation in these patients. The wide variation in the use of additional techniques and in the choice of endpoints reflects the uncertainties and lack of guidance regarding the most optimal approach. Procedural success rates are modest and long-term outcomes are unknown in most centres. |
Which Process Metrics Can Significantly Improve Defect Prediction Models? An Empirical Study | The knowledge about the software metrics which serve as defect indicators is vital for the efficient allocation of resources for quality assurance. It is the process metrics, although sometimes difficult to collect, which have recently become popular with regard to defect prediction. However, in order to identify rightly the process metrics which are actually worth collecting, we need the evidence validating their ability to improve the product metric-based defect prediction models. This paper presents an empirical evaluation in which several process metrics were investigated in order to identify the ones which significantly improve the defect prediction models based on product metrics. Data from a wide range of software projects (both, industrial and open source) were collected. The predictions of the models that use only product metrics (simple models) were compared with the predictions of the models which used product metrics, as well as one of the process metrics under scrutiny (advanced models). To decide whether the improvements were significant or not, statistical tests were performed and effect sizes were calculated. The advanced defect prediction models trained on a data set containing product metrics and additionally Number of Distinct Committers (NDC) were significantly better than the simple models without NDC, while the effect size was medium and the probability of superiority (PS) of the advanced models over simple ones was high (p = .016, r = .29, PS = .76), which is a substantial finding useful in defect prediction. A similar result with slightly smaller PS was achieved by the advanced models trained on a data set containing product metrics and additionally all of the investigated process metrics (p = .038, r = -.29, PS = .68). The advanced models trained on a data set containing product metrics and additionally Number of Modified Lines (NML) were significantly better than the simple models without NML, but the effect size was small (p = .038, r = .06). Hence, it is reasonable to recommend the NDC process metric in building the defect prediction models. |
Mindfulness Therapy and its Effects on Working Memory and Prospective Memory | Purpose: Mindfulness therapy is an increasingly popular practice that involves acute awareness of the present moment (Fletcher & Hayes, 2005). Recent research has also shown that practitioners show improvements in a range of cognitive skills (Mrazek, Franklin, Phillips, Baird, & Schooler, 2013). In particular, individuals in past literature have shown significant benefits in their working memory when practicing (Mrazek et al., 2013). Working memory is a cognitive ability that has also been correlated with an individual’s prospective memory during high stress situations (Jha, Stanley, Kiyonaga, Wong, & Gelfand, 2010). This raises the question: can mindfulness therapy impact both working memory and prospective memory? Investigating this can lead to potential benefits in aiding individuals with memory deficits. In particular, elders who suffer from dementia are more prone to prospective memory loss (Zimmermann & Meier, 2006). Therefore, if benefits of mindfulness therapy are supported in this study, further research could be performed for the elder population. Methods: Participants were 65 undergraduate students from the University of Michigan-Dearborn. The students were evaluated for their level of mindfulness practice by a survey followed by three computerized exams. All individuals received two lexical decision tasks in randomized order, followed by the automated operation span task. These tasks measured the individuals’ ability to briefly retain information (working memory capacity) as well as to complete a future task (prospective memory). Results: Results from this study supported previous working memory research suggesting that mindfulness therapy has a significant correlation with higher working memory (Mrazek et al., 2013). The research also showed that regular engagement in mindfulness results in notable changes in how we complete future intentions, that is, prospective memory. In particular, individuals who specifically practice meditation showed improved focal prospective memory and reduced nonfocal prospective memory. Conclusion: All individuals who practice mindfulness displayed a better working memory compared to those who do not. Individuals who explicitly practice meditation also showed a change in their prospective memory: an improvement on focal cues but a noteworthy impairment on nonfocal cues. This suggests that those who practice meditation are so focused on the present that it impacts their ability to perform certain nonfocal future tasks. These findings are discussed in a larger framework of applications to individuals with declining cognitive abilities and/or high prospective memory demands (e.g., medication adherence).
Emoji as Emotion Tags for Tweets | In many natural language processing tasks, supervised machine learning approaches have proved most effective, and substantial effort has been made into collecting and annotating corpora for building such models. Emotion detection from text is no exception; however, research in this area is in its relative infancy, and few emotion annotated corpora exist to date. A further issue regarding the development of emotion annotated corpora is the difficulty of the annotation task and resulting inconsistencies in human annotations. One approach to address these problems is to use self-annotated data, using explicit indications of emotions included by the author of the data in question. We present a study of the use of unicode emoji as self-annotation of a Twitter user’s emotional state. Emoji are found to be used far more extensively than hashtags and we argue that they present a more faithful representation of a user’s emotional state. A substantial set of tweets containing emotion indicative emoji are collected and a sample annotated for emotions. The accuracy and utility of emoji as emotion labels are evaluated directly (with respect to annotations) and through trained statistical models. Results are cautiously optimistic and suggest further study of emoji usage.
Hungarian Layer: Logics Empowered Neural Architecture | Paraphrase identification is an important topic in artificial intelligence, and this task is often tackled as sequence alignment and matching. Traditional alignment methods take advantage of the attention mechanism, which is a soft-max weighting technique. A weighting technique can pick out the most similar/dissimilar parts, but it is weak at modeling the aligned unmatched parts, which are the crucial evidence for identifying paraphrase. In this paper, we empower a neural architecture with the Hungarian algorithm to extract the aligned unmatched parts. Specifically, first, our model applies a BiLSTM to parse the input sentences into hidden representations. Then, the Hungarian layer leverages the hidden representations to extract the aligned unmatched parts. Last, we apply cosine similarity to measure the aligned unmatched parts for a final discrimination. Extensive experiments show that our model substantially and significantly outperforms other baselines.
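A minimal sketch of the core alignment step: the Hungarian algorithm (via scipy's linear_sum_assignment) applied to a cosine-similarity matrix between two sequences of hidden states. Random vectors stand in for BiLSTM outputs, and the surrounding network is omitted.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_align(H1, H2):
    """Return the one-to-one alignment maximizing total cosine similarity."""
    A = H1 / np.linalg.norm(H1, axis=1, keepdims=True)
    B = H2 / np.linalg.norm(H2, axis=1, keepdims=True)
    sim = A @ B.T                               # cosine similarity matrix
    rows, cols = linear_sum_assignment(-sim)    # negate cost to maximize
    return rows, cols, sim[rows, cols]

H1, H2 = np.random.rand(7, 64), np.random.rand(9, 64)  # stand-in BiLSTM states
rows, cols, scores = hungarian_align(H1, H2)
# low-scoring aligned pairs correspond to the "aligned unmatched parts"
```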
Degradation of Gamma-oryzanol in Rice Bran Oil during Heating: An Analysis Using Derivative UV-spectrophotometry | Gamma-oryzanol, a group of ferulic acid esters of phytosterols and triterpene alcohols, has been reported to exhibit antioxidant activity and other health-beneficial properties. In this study, the degradation of gamma-oryzanol in rice bran oil during heat treatment was investigated. The quantitative analysis of the gamma-oryzanol in rice bran oil was performed by second-order derivative UV-spectrophotometry. The degradation kinetics was described by the first-order reaction model, and the degradation rate constants were 0.0089, 0.0315 and 0.0763 at 120, 150 and 200°C, respectively. The temperature dependence of the rate constant obeyed the Arrhenius relationship, and the activation energy was calculated to be 40.76 kJ/mol.
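The reported activation energy can be checked directly from the three rate constants via a linear fit of the Arrhenius relation ln k = ln A - Ea/(RT); the short script below reproduces a value close to the reported 40.76 kJ/mol (temperatures converted to kelvin).

```python
import numpy as np

R = 8.314                                    # gas constant, J/(mol K)
T = np.array([120.0, 150.0, 200.0]) + 273.15 # temperatures in kelvin
k = np.array([0.0089, 0.0315, 0.0763])       # reported rate constants

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)  # ln k vs 1/T
Ea = -slope * R / 1000.0                     # activation energy in kJ/mol
print(f"Ea = {Ea:.2f} kJ/mol")               # about 40.7, matching the paper
```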
A spelling device for the paralysed | When Jean-Dominique Bauby suffered from a cortico-subcortical stroke that led to complete paralysis with totally intact sensory and cognitive functions, he described his experience in The Diving-Bell and the Butterfly as “something like a giant invisible diving-bell holds my whole body prisoner”. This horrifying condition also occurs as a consequence of a progressive neurological disease, amyotrophic lateral sclerosis, which involves progressive degeneration of all the motor neurons of the somatic motor system. These ‘locked-in’ patients ultimately become unable to express themselves and to communicate even their most basic wishes or desires, as they can no longer control their muscles to activate communication devices. We have developed a new means of communication for the completely paralysed that uses slow cortical potentials (SCPs) of the electro-encephalogram to drive an electronic spelling device. |
A Cost-Effective Deadline-Constrained Dynamic Scheduling Algorithm for Scientific Workflows in a Cloud Environment | Cloud computing, a distributed computing paradigm, enables delivery of IT resources over the Internet and follows the pay-as-you-go billing model. Workflow scheduling is one of the most challenging problems in cloud computing. Although workflow scheduling on distributed systems like grids and clusters has been extensively studied, these solutions are not viable for a cloud environment, because a cloud environment differs from other distributed environments in two major ways: on-demand resource provisioning and the pay-as-you-go pricing model. Thus, to achieve the true benefits of workflow orchestration on cloud resources, novel approaches that can capitalize on the advantages and address the challenges specific to a cloud environment need to be developed. This work proposes a dynamic cost-effective deadline-constrained heuristic algorithm for scheduling a scientific workflow in a public cloud. The proposed technique aims to exploit the advantages offered by cloud computing while taking into account virtual machine (VM) performance variability and instance acquisition delay to identify a just-in-time schedule of a deadline-constrained scientific workflow at lesser cost. Performance evaluation on some well-known scientific workflows shows that the proposed algorithm delivers better performance in comparison to the current state-of-the-art heuristics.
Achieving Holistic Health for the Individual through Person-Centered Collaborative Care Supported by Informatics | OBJECTIVES
This article seeks to describe the current state of informatics supported collaborative care and to point out areas of future research in this highly interdisciplinary field.
METHODS
In this article, person-centered collaborative care is seen as a concept addressing both the provision of care over organizational borders between health and social care, and within care teams as well as the changed patient/client-care provider relationship characterized by increased patient/client involvement.
RESULTS
From a health systems perspective, there are several attempts to describe the conceptual and theoretical basis for collaborative care indicating that agreement on core concepts and terminology is difficult. From an informatics perspective, focus is on standardization of clinical content models and terminology to achieve interoperability of information technology systems and to support standardized care pathways. Few examples look into how ad-hoc collaborative care processes can be supported using information technology and informatics standards. Nevertheless, promising examples do exist showing that integrational Information Communication Technology services can be supportive for collaborative care developments. However, the current landscape consists of many fragmented, often technology-driven eHealth solutions targeting specific diagnostic groups in geographically and/or organizationally restricted settings.
CONCLUSIONS
A systematic approach incorporating organizational, clinical, informatics and social science knowledge is needed to perform further research in areas such as virtual team partnerships, new paradigms of care delivery, data and knowledge management as well as its secure sharing. Also organizational and legal aspects need to be further researched in order to facilitate the coordinated provision of health and social care to citizens including self-management, utilizing informatics support in a societal context. |
Reinforcement Learning in Topology-based Representation for Human Body Movement with Whole Arm Manipulation | Moving a human body or a large and bulky object can require the strength of whole arm manipulation (WAM). This type of manipulation places the load on the robot’s arms and relies on global properties of the interaction to succeed, rather than on local contacts such as grasping or non-prehensile pushing. In this paper, we learn to generate motions that enable WAM for holding and transporting humans in certain rescue or patient care scenarios. We model the task as a reinforcement learning problem in order to provide a behavior that can directly respond to external perturbation and human motion. For this, we represent global properties of the robot-human interaction with topology-based coordinates that are computed from arm and torso positions. These coordinates also allow transferring the learned policy to other body shapes and sizes. For training and evaluation, we simulate a dynamic sea rescue scenario and show in quantitative experiments that the policy can solve unseen scenarios with differently-shaped humans, floating humans, or with perception noise. Our qualitative experiments show that the subsequent transport after holding is achieved, and we demonstrate that the policy can be directly transferred to a real-world setting.
Regularizing Deep Neural Networks by Noise: Its Interpretation and Optimization | Overfitting is one of the most critical challenges in deep neural networks, and there are various types of regularization methods to improve generalization performance. Injecting noises to hidden units during training, e.g., dropout, is known as a successful regularizer, but it is still not clear enough why such training techniques work well in practice and how we can maximize their benefit in the presence of two conflicting objectives—optimizing to true data distribution and preventing overfitting by regularization. This paper addresses the above issues by 1) interpreting that the conventional training methods with regularization by noise injection optimize the lower bound of the true objective and 2) proposing a technique to achieve a tighter lower bound using multiple noise samples per training example in a stochastic gradient descent iteration. We demonstrate the effectiveness of our idea in several computer vision applications. |
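A minimal PyTorch sketch of the multi-sample idea described above: draw several noise (dropout) realizations per example in one step and combine per-sample log-likelihoods with a log-mean-exp, which yields a tighter lower bound than a single sample. The model and loss wiring are placeholder assumptions, not the authors' exact training code.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def multi_sample_loss(model, x, y, n_samples=4):
    """Combine several dropout samples per example via log-mean-exp."""
    logps = []
    for _ in range(n_samples):                 # independent dropout masks
        logits = model(x)                      # model must be in train mode
        logps.append(-F.cross_entropy(logits, y, reduction="none"))
    logp = torch.stack(logps, dim=0)           # (n_samples, batch)
    bound = torch.logsumexp(logp, dim=0) - math.log(n_samples)
    return -bound.mean()

# toy usage with a placeholder dropout MLP and random data
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 3))
x, y = torch.randn(32, 20), torch.randint(0, 3, (32,))
loss = multi_sample_loss(model, x, y)
loss.backward()
```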
Retransplant candidates have donor-specific antibodies that react with structurally defined HLA-DR,DQ,DP epitopes. | This report describes a detailed analysis of how donor-specific HLA class II epitope mismatching affects antibody reactivity patterns in 75 solid organ transplant recipients with an in situ allograft who were considered for retransplantation. Sera were tested for antibodies in a sensitive antigen-binding assay (Luminex) with single class II alleles. Their reactivity was analyzed with HLAMatchmaker, a structural matching algorithm that considers so-called eplets to define epitopes recognized by antibodies. Only 24% of the patients showed donor-specific anti-DRB1 antibodies and there was a significant correlation with a low number of mismatched DRB1 eplets. This low detection rate of anti-DRB1 antibodies may also be due to allograft absorption. In contrast, antibodies to DRB3/4/5 mismatches were more common. Especially, 83% of the DRB4 (DR53) mismatches resulted in detectable antibodies against an eplet uniquely found on DR53 antigens. Donor-specific DQB mismatches led to detectable anti-DQB antibodies with a frequency of 87%. Their specificity correlated with eplets uniquely found on DQ1-4. The incidence of antibodies induced by 2-digit DQA mismatches was 64% and several eplets appeared to play a dominant role. These findings suggest that both alpha and beta chains of HLA-DQ heterodimers have immunogenic epitopes that can elicit specific antibodies. About one-third of the sera had anti-DP antibodies; they reacted primarily with two DPB eplets and an allelic pair of DPA eplets. These data demonstrate that HLA class II reactive sera display distinct specificity patterns associated with structurally defined epitopes on different HLA-D alleles.
Anticipation of increasing monetary reward selectively recruits nucleus accumbens. | Comparative studies have implicated the nucleus accumbens (NAcc) in the anticipation of incentives, but the relative responsiveness of this neural substrate during anticipation of rewards versus punishments remains unclear. Using event-related functional magnetic resonance imaging, we investigated whether the anticipation of increasing monetary rewards and punishments would increase NAcc blood oxygen level-dependent contrast (hereafter, "activation") in eight healthy volunteers. Whereas anticipation of increasing rewards elicited both increasing self-reported happiness and NAcc activation, anticipation of increasing punishment elicited neither. However, anticipation of both rewards and punishments activated a different striatal region (the medial caudate). At the highest reward level ($5.00), NAcc activation was correlated with individual differences in self-reported happiness elicited by the reward cues. These findings suggest that whereas other striatal areas may code for expected incentive magnitude, a region in the NAcc codes for expected positive incentive value. |
Online bagging and boosting | Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by describing some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time. |
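One standard single-pass realization of online bagging, in the spirit of this line of work, replaces bootstrap resampling with per-example Poisson(1) update counts; the sketch below assumes base learners exposing an incremental partial_fit/predict interface (scikit-learn style) and is illustrative rather than a verbatim transcription of the paper's algorithms.

```python
import numpy as np

class OnlineBagging:
    """Single-pass bagging: each example updates each learner k ~ Poisson(1) times."""

    def __init__(self, make_learner, n_learners=10, seed=0):
        self.learners = [make_learner() for _ in range(n_learners)]
        self.rng = np.random.default_rng(seed)

    def update(self, x, y):
        for learner in self.learners:
            k = self.rng.poisson(1.0)          # approximates a bootstrap weight
            for _ in range(k):
                learner.partial_fit([x], [y])  # incremental base-learner update

    def predict(self, x):
        votes = [learner.predict([x])[0] for learner in self.learners]
        return max(set(votes), key=votes.count)  # majority vote over the ensemble
```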
Systematic review of effects of current transtibial prosthetic socket designs--Part 2: Quantitative outcomes. | This review is an attempt to untangle the complexity of transtibial prosthetic socket fit and perhaps find some indication of whether a particular prosthetic socket type might be best for a given situation. In addition, we identified knowledge gaps, thus providing direction for possible future research. We followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, using medical subject headings and standard key words to search for articles in relevant databases. No restrictions were made on study design and type of outcome measure used. From the obtained search results (n = 1,863), 35 articles were included. The relevant data were entered into a predefined data form that included the Downs and Black risk of bias assessment checklist. This article presents the results from the systematic review of the quantitative outcomes (n = 27 articles). Trends indicate that vacuum-assisted suction sockets improve gait symmetry, volume control, and residual limb health more than other socket designs. Hydrostatic sockets seem to create less inconsistent socket fittings, reducing a problem that greatly influences outcome measures. Knowledge gaps exist in the understanding of clinically meaningful changes in socket fit and its effect on biomechanical outcomes. Further, safe and comfortable pressure thresholds under various conditions should be determined through a systematic approach. |
SHRIMP U-Pb dating for zircons from pyroxene andesite of Shuiquangou Formation in western Liaoning province and its tectonic significance. | The Shuiquangou Formation, located in the Lingyuan area of western Liaoning, lies unconformably over the first thrust system formed during the Early Mesozoic and is covered by an early Yanshanian thrust. The SHRIMP age of zircons from pyroxene andesite of the Shuiquangou Formation is 230.4 Ma ± 3.1 Ma. The result shows that the age of the Shuiquangou Formation is Late Triassic, that the lithosphere in western Liaoning was in a stretching setting during the Late Triassic, and that the thrusting of the early Early Mesozoic in the Yanshan mountains occurred in the Early and Middle Triassic. This suggests that the early stage of the Indochina movement is thrusting and the late stage is extension.
WebChild: harvesting and organizing commonsense knowledge from the web | This paper presents a method for automatically constructing a large commonsense knowledge base, called WebChild, from Web contents. WebChild contains triples that connect nouns with adjectives via fine-grained relations like hasShape, hasTaste, evokesEmotion, etc. The arguments of these assertions, nouns and adjectives, are disambiguated by mapping them onto their proper WordNet senses. Our method is based on semi-supervised Label Propagation over graphs of noisy candidate assertions. We automatically derive seeds from WordNet and by pattern matching from Web text collections. The Label Propagation algorithm provides us with domain sets and range sets for 19 different relations, and with confidence-ranked assertions between WordNet senses. Large-scale experiments demonstrate the high accuracy (more than 80 percent) and coverage (more than four million fine-grained disambiguated assertions) of WebChild.
Evaluating Word Embeddings in Multi-label Classification Using Fine-grained Name Typing | Embedding models typically associate each word with a single real-valued vector representing its different properties. Evaluation methods therefore need to analyze the accuracy and completeness of these properties in embeddings. This requires fine-grained analysis of embedding subspaces, and multi-label classification is an appropriate way to do so. We propose a new evaluation method for word embeddings based on multi-label classification. The task we use is fine-grained name typing: given a large corpus, find all types that a name can refer to based on the name embedding. Given the scale of entities in knowledge bases, we can build datasets for this task that are complementary to current embedding evaluation datasets in that they are very large, contain fine-grained classes, and allow the direct evaluation of embeddings without confounding factors like sentence context.
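A minimal sketch of the evaluation protocol described above: treat each name embedding as the input to a multi-label classifier over fine-grained types and score with micro-F1. The random embeddings and type matrix below are placeholders for a real embedding table and knowledge-base types.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

n_names, dim, n_types = 5000, 100, 50
X = np.random.randn(n_names, dim)              # placeholder name embeddings
Y = (np.random.rand(n_names, n_types) > 0.95)  # placeholder multi-label type matrix

split = int(0.8 * n_names)                     # simple train/test split
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X[:split], Y[:split])
pred = clf.predict(X[split:])
print("micro-F1:", f1_score(Y[split:], pred, average="micro"))
```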
Intracranial aneurysms: midterm outcome of pipeline embolization device--a prospective study in 143 patients with 178 aneurysms. | PURPOSE
To evaluate the midterm clinical and angiographic outcomes after pipeline embolization device (PED) placement for treatment of intracranial aneurysms.
MATERIALS AND METHODS
This prospective nonrandomized multicenter study was approved by the review boards of all involved centers; informed consent was obtained. Patients (143 patients, 178 aneurysms) with unruptured saccular or fusiform aneurysms or recurrent aneurysms after previous treatment were included and observed angiographically for up to 18 months and clinically for up to 3 years. Study endpoints included complete aneurysm occlusion; neurologic complications within 30 days and up to 3 years; clinical outcome of cranial nerve palsy after PED placement; angiographic evidence of occlusion or stenosis of parent artery and that of occlusion of covered side branches at 6, 12, and 18 months; and clinical and computed tomographic evidence of perforator infarction.
RESULTS
There were five (3.5%) cases of periprocedural death or major stroke (modified Rankin Scale [mRS] > 3) (95% confidence interval [CI]: 1.3%, 8.4%), including two posttreatment delayed ruptures, two intracerebral hemorrhages, and one thromboembolism. Five (3.5%) patients had minor neurologic complications within 30 days (mRS = 1) (95% CI: 1.3%, 8.4%), including transient ischemic attack (n = 2), small cerebral infarction (n = 2), and cranial nerve palsy (n = 1). Beyond 30 days, there was one fatal intracerebral hemorrhage and one transient ischemic attack. Ten of 13 patients (95% CI: 46%, 93.8%) completely recovered from symptoms of cranial nerve palsy within a median of 3.5 months. Angiographic results at 18 months revealed a complete aneurysm occlusion rate of 84% (49 of 58; 95% CI: 72.1%, 92.2%), with no cases of parent artery occlusion, parent artery stenosis (<50%) in three patients, and occlusion of a covered side branch in two cases (posterior communicating arteries). Perforator infarction did not occur.
CONCLUSION
PED placement is a reasonably safe and effective treatment for intracranial aneurysms. The treatment is promising for aneurysms with unfavorable morphologic features, such as wide neck, large size, fusiform morphology, incorporation of side branches, and posttreatment recanalization, and should be considered a first choice for treating unruptured aneurysms and recurrent aneurysms after previous treatment.
SUPPLEMENTAL MATERIAL
http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12120422/-/DC1. |
signSGD: compressed optimisation for non-convex problems | Training large neural networks requires distributing learning across multiple workers, where the cost of communicating gradients can be a significant bottleneck. SIGNSGD alleviates this problem by transmitting just the sign of each minibatch stochastic gradient. We prove that it can get the best of both worlds: compressed gradients and SGD-level convergence rate. The relative ℓ1/ℓ2 geometry of gradients, noise and curvature informs whether SIGNSGD or SGD is theoretically better suited to a particular problem. On the practical side we find that the momentum counterpart of SIGNSGD is able to match the accuracy and convergence speed of ADAM on deep ImageNet models. We extend our theory to the distributed setting, where the parameter server uses majority vote to aggregate gradient signs from each worker, enabling 1-bit compression of worker-server communication in both directions. Using a theorem by Gauss (1823) we prove that majority vote can achieve the same reduction in variance as full precision distributed SGD. Thus, there is great promise for sign-based optimisation schemes to achieve fast communication and fast convergence. Code to reproduce experiments is to be found at https://github.com/jxbz/signSGD.
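The core update is compact enough to sketch. Below is a single-worker signSGD step and the server-side majority vote, with invented shapes, gradients, and learning rate; it omits the momentum (SIGNUM) variant and all systems machinery, so treat it as a schematic rather than the paper's implementation.

```python
# Schematic signSGD: only the sign of each stochastic gradient is used
# (and, in the distributed case, communicated). Values here are made up.
import numpy as np

def signsgd_step(w, grad, lr=0.01):
    """One signSGD update: move against the sign of the gradient."""
    return w - lr * np.sign(grad)

def majority_vote(worker_grads):
    """Server aggregates 1-bit signs from workers by elementwise majority."""
    return np.sign(sum(np.sign(g) for g in worker_grads))

w = np.zeros(4)
g = np.array([0.3, -1.2, 0.05, -0.4])             # one minibatch gradient
w = signsgd_step(w, g)                            # single-worker update

worker_grads = [g + 0.1 * np.random.randn(4) for _ in range(5)]
w -= 0.01 * majority_vote(worker_grads)           # distributed update
```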
Altered nucleus accumbens circuitry mediates pain-induced antinociception in morphine-tolerant rats. | We investigated the effect of chronic administration of morphine on noxious stimulus-induced antinociception (NSIA) produced by intraplantar capsaicin injection. In the untreated (naive) rat, we previously found that NSIA depends on activation of dopamine, nicotinic acetylcholine, and mu- and delta-opioid receptors in the nucleus accumbens. Rats chronically implanted with subcutaneous morphine pellets demonstrated tolerance to the antinociceptive effects of acute systemic morphine administration but did not show cross-tolerance to NSIA. Morphine pretreatment, however, significantly reduced NSIA dependence on intra-accumbens opioid receptors but not on dopamine or nicotinic acetylcholine receptors. As observed in naive rats, intra-accumbens microinjection of either the dopamine receptor antagonist flupentixol or the nicotinic receptor antagonist mecamylamine blocked NSIA in rats tolerant to the antinociceptive effects of morphine, but, in contrast to naive rats, intra-accumbens microinjection of either the mu-receptor antagonist Cys2,Tyr3,Orn5,Pen7-amide or the delta-receptor antagonist naltrindole failed to block NSIA. These findings suggest that although NSIA is dependent on nucleus accumbens opioid receptors in the naive state, this dependence disappears in rats tolerant to the antinociceptive effects of morphine, which may account for the lack of NSIA cross-tolerance. In separate experiments, intra-accumbens extracellular dopamine levels were measured using microdialysis. Dopamine levels increased after either capsaicin or systemic morphine administration in naive rats but only after capsaicin administration in morphine-pretreated rats. Thus, intra-accumbens dopamine release paralleled antinociceptive responses in naive and morphine-pretreated rats.
Examination of ossification of the distal radial epiphysis using magnetic resonance imaging. New insights for age estimation in young footballers in FIFA tournaments. | Alongside a variety of clinical and forensic issues, age determination in living persons also plays a decisive role in professional sport. Only methods of determining skeletal age that do not expose individuals to ionizing radiation are suitable for this purpose. The present study examines whether MRI of the distal radial epiphysis can be used to monitor internationally relevant age limits in professional football. The wrist area of 152 male footballers aged 18 to 22 years belonging to regional clubs was prospectively examined using MRI. The ossification stage of the distal radial epiphysis was then determined on the basis of established criteria used in assessing the maturity of the medial clavicular epiphysis. For the first time, we found evidence that the prevalence of threefold linear stratification (hypointense line, hyperintense line, and hypointense line) in the MRI appearance of the fused radial epiphyseal plate increases with chronological age. No subject in our study population showed an ossified epiphyseal plate without a verifiable epiphyseal scar. The presumably high minimum age of entry into this final stage of development (>22 years) must be verified in further studies. According to the present results, the fused epiphyseal plate of the distal radius provides maturation criteria that appear suitable for reliable MRI-based monitoring of all relevant age limits in international football.
Role of pulse shape in cell membrane electropermeabilization. | The role of the amplitude, number, and duration of unipolar rectangular electric pulses in cell membrane electropermeabilization in vitro has been the subject of several studies. Relative to unipolar rectangular pulses, improved efficiency has been reported for several modifications of the pulse shape: separate bipolar pulses, continuous bipolar waveforms, and sine-modulated pulses. In this paper, we present the results of a systematic study of the role of pulse shape in permeabilization, cell death, and molecular uptake. We first compared the efficiency of 1-ms unipolar pulses with rise and fall times ranging from 2 to 100 µs, observing no statistically significant difference. We then compared the efficiency of triangular, sine, and rectangular bipolar pulses, and finally the efficiency of sine-modulated unipolar pulses with different percentages of modulation. We show that the results of these experiments can be explained on the basis of the time during which the pulse amplitude exceeds a certain critical value.
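The paper's explanatory variable, time above a critical amplitude, is straightforward to compute for the pulse shapes compared. The sketch below does so for normalized rectangular, triangular, and half-sine pulses; the 1 ms duration and the 0.7 critical value are arbitrary choices for illustration, not the study's parameters.

```python
# Back-of-envelope calculation: how long each pulse shape spends above a
# critical amplitude. Pulse parameters here are arbitrary.
import numpy as np

t = np.linspace(0, 1e-3, 10_000)            # 1-ms pulse, seconds
E_crit = 0.7                                 # critical amplitude (normalized)

rectangular = np.ones_like(t)
triangular = 1 - np.abs(2 * t / t[-1] - 1)   # peaks at mid-pulse
half_sine = np.sin(np.pi * t / t[-1])

dt = t[1] - t[0]
for name, pulse in [("rectangular", rectangular),
                    ("triangular", triangular),
                    ("half-sine", half_sine)]:
    above = (pulse > E_crit).sum() * dt
    print(f"{name}: {above * 1e3:.3f} ms above critical amplitude")
```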
Enterprise agility and the enabling role of information technology | In turbulent environments, enterprise agility, that is, the ability of firms to sense environmental change and respond readily, is an important determinant of firm success. We define and deconstruct enterprise agility, delineate enterprise agility from similar concepts in the business research literature, explore the underlying capabilities that support enterprise agility, explicate the enabling role of information technology (IT) and digital options, and propose a method for measuring enterprise agility. The concepts in this paper are offered as foundational building blocks for the overall research program on enterprise agility and the enabling role of IT.
A case of cutaneous Rosai-Dorfman disease (CRDD) with underlying calvarial involvement and absence of BRAFV600E mutation | Rosai-Dorfman disease (RDD) is a benign histiocytic proliferation that most commonly presents with painless bilateral lymphadenopathy and constitutional symptoms such as fever, fatigue, and night sweats. RDD is considered by many to be a reaction pattern with several different manifestations, especially as clonality has not been documented to support its representing a neoplasm per se. Classic histologic features include histiocytes that are S100 protein positive and CD1a negative and that demonstrate emperipolesis. Cutaneous lesions occur in about 10% of patients; however, RDD limited to cutaneous involvement is particularly rare. Moreover, concomitant cutaneous RDD (CRDD) and bone RDD has rarely been reported in the English-language literature. Here, we present a case of CRDD on the scalp with underlying bony involvement.
The Global Spread of Neoliberalism and China's Pension Reform since 1978 | This article treats China's pension reform as part of the global spread of neoliberalism, arguing that the reform was a process in which neoliberal models based on individual accounts triumphed. Chinese policymakers emulated or learned from the ILO social insurance model in the 1980s, Singapore's central provident fund model until 1995, and the World Bank's model since 1995; national forces dominated the process until 1995, when the World Bank established itself as the driving force. China's pension reform has been far from successful, as shown by the difficulties in funding the individual accounts and the issue of fragmented coverage. But the neoliberal model will continue to exist, largely because a model is hard to abolish once adopted and because of continual compromises among policymakers.
Light as a central modulator of circadian rhythms, sleep and affect | Light has profoundly influenced the evolution of life on earth. As widely appreciated, light enables us to generate images of our environment. However, light — through intrinsically photosensitive retinal ganglion cells (ipRGCs) — also influences behaviours that are essential for our health and quality of life but are independent of image formation. These include the synchronization of the circadian clock to the solar day, tracking of seasonal changes and the regulation of sleep. Irregular light environments lead to problems in circadian rhythms and sleep, which eventually cause mood and learning deficits. Recently, it was found that irregular light can also directly affect mood and learning without producing major disruptions in circadian rhythms and sleep. In this Review, we discuss the indirect and direct influence of light on mood and learning, and provide a model for how light, the circadian clock and sleep interact to influence mood and cognitive functions. |
CATCH: A detecting algorithm for coalition attacks of hit inflation in internet advertising | As the Internet flourishes, online advertising becomes essential to marketing campaigns for business applications. To run a campaign, advertisers provide their advertisements to Internet publishers, and commissions are paid to the publishers based on the clicks made on the posted advertisements or the purchases of the advertised products. Since the payment given to a publisher is proportional to the number of clicks received for the advertisements the publisher posts, dishonest publishers are motivated to inflate the number of clicks on the advertisements hosted on their web sites. Because click fraud is a critical threat to reliable online advertising, online advertisers make efforts to prevent it effectively. However, the methods used for click fraud are also becoming more complex and sophisticated. In this paper, we study the problem of detecting coalition attacks of click fraud. The coalition attack is one of the latest sophisticated click-fraud techniques because, by joining a coalition, fraudsters can obtain not only more gain but also a lower probability of being detected. We introduce new definitions for the coalition and propose a novel algorithm called CATCH to find such coalitions. Extensive experiments with synthetic and real-life data sets confirm that our notion of coalition allows us to detect coalitions much more effectively than that of previous approaches.
Applications of Blockchain Technology beyond Cryptocurrency | The goal of this research paper is to summarise the literature on the implementation of Blockchain and similar digital ledger techniques in various domains beyond its application to cryptocurrency and to draw appropriate conclusions. Blockchain being a relatively new technology, a representative sample of research is presented, spanning the last ten years, starting from the early work in this field. Different types of usage of Blockchain and other digital ledger techniques, their challenges, applications, and security and privacy issues were investigated. Identifying the most propitious direction for future use of Blockchain beyond cryptocurrency is the main focus of the review study. Blockchain (BC), the technology behind the Bitcoin cryptocurrency system, is considered to be essential for forming the backbone for ensuring enhanced security and privacy for various applications in many other domains, including the Internet of Things (IoT) ecosystem. International research is currently being conducted in both academia and industry applying Blockchain in varied domains. The Proof-of-Work (PoW) mathematical challenge ensures BC security by maintaining a digital ledger of transactions that is considered to be unalterable. Furthermore, BC uses a changeable public key (PK) to record users' identity, which provides an extra layer of privacy.
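To make the PoW mechanism mentioned above concrete, here is a toy proof-of-work in the usual hash-puzzle style. The difficulty, serialization, and block payload are simplifications for illustration and do not correspond to any specific blockchain's format.

```python
# Toy proof-of-work: find a nonce whose SHA-256 hash has a given number of
# leading zero hex digits. Real systems use far harder, adaptive targets.
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce  # any party can re-hash once to verify
        nonce += 1

nonce = proof_of_work("block: alice->bob 5")
print("valid nonce:", nonce)
```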
MAPO: mining API usages from open source repositories | To improve software productivity, when constructing new software systems, developers often reuse existing class libraries or frameworks by invoking their APIs. Those APIs, however, are often complex and not well documented, posing barriers for developers to use them in new client code. To get familiar with how those APIs are used, developers may search the Web using a general search engine to find relevant documents or code examples. Developers can also use a source code search engine to search open source repositories for source files that use the same APIs. Nevertheless, the number of returned source files is often large. It is difficult for developers to learn API usages from a large number of returned results. In order to help developers understand API usages and write API client code more effectively, we have developed an API usage mining framework and its supporting tool called MAPO (for Mining API usages from Open source repositories). Given a query that describes a method, class, or package for an API, MAPO leverages the existing source code search engines to gather relevant source files and conducts data mining. The mining leads to a short list of frequent API usages for developers to inspect. MAPO currently consists of five components: a code search engine, a source code analyzer, a sequence preprocessor, a frequent sequence miner, and a frequent sequence postprocessor. We have examined the effectiveness of MAPO using a set of various queries. The preliminary results show that the framework is practical for providing informative and succinct API usage patterns.
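The frequent-sequence-mining component can be illustrated with a toy n-gram counter over API call sequences. The snippets, the support threshold, and the restriction to contiguous n-grams below are our simplifications; MAPO's actual miner and surrounding pipeline are considerably more sophisticated.

```python
# Rough sketch of frequent API-usage mining: count contiguous API call
# n-grams across snippets and keep those above a support threshold.
from collections import Counter
from itertools import islice

snippets = [
    ["File.open", "File.read", "File.close"],
    ["File.open", "File.read", "File.read", "File.close"],
    ["Socket.open", "Socket.close"],
]

def ngrams(seq, n):
    """All contiguous length-n subsequences of seq."""
    return zip(*(islice(seq, i, None) for i in range(n)))

min_support = 2
counts = Counter(g for s in snippets for n in (2, 3) for g in ngrams(s, n))
patterns = {g: c for g, c in counts.items() if c >= min_support}
print(patterns)  # e.g. ('File.open', 'File.read') -> 2
```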
A Survey on Applications of Augmented Reality | The term Augmented Reality (AR) refers to a set of technologies and devices able to enhance and improve human perception, thus bridging the gap between real and virtual space. Physical and artificial objects are mixed together in a hybrid space in which the user can move without constraints. This mediated reality pervades our everyday life: work, study, training, relaxation, and time spent traveling are just some of the moments in which AR applications can be used. This paper aims to provide an overview of current technologies and future trends of augmented reality, as well as to describe the main application domains, outlining benefits and open issues.
Grading and Classification of Anthracnose Fungal Disease of Fruits based on Statistical Texture Features | In this paper, lesion areas affected by anthracnose are segmented using image segmentation techniques and graded based on the percentage of affected area, and a neural network classifier is used to classify fruits as normal or anthracnose-affected. We have considered three types of fruit, namely mango, grape and pomegranate, for our work. The developed processing scheme consists of two phases. In the first phase, segmentation techniques, namely thresholding, region growing, K-means clustering and watershed, are employed to separate anthracnose-affected lesion areas from normal areas. These affected areas are then graded by calculating the percentage of affected area. In the second phase, texture features are extracted using the run-length matrix. These features are then used for classification with an ANN classifier. We have conducted experiments on a dataset of 600 fruit image samples. The classification accuracies for normal and anthracnose-affected fruit are 84.65% and 76.6%, respectively. The work finds application in developing machine vision systems for horticulture.
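The grading phase reduces to measuring the fraction of fruit pixels flagged as lesion. A minimal sketch follows, assuming a toy grayscale image, a fixed lesion threshold, and invented grade bands; the paper additionally applies region growing, K-means clustering, and watershed segmentation.

```python
# Sketch of the grading step: segment lesion pixels by a simple threshold,
# then grade by percentage of affected area. All values here are invented.
import numpy as np

img = np.random.rand(128, 128)        # stand-in for a grayscale fruit image
fruit_mask = np.ones_like(img, bool)  # assume the whole frame is fruit
lesion_mask = img < 0.25              # dark pixels as anthracnose lesions

pct = 100.0 * lesion_mask.sum() / fruit_mask.sum()
grade = "normal" if pct < 5 else "moderate" if pct < 25 else "severe"
print(f"affected area: {pct:.1f}% -> grade: {grade}")
```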
When helping helps: autonomous motivation for prosocial behavior and its influence on well-being for the helper and recipient. | Self-determination theory posits that the degree to which a prosocial act is volitional or autonomous predicts its effect on well-being and that psychological need satisfaction mediates this relation. Four studies tested the impact of autonomous and controlled motivation for helping others on well-being and explored effects on other outcomes of helping for both helpers and recipients. Study 1 used a diary method to assess daily relations between prosocial behaviors and helper well-being and tested mediating effects of basic psychological need satisfaction. Study 2 examined the effect of choice on motivation and consequences of autonomous versus controlled helping using an experimental design. Study 3 examined the consequences of autonomous versus controlled helping for both helpers and recipients in a dyadic task. Finally, Study 4 manipulated motivation to predict helper and recipient outcomes. Findings support the idea that autonomous motivation for helping yields benefits for both helper and recipient through greater need satisfaction. Limitations and implications are discussed. |
Relative adrenal insufficiency in dogs with sepsis. | BACKGROUND
A syndrome of relative adrenal insufficiency has been identified in septic humans, and is associated with hypotension and death. Relative adrenal insufficiency is generally associated with basal serum cortisol concentration within or above the reference range and a blunted cortisol response to adrenocorticotropic hormone administration. It is unknown whether relative adrenal insufficiency occurs in septic dogs.
HYPOTHESIS
That relative adrenal insufficiency occurs in septic dogs, and that relative adrenal insufficiency is associated with hypotension and mortality.
ANIMALS
Thirty-three septic dogs admitted to a small animal intensive care unit.
METHODS
Dogs were included in the study if they had a known or suspected infectious disease and had systemic inflammatory response syndrome. Dogs were excluded if they had a disease or medication history expected to affect the hypothalamic-pituitary-adrenal axis. Serum cortisol and endogenous plasma adrenocorticotropic hormone concentrations were measured before, and serum cortisol concentration measured 1 hour after, intramuscular administration of 250 μg of cosyntropin per dog. The change in cortisol concentration (delta-cortisol) before and after cosyntropin administration was determined in each dog.
RESULTS
Hypotension was associated with lower delta-cortisol values (OR 1.3; CI 1.0-1.9; P = .029). A delta-cortisol cutoff of 3.0 μg/dL was most accurate for predicting hypotension, survival to discharge, and 28-day survival. The rate of death in dogs with delta-cortisol ≤ 3 μg/dL was 4.1 times that of dogs with delta-cortisol > 3 μg/dL (RR 4.1; CI 1.5-12.3; P = .01).
CONCLUSIONS AND CLINICAL RELEVANCE
Delta-cortisol ≤ 3 μg/dL after adrenocorticotropic hormone administration is associated with systemic hypotension and decreased survival in septic dogs.
A data-centric approach to synchronization | Concurrency-related errors, such as data races, are frustratingly difficult to track down and eliminate in large object-oriented programs. Traditional approaches to preventing data races rely on protecting instruction sequences with synchronization operations. Such control-centric approaches are inherently brittle, as the burden is on the programmer to ensure that all concurrently accessed memory locations are consistently protected. Data-centric synchronization is an alternative approach that offloads some of the work onto the language implementation. Data-centric synchronization groups fields of objects into atomic sets to indicate that these fields must always be updated atomically. Each atomic set has associated units of work, that is, code fragments that preserve the consistency of that atomic set. Synchronization operations are added automatically by the compiler. We present an extension to the Java programming language that integrates annotations for data-centric concurrency control. The resulting language, called AJ, relies on a type system that enables separate compilation and supports atomic sets that span multiple objects, as well as full encapsulation for more efficient code generation. We evaluate our proposal by refactoring classes from standard libraries, as well as a number of multithreaded benchmarks, to use atomic sets. Our results suggest that data-centric synchronization is easy to use and enjoys low annotation overhead, while successfully preventing data races. Moreover, experiments on the SPECjbb benchmark suggest that acceptable performance can be achieved with a modest amount of tuning.
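AJ itself is a Java extension with compiler support, but the programming model can be mimicked in a few lines of Python: declare which fields belong together and let a decorator insert the locking that AJ's compiler would generate. The names and structure below are illustrative only, not AJ's syntax or semantics.

```python
# Python emulation of the data-centric idea: fields in an "atomic set"
# are only touched inside units of work, which acquire the set's lock.
import threading
from functools import wraps

def unit_of_work(method):
    """Wrap a method so it runs under the object's atomic-set lock
    (inserted automatically by the compiler in AJ, manually here)."""
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        with self._atomic_set_lock:
            return method(self, *args, **kwargs)
    return wrapper

class Account:
    # atomic set: {balance, history} must always be updated together
    def __init__(self):
        self._atomic_set_lock = threading.RLock()
        self.balance = 0
        self.history = []

    @unit_of_work
    def deposit(self, amount):
        self.balance += amount
        self.history.append(amount)  # consistent with balance by construction
```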