title: string (8–300 characters)
abstract: string (0–10k characters)
Still happy after all these years: research frontiers on subjective well-being in later life.
OBJECTIVES: Understanding the factors that promote quality of life in old age has been a staple of social gerontology since its inception and remains a significant theme in aging research. The purpose of this article was to review the state of the science with regard to subjective well-being (SWB) in later life and to identify promising directions for future research. METHODS: This article is based on a review of literature on SWB in aging, sociological, and psychological journals. Although the materials reviewed date back to the early 1960s, the emphasis is on publications in the past decade. RESULTS: Research to date paints an effective portrait of the epidemiology of SWB in late life and the factors associated with it. Although the research base is large, causal inferences about the determinants of SWB remain problematic. Two recent contributions to the research base are highlighted as emerging issues: studies of secular trends in SWB and cross-national studies. DISCUSSION: The review ends with discussion of priority issues for future research.
Renaissance configurations: Voices/bodies/spaces, 1580-1690
Preface: Renaissance Configurations G. McMullan - PART I: TURNING THE KEY - 'Infinite Riches in a Little Room': Marlowe and the Aesthetics of the Closet J. Knowles - Shakespeare 'Creepes into the Women's Closets about Bedtime': Women Reading in a Room of Their Own S. Roberts - 'A Book, and Solitariness': Melancholia, Gender and Literary Subjectivity in Mary Wroth's Urania H. Hackett - PART II: DESIRING DIFFERENCE - Lyly and Lesbianism: Mysteries of the Closet in Sappho and Phao M. Pincombe - 'With Phoebus' Amorous Pinches Black': the Desirability of Difference in Early Modern Culture K. Chedgzoy - A Rose for Emilia: Collaborative Relations in The Two Noble Kinsmen G. Kinsmen - PART III: NAMING/LOCATING - Space for the Self: Place, Persona, and Self-Projection in The Comedy of Errors and Pericles A. Piesse - 'Calling Things By Their Names': Troping Prostitution, Politics, and The Dutch Courtesan M. Thornton Burnett - PART IV: VOICING THE PAST - Spectres and Sisters: Mary Sidney and the 'Perennial Puzzle' of Renaissance Women's Writing S. Trill - What Echo Says in Seventeenth-Century Women's Poetry: Wroth, Behn S.J. Wiseman - Restoring the Renaissance: Margaret Cavendish and Katherine Philips R. Ballaster - Afterword A. Thompson
AWEsome: An open-source test platform for airborne wind energy systems
In this paper we present AWEsome (Airborne Wind Energy Standardized Open-source Model Environment), a test platform for airborne wind energy systems that consists of low-cost hardware and is entirely based on open-source software. It can hence be used without the need for large financial investments, in particular by research groups and startups, to acquire first experiences in their flight operations, to test novel control strategies or technical designs, or for usage in public relations. Our system consists of a modified off-the-shelf model aircraft that is controlled by the Pixhawk autopilot hardware and the ArduPilot software for fixed-wing aircraft. The aircraft is attached to the ground by a tether. We have implemented new flight modes for the autonomous tethered flight of the aircraft along periodic patterns. We present the principal functionality of our algorithms. We report on first successful tests of these modes in real flights.
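As an illustration of the kind of periodic pattern such a flight mode has to track, the sketch below generates figure-eight position setpoints on the sphere defined by the tether; the lemniscate parametrisation and all numeric values are assumptions for illustration, not taken from the AWEsome implementation.

```python
import numpy as np

def figure_eight_setpoints(tether_len=50.0, elev0=np.deg2rad(30), az_amp=np.deg2rad(25),
                           elev_amp=np.deg2rad(8), n=200):
    """Sample one period of a figure-eight pattern on the tether sphere.

    The aircraft is constrained to a sphere of radius `tether_len` around the
    ground station; azimuth and elevation follow a 1:2 Lissajous (lemniscate-like)
    parametrisation, which traces a figure eight.
    """
    s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    az = az_amp * np.sin(s)                   # sweep left-right
    elev = elev0 + elev_amp * np.sin(2 * s)   # twice the frequency -> figure eight
    x = tether_len * np.cos(elev) * np.cos(az)
    y = tether_len * np.cos(elev) * np.sin(az)
    z = tether_len * np.sin(elev)
    return np.stack([x, y, z], axis=1)

waypoints = figure_eight_setpoints()
print(waypoints.shape)  # (200, 3) positions a tracking controller would follow
```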
Does Ti4+ Ratio Improve the Physical Properties of CdxCo1−x+tTitFe2−2tO4?
The microstructure and magnetic properties of Ti-substituted CoCd ferrites of the general formula CdxCo1−x+tTitFe2−2tO4, x = 0.20, 0.00⩽t⩽0.25, have been reported. The ferrite samples were prepared by the standard double sintering ceramic technique and structural analysis was carried out using X‐ray diffraction. The spinel structure is confirmed for all concentrations. Some physical properties (such as lattice parameter, density and porosity) have also been calculated. The temperature and magnetic field dependence of the susceptibility is illustrated for all Ti contents. Both experimental and theoretical values of the effective magnetic moment increased with increasing Ti content. The Curie temperature increases with the addition of Ti up to t = 0.15, after which it decreases. The effect of mechanical pressure on the dc resistivity at room temperature enhances the use of the samples in some applications.
Catalytic site inhibition of insulin-degrading enzyme by a small molecule induces glucose intolerance in mice
Insulin-degrading enzyme (IDE) is a protease that cleaves insulin and other bioactive peptides such as amyloid-β. Knockout and genetic studies have linked IDE to Alzheimer's disease and type-2 diabetes. As the major insulin-degrading protease, IDE is a candidate drug target in diabetes. Here we have used kinetic target-guided synthesis to design the first catalytic site inhibitor of IDE suitable for in vivo studies (BDM44768). Crystallographic and small angle X-ray scattering analyses show that it locks IDE in a closed conformation. Among a panel of metalloproteases, BDM44768 selectively inhibits IDE. Acute treatment of mice with BDM44768 increases insulin signalling and surprisingly impairs glucose tolerance in an IDE-dependent manner. These results confirm that IDE is involved in pathways that modulate short-term glucose homeostasis, but cast doubt on the general usefulness of the inhibition of IDE catalytic activity to treat diabetes.
Classical fear conditioning in functional neuroimaging
Classical conditioning, the simplest form of associative learning, is one of the most studied paradigms in behavioural psychology. Since the formal description of classical conditioning by Pavlov, lesion studies in animals have identified a number of anatomical structures involved in, and necessary for, classical conditioning. In the 1980s, with the advent of functional brain imaging techniques, particularly positron emission tomography (PET), it became possible to study the functional anatomy of classical conditioning in humans. The development of functional magnetic resonance imaging (fMRI), in particular single-trial or event-related fMRI, has now considerably advanced the potential of neuroimaging for the study of this form of learning. Recent event-related fMRI and PET studies are adding crucial data to the current discussion about the putative role of the amygdala in classical fear conditioning in humans.
Leveraging RF-channel fluctuation for activity recognition: Active and passive systems, continuous and RSSI-based signal features
We consider the recognition of activities from passive entities by analysing radio-frequency (RF)-channel fluctuation. In particular, we focus on the recognition of activities by active Software-defined-radio (SDR)-based Device-free Activity Recognition (DFAR) systems and investigate the localisation of the activities performed, the generalisation of features for alternative environments and the distinction between walking speeds. Furthermore, we conduct case studies for Received Signal Strength (RSS)-based active and continuous signal-based passive systems to explore the decrease in accuracy in these related cases. All systems are compared to an accelerometer-based recognition system.
Spinors in Quantum Geometrical Theory
Spinors have played an essential but enigmatic role in modern physics since their discovery. Now that quantum-gravitational theories have started to become available, the inclusion of a description of spin in the development is natural and may bring about a profound understanding of the mathematical structure of fundamental physics. A program to attempt this is laid out here. Concepts from a known quantum-geometrical theory are reviewed: (1) Classical physics is replaced by a suitable geometry as a fundamental starting point for quantum mechanics. (2) In this context, a resolution is found for the enigma of wave-particle duality. (3) It is shown how to couple the quantum density to the geometrical density. (4) The mechanical gauge is introduced to allow dimensional reduction. (5) Absolute geometrical equivalence is enforced. The concordant five-dimensional quantum-geometrical theory is summarized to provide an orderly basis for the introduction of spinors. It is supposed that the Pauli--Dirac theory is adaptable. A search is begun for a description that will generate spinors as a natural tangent space. Interactions other than gravity and electrodynamics should then appear intrinsically. These are conjectured to be weak effects for electrons.
Learner-Centeredness and EFL Instruction in Vietnam: A Case Study.
Although learner-centeredness has been widely applied in instruction around the world, this approach has only been cautiously adopted in English as a Foreign Language (EFL) teaching at some institutions in Vietnam. Taking a social constructivist view, this case study explores how a learner-centered perspective is employed in EFL teaching at a teacher training college in Vietnam. The study is based on data generated with EFL teachers and students of an advanced level class through classroom observations, in-depth interviews, group discussions and document reviews. The data have been qualitatively analysed to show how learner-centeredness is successfully employed to get the students actively involved in learning. Implications are drawn in regard to EFL teaching and learning, and also curriculum and materials development.
Chapter 6.1 Paleoarchean Gneisses in the Minnesota River Valley and Northern Michigan, USA
Publisher Summary This chapter elaborates the Paleoarchean gneisses in the Minnesota river valley and northern Michigan, USA. Meso- to Paleoarchean gneisses occur along the southern margin of the Neoarchean Superior Craton. The most extensive exposure of these rocks is in the Minnesota River Valley (MRV) of southwestern Minnesota, but there are also exposures in northern Michigan. Aeromagnetic mapping of southwestern Minnesota, and detailed gravity and magnetic modeling within the MRV have delineated four crustal blocks in the MRV that are bounded by three east-northeast-trending geophysical anomalies that roughly parallel the Morris fault segment. Most of the work in the MRV has concentrated on exposures in the Montevideo block and the Morton block, which are separated by the Yellow Medicine shear zone. The data indicate that major common tectonothermal events, recognized in the zircon geochronology, occurred in both the Morton and the Montevideo blocks. A mafic intrusion in the granite gneiss at Granite Falls has a well-constrained age of 3140 Ma, indicating another intrusive magmatic event at that time, and zircon overgrowths of this age are also seen in zircons from the Morton Gneiss.
Sustaining and broadening intervention impact: a longitudinal randomized trial of 3 adolescent risk reduction approaches.
OBJECTIVE To determine whether the addition of a parental monitoring intervention (Informed Parents and Children Together [ImPACT]) alone or with "boosters" could enhance (either broaden or sustain or both) the effect of a small group, face-to-face adolescent risk reduction intervention, Focus on Kids (FOK). METHODS A longitudinal, randomized, community-based cohort study was conducted in 35 low-income, community-based, in-town settings. A total of 817 black youths aged 12 to 16 years at baseline were studied. After completion of baseline measures, youths were randomized to receive a face-to-face intervention alone (FOK only), a face-to-face intervention and a parental monitoring intervention (FOK plus ImPACT), or both of the above plus boosters (FOK plus ImPACT plus boosters). Risk and protective behaviors were assessed at 6 and 12 months after intervention. RESULTS At 6 months' follow-up, youths in families that were assigned to FOK plus ImPACT reported significantly lower rates of sexual intercourse, sex without a condom, alcohol use, and cigarette use and marginally lower rates of "risky sexual behavior" compared with youths in families that were assigned to FOK only. At 12 months after intervention, rates of alcohol and marijuana use were significantly lower and cigarette use and overall risk intention were marginally lower among FOK plus ImPACT youths compared with FOK only youths. With regard to the boosters delivered at 7 and 10 months, 2 risk behaviors (use of crack/cocaine and drug selling) were significantly lower among the youths who were assigned to receive the additional boosters compared with youths without the boosters. The rates of the other risk behaviors and intentions did not differ significantly. CONCLUSIONS The results of this randomized, controlled trial indicate that the inclusion of a parental monitoring intervention affords additional protection from involvement in adolescent risk behaviors 6 and 12 months later compared with the provision of an intervention that targets adolescents only. At the same time, the results of the present study do not provide sufficient evidence that booster sessions further improve targeted behaviors enough to include them in a combined parent and youth intervention.
Risk factors for oral cancer in northeast Thailand.
The oral cavity is a common site of head and neck cancer, and oral cancer is relatively frequent in Northeast Thailand. The objective of this hospital-based, case-control study was to determine associations with risk factors. A total of 104 oral cancer cases diagnosed between July 2010 and April 2011 in 3 hospitals were matched with control subjects by age, sex and hospital. Data were collected by personal interview. There were significant associations between oral cancer and tobacco smoking (OR=4.47; 95%CI=2.00 to 9.99), alcohol use among women (OR=4.16; 95%CI=1.70 to 10.69), and betel chewing (OR=9.01; 95%CI=3.83 to 21.22), and all three showed dose-response effects. Smoking is rare among Thai women (none of the control women were smokers), but betel chewing, especially among older women, is relatively common. We did not find any association between practicing oral sex and oral cancer.
Climate-induced changes to the ancestral population size of two Patagonian galaxiids: the influence of glacial cycling.
Patagonia is one of the few areas in the Southern Hemisphere to have been directly influenced by Quaternary glaciers. In this study, we evaluate the influence that Quaternary glacial ice had on the genetic diversity of two congeneric fish species, the diadromous Galaxias maculatus and the nondiadromous Galaxias platei, using multilocus estimates of effective population size through time. Mid-Quaternary glaciations had far-reaching consequences for both species. Galaxias maculatus and G. platei each experienced severe genetic bottlenecks during the period when Patagonia ice sheet advance reached its maximum positions c. 1.1-0.6 Ma. Concordant drops in effective size during this time suggest that range sizes were under similar constraints. It is therefore unlikely that coastal (brackish/marine) environments served as a significant refuge for G. maculatus during glacial periods. An earlier onset of population declines for G. platei suggests that this species was vulnerable to modest glacial advances. Declines in effective sizes were continuous for both species and lasted into the late-Pleistocene. However, G. maculatus exhibited a strong population recovery during the late-Quaternary (c. 400,000 bp). Unusually long and warm interglacials associated with the late-Quaternary may have helped to facilitate a strong population rebound in this primarily coastal species.
Predictive control approach to autonomous vehicle steering
A model predictive control (MPC) approach to active steering is presented for autonomous vehicle systems. The controller is designed to stabilize a vehicle along a desired path while rejecting wind gusts and fulfilling its physical constraints. Simulation results of a side wind rejection scenario and a double lane change maneuver on slippery surfaces show the benefits of the systematic control methodology used. A trade-off between the vehicle speed and the required preview on the desired path for vehicle stabilization is highlighted.
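To make the MPC formulation concrete, here is a minimal sketch of a receding-horizon steering problem for a linearized lateral-error model, solved with cvxpy; the model, weights, horizon, and constraint values are illustrative assumptions and not the authors' vehicle model or controller.

```python
import numpy as np
import cvxpy as cp

# Hypothetical discrete-time lateral error model: x = [lateral error, heading error]
dt, v = 0.05, 15.0                      # sample time [s], longitudinal speed [m/s]
A = np.array([[1.0, v * dt], [0.0, 1.0]])
B = np.array([[0.0], [v * dt / 2.7]])   # 2.7 m ~ assumed wheelbase
N = 20                                  # prediction horizon
Q = np.diag([1.0, 0.5])                 # state weights
R = np.array([[0.1]])                   # steering effort weight
delta_max = np.deg2rad(25)              # steering limit (physical constraint)

x0 = np.array([1.0, 0.0])               # start 1 m off the desired path
x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k + 1], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= delta_max]

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first steering command [rad]:", float(u.value[0, 0]))
```

In a receding-horizon loop, only the first command would be applied before re-solving at the next sample.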
Lipofibromatous Hamartoma: A Review Article
SYNONYMS, KEY WORDS, AND RELATED TERMS Lipofibromatous hamartoma, fibrolipomatous hamartoma, intraneural hamartoma, neural fibrolipomatosis, neural fibrolipoma, extraneural fibromas, neurofibromatosis, lipomas, intraneural lipomas, macrodystrophia lipomatosa, Klippel-Trenaunay-Weber syndrome, congenital lymphedema, hypertrophic mononeuritis, hereditary hypertrophic interstitial neuritis of Dejerine-Sottas syndrome. INTRODUCTION Lipofibromatous hamartoma (LFH) is a rare, benign fibrofatty tumor composed of a proliferation of mature adipocytes within peripheral nerves, forming a palpable neurogenic mass. Although LFH was first described in the English literature in 1953, there are fewer than 60 documented cases in the recent medical literature.1 It affects the median nerve in 66 to 80% of cases, causing pain and sensory and motor deficits in the affected nerve distribution. In the late 1950s and into the next decade, a number of authors reported cases of extraneural fibromas causing compression neuropathy of peripheral nerves; however, they were yet to be described in relation to one another.2,3,4,5,6,7,8,9 In 1969, Johnson and Bonfiglio coined the term "lipofibromatous hamartoma", accurately describing the entity and its relation to carpal tunnel syndrome (CTS).10 While there is an unexplained predilection for the median nerve, cases of fatty infiltration of the brachial plexus and the ulnar, radial, peroneal and plantar nerves have also been reported.11,14 To date, several terms are used to describe this condition, including fibrolipomatous hamartoma, intraneural hamartoma, neural fibrolipomatosis and neural fibrolipoma. The differential diagnosis includes ganglion cysts, vascular malformations, traumatic neuroma and lipomas.13 In 1994, Guthikonda et al. described four types of lipomatous masses which vary depending on their location within the parent nerve: soft tissue lipomas, intraneural lipomas, macrodystrophia lipomatosa and lipofibromatous hamartomas.13 CLINICAL PROBLEM Patients typically present with gradually enlarging nontender lesions in the distribution of the affected nerve. Since LFH often involves the median nerve, the presentation of median nerve LFH shares considerable overlap with carpal tunnel syndrome. Affected individuals complain of numbness and tingling along the volar aspect of the wrist and hand. Motor deficits are a late finding. FREQUENCY A congenital origin of LFH with or without macrodactyly has been previously suggested, but results have been mixed. Most cases occur within the first three decades of life, with a mean age of 22.3 years in isolated cases and 22.0 years in cases with macrodactyly.14 Silverman and Enzinger reported 26 cases of upper and lower extremity LFH, 7 with macrodactyly and 19 without. Combining their work and subsequent studies, it was determined that there is a 2:1 female-to-male ratio in cases with macrodactyly and a 1:1 ratio in cases without.14,15 Complicating the scenario even further is the considerable overlap with Klippel-Trenaunay-Weber syndrome, congenital lymphedema, hypertrophic mononeuritis, and hereditary hypertrophic interstitial neuritis of Dejerine-Sottas.10 ETIOLOGY Although there have been suggestions of a congenital origin of LFH, the etiology remains unclear. Cases arising after trauma have been reported, all showing the characteristic fatty infiltrate on biopsy. The pathophysiology of LFH is unknown. INDICATIONS Indications for surgical intervention vary case by case.
Due to the intimate nerve involvement, LFH is often accompanied by a degree of neurologic morbidity. If the risk of nerve damage is low and nerve involvement is minimal, surgical debulking for cosmetic reasons can be undertaken. However, in the face of advanced nerve involvement, the indications for intervention are progressive and unrelenting neurological deficits. LFH most commonly affects the median nerve, in 66 to 80% of cases, causing pain and sensory and motor deficits in the affected nerve distribution.2 There have also been cases of LFH affecting the brachial plexus and the ulnar, radial, peroneal and plantar nerves.11,14 There is no explanation of why the median nerve is most commonly affected. A fundamental knowledge of the anatomical distribution of nerves helps distinguish LFH from
Avoidable hospitalizations for diabetes: comorbidity risks.
This study examined the risk for avoidable diabetes hospitalizations associated with comorbid conditions and other risk variables. A retrospective analysis was conducted of hospitalizations with a primary diagnosis of diabetes in a 2004 sample of short-stay general hospitals in the United States (N = 97,526). Data were drawn from the Health Care Utilization Project National Inpatient Sample. Avoidable hospitalizations were defined using criteria from the Agency for Healthcare Research and Quality to analyze 2 types of ambulatory care sensitive conditions (ACSCs): short-term complications and uncontrolled diabetes. Maternal cases, patients younger than age 18, and transfers from other hospitals were excluded. Avoidable hospitalization was estimated using maximum likelihood logistic regression analysis, where independent variables included patient age, gender, comorbidities, uninsurance status, patient's rural-urban residence and income estimate, and hospital variables. Models were identified using multiple runs on 3 random quartiles and validated using the fourth quartile. Costs were estimated from charge data using cost-to-charge ratios. Results indicated that these 2 ACSCs accounted for 35,312 or 36% of all diabetes hospitalizations. Multiple types of comorbid conditions were related to risk for avoidable diabetes hospitalizations. Estimated costs and length of stay were lower among these types of avoidable hospitalizations compared to other diabetes hospitalizations; however, total estimated nationwide costs for 2004 short-term complications and uncontrolled diabetes hospitalizations totaled over $1.3 billion. Recommendations are made for how disease management programs for diabetes could incorporate treatment for comorbid conditions to reduce hospitalization risk.
Normative theories of argumentation: are some norms better than others?
Norms—that is, specifications of what we ought to do—play a critical role in the study of informal argumentation, as they do in studies of judgment, decision-making and reasoning more generally. Specifically, they guide a recurring theme: are people rational? Though rules and standards have been central to the study of reasoning, and behavior more generally, there has been little discussion within psychology about why (or indeed if) they should be considered normative despite the considerable philosophical literature that bears on this topic. In the current paper, we ask what makes something a norm, with consideration both of norms in general and a specific example: norms for informal argumentation. We conclude that it is both possible and desirable to invoke norms for rational argument, and that a Bayesian approach provides solid normative principles with which to do so.
Visualizing and Understanding Deep Texture Representations
A number of recent approaches have used deep convolutional neural networks (CNNs) to build texture representations. Nevertheless, it is still unclear how these models represent texture and invariances to categorical variations. This work conducts a systematic evaluation of recent CNN-based texture descriptors for recognition and attempts to understand the nature of invariances captured by these representations. First we show that the recently proposed bilinear CNN model [25] is an excellent general-purpose texture descriptor and compares favorably to other CNN-based descriptors on various texture and scene recognition benchmarks. The model is translationally invariant and, without requiring spatial jittering of the data, obtains better accuracy on the ImageNet dataset than corresponding models trained with spatial jittering. Based on recent work [13, 28] we propose a technique to visualize pre-images, providing a means for understanding categorical properties that are captured by these representations. Finally, we show preliminary results on how a unified parametric model of texture analysis and synthesis can be used for attribute-based image manipulation, e.g. to make an image more swirly, honeycombed, or knitted. The source code and additional visualizations are available at http://vis-www.cs.umass.edu/texture.
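A minimal numpy sketch of the bilinear pooling step underlying such texture descriptors is shown below (sum-pooled outer products of feature maps followed by signed square-root and L2 normalisation); it is a schematic of the general technique, not the reference implementation from the paper.

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling of two feature maps of shape (H, W, C_a) and (H, W, C_b).

    Location-wise outer products are averaged over the image, then passed
    through a signed square-root and L2 normalisation, as is common for
    bilinear CNN texture descriptors.
    """
    h, w, ca = feat_a.shape
    cb = feat_b.shape[2]
    a = feat_a.reshape(h * w, ca)
    b = feat_b.reshape(h * w, cb)
    phi = a.T @ b / (h * w)                      # (C_a, C_b) pooled outer product
    phi = phi.reshape(-1)
    phi = np.sign(phi) * np.sqrt(np.abs(phi))    # signed square root
    return phi / (np.linalg.norm(phi) + 1e-12)   # L2 normalisation

# toy example with random values standing in for CNN activations
fa = np.random.rand(14, 14, 64).astype(np.float32)
descriptor = bilinear_pool(fa, fa)               # symmetric variant reuses the same map
print(descriptor.shape)                          # (4096,)
```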
Cognitive Reflection and Decision Making
People with higher cognitive ability (or "IQ") differ from those with lower cognitive ability in a variety of important and unimportant ways. On average, they live longer, earn more, have larger working memories and faster reaction times, and are more susceptible to visual illusions (Jensen, 1998). Despite the diversity of phenomena related to IQ, few have attempted to understand—or even describe—its influences on judgment and decision making. Studies on time preference, risk preference, probability weighting, ambiguity aversion, endowment effects, anchoring and other widely researched topics rarely make any reference to the possible effects of cognitive abilities (or cognitive traits). Decision researchers may neglect cognitive ability because they are more interested in the average effect of some experimental manipulation. On this view, individual differences (in intelligence or anything else) are regarded as a nuisance—as just another source of "unexplained" variance. Second, most studies are conducted on college undergraduates, who are widely perceived as fairly homogenous. Third, characterizing performance differences on cognitive tasks requires terms ("IQ" and "aptitudes" and such) that many object to because of their association with discriminatory policies. In short, researchers may be reluctant to study something they do not find interesting, that is not perceived to vary much within the subject pool conveniently obtained, and that will just get them into trouble anyway. But as Lubinski and Humphreys (1997) note, a neglected aspect does not cease to operate because it is neglected, and there is no good reason for ignoring the possibility that general intelligence or various more specific cognitive abilities are important causal determinants of decision making. To provoke interest in this
Application of support vector machines for T-cell epitopes prediction
MOTIVATION The T-cell receptor, a major histocompatibility complex (MHC) molecule, and a bound antigenic peptide play major roles in the process of antigen-specific T-cell activation. T-cell recognition was long considered exquisitely specific. Recent data also indicate that it is highly flexible, and one receptor may recognize thousands of different peptides. Deciphering the patterns of peptides that elicit an MHC-restricted T-cell response is critical for vaccine development. RESULTS For the first time we develop a support vector machine (SVM) for T-cell epitope prediction with an MHC type I restricted T-cell clone. Using cross-validation, we demonstrate that SVMs can be trained on relatively small data sets to provide predictions more accurate than those based on previously published methods or on MHC binding. SUPPLEMENTARY INFORMATION Data for 203 synthesized peptides is available at http://linus.nci.nih.gov/Data/LAU203_Peptide.pdf
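A hedged sketch of the general workflow (encode fixed-length peptides, train an SVM, estimate accuracy by cross-validation) using scikit-learn is given below; the one-hot encoding, kernel choice, and the randomly generated toy peptides are assumptions for illustration, not the authors' data or parameters.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"          # 20 standard amino acids

def one_hot(peptide):
    """Encode a fixed-length peptide as a flat one-hot vector (length x 20)."""
    m = np.zeros((len(peptide), len(AA)))
    for i, aa in enumerate(peptide):
        m[i, AA.index(aa)] = 1.0
    return m.ravel()

# toy data standing in for synthesized 10-mer peptides with response labels
rng = np.random.default_rng(0)
peptides = ["".join(rng.choice(list(AA), 10)) for _ in range(60)]
labels = rng.integers(0, 2, size=60)             # 1 = elicits a T-cell response

X = np.array([one_hot(p) for p in peptides])
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, labels, cv=5)   # cross-validated accuracy
print(scores.mean())
```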
An Experimental Evaluation of Point-of-interest Recommendation in Location-based Social Networks
Point-of-interest (POI) recommendation is an important service to Location-Based Social Networks (LBSNs) that can benefit both users and businesses. In recent years, a number of POI recommender systems have been proposed, but there is still a lack of systematic comparison thereof. In this paper, we provide an all-around evaluation of 12 state-of-the-art POI recommendation models. From the evaluation, we obtain several important findings, based on which we can better understand and utilize POI recommendation models in various scenarios. We anticipate this work to provide readers with an overall picture of the cutting-edge research on POI recommendation.
Tutorial on NoSQL Databases
NoSQL databases are a new breed of databases developed to overcome the drawbacks of RDBMS. The goal of NoSQL is to provide scalability and availability and to meet other requirements of cloud computing. The common motivation of NoSQL design is to meet scalability and failover requirements. In most NoSQL database systems, data is partitioned and replicated across multiple nodes. Inherently, most of them use either Google's MapReduce or the Hadoop Distributed File System or Hadoop MapReduce for data collection. Cassandra, HBase and MongoDB are the most widely used and can be regarded as representative of the NoSQL world. This tutorial discusses the features of NoSQL databases in the light of the CAP theorem.
Miniaturized microstrip slotted holy shaped patch structure for high frequency applications
The three-stage design of a microstrip slotted holy shaped patch structure intended to serve high frequency applications in the frequency range from 19.52 GHz to 31.5 GHz is proposed in this paper. The geometrical stages use an FR4 epoxy substrate with small dimensions of 10 mm × 8.7 mm × 1.6 mm and employ a coaxial feeding technique. An analysis of the three design stages has been done in HFSS-15 to obtain the corresponding reflection coefficient, bandwidth, radiation pattern, gain and VSWR. The graphical as well as tabulated comparison of the standard parameters has been included in the results section.
The impact of QTL allele frequency distribution on the accuracy of genomic prediction
The accuracy of genomic prediction of quantitative traits based on single nucleotide polymorphism (SNP) markers depends, among other factors, on the allele frequency distribution of quantitative trait loci (QTL). Therefore, the aim of this study was to investigate different QTL allele frequency distributions and their effect on the accuracy of genomic estimated breeding values (GEBVs) using genomic best linear unbiased prediction (GBLUP) in simulated data. A population of 1000 individuals composed of 500 males and 500 females, as well as a genome of 1000 cM consisting of 10 chromosomes and with a mutation rate of 2.5 × 10−5 per locus, was simulated. QTL frequencies were derived from five distributions of allele frequency including constant, uniform, U-shaped, L-shaped and minor allele frequency (MAF) less than 0.01 (lowMAF). QTL effects were generated from a standard normal distribution. The number of QTL was assumed to be 500, and the simulation was done in 10 replications. The genomic prediction accuracy in the first validation generation was 0.59 and 0.57 for the constant and uniform allele frequency distributions, respectively. Results showed that the highest accuracy of GEBVs was obtained with the constant and uniform distributions, followed by the L-shaped, U-shaped and lowMAF QTL allele frequency distributions. The regression of true breeding values on predicted breeding values in the first validation generation was 0.94, 0.92, 0.88, 0.85 and 0.75 for the constant, uniform, L-shaped, U-shaped and lowMAF distributions, respectively. Despite the different values of the regression coefficients, in all scenarios GEBVs were biased downward. Overall, results showed that when QTL had a lower MAF relative to SNP markers, a low linkage disequilibrium (LD) was observed, which had a negative effect on the accuracy of GEBVs. Hence, the effect of the QTL allele frequency distribution on prediction accuracy can be alleviated through using a genomic relationship matrix weighted by MAF or an LD-adjusted relationship matrix.
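The following toy numpy sketch illustrates the GBLUP machinery the study relies on: a VanRaden-style genomic relationship matrix, GEBVs from the mixed-model solution, accuracy as the correlation with true breeding values, and bias as the regression of TBV on GEBV. The population size, QTL effects, and variance ratio below are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_snp, n_qtl = 400, 2000, 100
M = rng.binomial(2, 0.3, size=(n_ind, n_snp)).astype(float)   # SNP genotypes coded 0/1/2

# true breeding values from a random subset of loci acting as QTL
qtl = rng.choice(n_snp, n_qtl, replace=False)
beta = rng.normal(size=n_qtl)
tbv = M[:, qtl] @ beta
y = tbv + rng.normal(scale=tbv.std(), size=n_ind)              # phenotypes, h2 ~ 0.5

# VanRaden genomic relationship matrix
p = M.mean(axis=0) / 2.0
Z = M - 2.0 * p
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

# GBLUP with an assumed variance ratio lambda = sigma_e^2 / sigma_g^2 = 1
lam = 1.0
gebv = G @ np.linalg.solve(G + lam * np.eye(n_ind), y - y.mean())

acc = np.corrcoef(tbv, gebv)[0, 1]        # prediction accuracy
b = np.polyfit(gebv, tbv, 1)[0]           # regression of TBV on GEBV (bias check)
print(f"accuracy={acc:.2f}, regression coefficient={b:.2f}")
```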
Improved adaptive sparse channel estimation based on the least mean square algorithm
Least mean square (LMS)-based adaptive algorithms have attracted much attention owing to their low computational complexity and robust recovery capability. To exploit channel sparsity, LMS-based adaptive sparse channel estimation methods, e.g., zero-attracting LMS (ZA-LMS), reweighted zero-attracting LMS (RZA-LMS) and Lp-norm sparse LMS (LP-LMS), have also been proposed. To take full advantage of channel sparsity, in this paper we propose several improved adaptive sparse channel estimation methods using Lp-norm normalized LMS (LP-NLMS) and L0-norm normalized LMS (L0-NLMS). Compared with previous methods, the effectiveness of the proposed methods is confirmed by computer simulations.
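For reference, a minimal numpy sketch of the zero-attracting LMS (ZA-LMS) baseline mentioned above is shown here: the standard LMS update plus an l1 "zero attractor" term that shrinks inactive taps. The step sizes and the toy sparse channel are assumptions for illustration, not values from the paper.

```python
import numpy as np

def za_lms(x, d, n_taps=16, mu=0.01, rho=5e-4):
    """Zero-attracting LMS (ZA-LMS) estimate of a sparse FIR channel.

    Standard LMS update plus an l1 zero-attractor term -rho*sign(w) that
    pulls inactive taps toward zero, which helps on sparse channels.
    """
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-n_taps+1]
        e = d[n] - w @ xn                    # a priori estimation error
        w += mu * e * xn - rho * np.sign(w)  # LMS step + sparsity attractor
    return w

# toy sparse channel with 3 active taps out of 16
rng = np.random.default_rng(2)
h = np.zeros(16); h[[1, 5, 11]] = [0.9, -0.5, 0.3]
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(za_lms(x, d), 2))             # should approximate h
```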
Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach
Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists, and probably the most crucial one, is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study.
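The paper demonstrates the procedure in R; the sketch below only illustrates the three-step split/analyze/meta-analyze logic on a toy correlation in Python, pooling shard estimates with a fixed-effect inverse-variance (Fisher z) meta-analysis. The simulated data and shard count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_total = 1_000_000
x = rng.standard_normal(n_total)
y = 0.3 * x + rng.standard_normal(n_total)      # true correlation ~ 0.29

# 1) Split: shards small enough to analyze with ordinary tools
shards = np.array_split(np.arange(n_total), 100)

# 2) Analyze: estimate the correlation within each shard
z, var = [], []
for idx in shards:
    r = np.corrcoef(x[idx], y[idx])[0, 1]
    z.append(np.arctanh(r))                     # Fisher z transform
    var.append(1.0 / (len(idx) - 3))            # approximate sampling variance of z

# 3) Meta-analyze: fixed-effect inverse-variance pooling of the shard estimates
w = 1.0 / np.array(var)
z_pooled = np.sum(w * np.array(z)) / np.sum(w)
print("pooled correlation:", np.tanh(z_pooled))
```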
Elites and societal citizenship: a relational theory of postmodern pluralism
From the social sciences perspective, the study of elites has mainly concerned the political system. Pareto's elite theory and the theory of social differentiation converge in the «relational theory of society». The article proves the hypothesis that in modern democratic societies elites are not only differentiated but new elites also emerge; these new elites can be called «societal elites».
Description of the YaCy Distributed Web Search Engine
Distributed web search engines have been proposed to mitigate the privacy issues that arise in centralized search systems. These issues include censorship, disclosure of sensitive queries to the search server and to any third parties with whom the search service operator might share data, as well as the lack of transparency of proprietary search algorithms. YaCy is a deployed distributed search engine that aims to provide censorship resistance and privacy to its users. Its user base has been steadily increasing and it is currently being used by several hundreds of people every day. Unfortunately, no document exists that thoroughly describes how YaCy actually works. We therefore investigated the source code of YaCy and summarize our findings and our explanation of YaCy in this document. We confirmed with the YaCy community that our description of YaCy is accurate.
Accuracy of a pregnancy-associated glycoprotein ELISA to determine pregnancy status of lactating dairy cows twenty-seven days after timed artificial insemination.
To determine the accuracy of a pregnancy-associated glycoprotein (PAG) ELISA in identifying pregnancy status 27 d after timed artificial insemination (TAI), blood samples were collected from lactating Holstein cows (n = 1,079) 27 d after their first, second, and third postpartum TAI services. Pregnancy diagnosis by transrectal ultrasonography (TU) was performed immediately after blood sample collection, and pregnancy outcomes by TU served as a standard to test the accuracy of the PAG ELISA. Pregnancy outcomes based on the PAG ELISA and TU that agreed were considered correct, whereas the pregnancy status of cows in which pregnancy outcomes between PAG and TU disagreed were reassessed by TU 5 d later. The accuracy of pregnancy diagnosis was less than expected when using TU 27 d after TAI (93.7 to 97.8%), especially when pregnancy outcomes were based on visualization of chorioallantoic fluid and a corpus luteum but when an embryo was not visualized. The accuracy of PAG ELISA outcomes 27 d after TAI was 93.7, 95.4, and 96.2% for first, second, and third postpartum TAI services, respectively. Statistical agreement (kappa) between TU and the PAG ELISA 27 d after TAI was 0.87 to 0.90. Pregnancy outcomes based on the PAG ELISA had a high negative predictive value, indicating that the probability of incorrectly administering PGF2α to pregnant cows would be low if this test were implemented on a commercial dairy.
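The accuracy, agreement, and predictive-value figures quoted above come from comparing the two tests in a 2x2 table; the helper below shows how such metrics (sensitivity, specificity, predictive values, Cohen's kappa) are computed. The counts in the example are hypothetical, not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values, and Cohen's kappa from a 2x2 table."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                     # true positives among truly pregnant
    spec = tn / (tn + fp)                     # true negatives among truly open
    ppv = tp / (tp + fp)                      # positive predictive value
    npv = tn / (tn + fn)                      # negative predictive value
    po = (tp + tn) / n                        # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa

# hypothetical counts; the paper reports percentages, not this exact table
print(diagnostic_metrics(tp=320, fp=25, fn=15, tn=640))
```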
From archive to corpus: transcription and annotation in the creation of signed language corpora
The essential characteristic of a signed language corpus is that it has been annotated, and not, contrary to the practice of many signed language researchers, that it has been transcribed. Annotations are necessary for corpus-based investigations of signed or spoken languages. Multi-media annotation software can now be used to transform a recording into a machine-readable text without it first being necessary to transcribe the text, provided that linguistic units are uniquely identified and annotations subsequently appended to these units. These unique identifiers are here referred to as ID-glosses. The use of ID-glosses is only possible if a reference lexical database (i.e., dictionary) exists as the result of prior foundation research into the lexicon. In short, the creators of signed language corpora should prioritize annotation above transcription, and ensure that signs are identified using unique gloss-based annotations. Without this the whole rationale for corpus-creation is undermined.
Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy
Augmented reality (AR) in surgery consists in the fusion of synthetic computer-generated images (3D virtual model) obtained from medical imaging preoperative workup and real-time patient images in order to visualize unapparent anatomical details. The 3D model could be used for a preoperative planning of the procedure. The potential of AR navigation as a tool to improve safety of the surgical dissection is outlined for robotic hepatectomy. Three patients underwent a fully robotic and AR-assisted hepatic segmentectomy. The 3D virtual anatomical model was obtained using a thoracoabdominal CT scan with a customary software (VR-RENDER®, IRCAD). The model was then processed using a VR-RENDER® plug-in application, the Virtual Surgical Planning (VSP®, IRCAD), to delineate surgical resection planes including the elective ligature of vascular structures. Deformations associated with pneumoperitoneum were also simulated. The virtual model was superimposed to the operative field. A computer scientist manually registered virtual and real images using a video mixer (MX 70; Panasonic, Secaucus, NJ) in real time. Two totally robotic AR segmentectomy V and one segmentectomy VI were performed. AR allowed for the precise and safe recognition of all major vascular structures during the procedure. Total time required to obtain AR was 8 min (range 6–10 min). Each registration (alignment of the vascular anatomy) required a few seconds. Hepatic pedicle clamping was never performed. At the end of the procedure, the remnant liver was correctly vascularized. Resection margins were negative in all cases. The postoperative period was uneventful without perioperative transfusion. AR is a valuable navigation tool which may enhance the ability to achieve safe surgical resection during robotic hepatectomy.
A Sub-1V 32nA Process, Voltage and Temperature Invariant Voltage Reference Circuit
This paper presents a novel process, voltage and temperature (PVT) invariant voltage reference generator using subthreshold MOSFETs. The proposed circuit uses a weighted average of PTAT and CTAT voltages at the zero temperature coefficient point. The proposed circuit has been designed and optimized in 180 nm mixed-mode CMOS technology. Simulation results show that the output voltage of the proposed voltage reference generator varies by only ±0.85% across process corners and a temperature range of 0°C to 100°C. The temperature and power supply sensitivity of the reference voltage is 135 ppm/°C and 0.54%/V, respectively. The proposed circuit consumes only 19 nW of DC power and operates at supply voltages as low as 600 mV.
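The weighted-average idea can be written down directly: choose the weight so the PTAT and CTAT temperature slopes cancel at the zero-TC point. The slopes and component voltages in the snippet are illustrative assumptions, not the paper's 180 nm device data.

```python
# Weighted average of a PTAT and a CTAT voltage so the first-order temperature
# coefficients cancel. All numbers below are illustrative assumptions.
k_ptat = +0.8e-3    # V/degC, proportional-to-absolute-temperature slope
k_ctat = -1.9e-3    # V/degC, complementary-to-absolute-temperature slope

# alpha*k_ptat + (1 - alpha)*k_ctat = 0  =>  alpha = -k_ctat / (k_ptat - k_ctat)
alpha = -k_ctat / (k_ptat - k_ctat)

v_ptat_25, v_ctat_25 = 0.15, 0.55        # assumed component voltages at 25 degC
v_ref = alpha * v_ptat_25 + (1 - alpha) * v_ctat_25
print(f"alpha = {alpha:.3f}, V_ref ~ {v_ref:.3f} V at the zero-TC point")
```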
Hydrogels for tissue engineering: scaffold design variables and applications.
Polymer scaffolds have many different functions in the field of tissue engineering. They are applied as space filling agents, as delivery vehicles for bioactive molecules, and as three-dimensional structures that organize cells and present stimuli to direct the formation of a desired tissue. Much of the success of scaffolds in these roles hinges on finding an appropriate material to address the critical physical, mass transport, and biological design variables inherent to each application. Hydrogels are an appealing scaffold material because they are structurally similar to the extracellular matrix of many tissues, can often be processed under relatively mild conditions, and may be delivered in a minimally invasive manner. Consequently, hydrogels have been utilized as scaffold materials for drug and growth factor delivery, engineering tissue replacements, and a variety of other applications.
ANOSIP: anonymizing the SIP protocol
Enhancing anonymity in the Session Initiation Protocol (SIP) is much more than sealing participants' identities. It requires methods to unlink the communication parties and relax their proximity identification. These requirements should be fulfilled under several prerequisites, such as time limitation for session establishment, involvement of several functional entities for session management, inter-domain communications and support of streaming services when the session is established. In this paper we propose the usage of a privacy enhancement framework, called Mist, as a solution to the anonymity issue in SIP. For achieving anonymity, the original Mist architecture was modified to be adapted in the SIP framework. We evaluate the adapted Mist framework to SIP and measure how efficiently it supports anonymity features.
Pegylated interferon alpha-2b in patients with acute hepatitis C.
Kamal SM, Fouly AE, Kamel RR, Hockenjos B, Al Tawil A, Khalifa KE, He Q, Koziel MJ, El Naggar KM, Rasenack J, Afdhal NH. (Division of Gastroenterology and Liver Disease Center, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA; Department of Gastroenterology and Liver Diseases, Ain Shams University, Cairo, Egypt; Department of Gastroenterology and Hepatology, University of Freiburg, Germany.) Peginterferon alfa-2b therapy in acute hepatitis C: Impact of onset of therapy on sustained virologic response. Gastroenterology 2006;130:632–8.
ONLINE MARKETING AND CONSUMER PURCHASE BEHAVIOUR: A STUDY OF NIGERIAN FIRMS
Businesses the world over are spending more on, and participating more in, online marketing than ever before. Understanding the consumer behavioural factors that influence e-marketing effectiveness is crucial. While some researchers have addressed this issue, few studies draw their conclusions from the customers' perspective, and studies of developing countries in this regard remain fewer than expected. This work seeks to empirically validate, by analyzing Nigerian firms engaged in internet marketing, the impact of online marketing on consumers' purchase behaviour. We seek to understand to what extent the functionality of internet infrastructure and internet security issues affect consumers' decisions to purchase. The survey research used a structured questionnaire to elicit data from selected firms in Lagos State, Nigeria, and Cronbach's alpha was used to determine the reliability of the questionnaire. The data were analyzed using simple regression and the hypotheses drawn were tested. The findings show that online marketing has influenced consumer purchase decisions in Nigerian firms. There is a significant relationship between consumer purchase decisions and internet infrastructure in Nigeria, and a relationship also exists between internet security and consumer purchase behaviour. These results imply that these factors influence consumer purchase behaviour.
High-Throughput and Language-Agnostic Entity Disambiguation and Linking on User Generated Data
The Entity Disambiguation and Linking (EDL) task matches entity mentions in text to a unique Knowledge Base (KB) identifier such as a Wikipedia or Freebase id. It plays a critical role in the construction of a high quality information network, and can be further leveraged for a variety of information retrieval and NLP tasks such as text categorization and document tagging. EDL is a complex and challenging problem due to the ambiguity of the mentions and to real world text being multi-lingual. Moreover, EDL systems need to have high throughput and should be lightweight in order to scale to large datasets and run on off-the-shelf machines. More importantly, these systems need to be able to extract and disambiguate dense annotations from the data in order to enable an Information Retrieval or Extraction task running on the data to be more efficient and accurate. In order to address all these challenges, we present the Lithium EDL system and algorithm: a high-throughput, lightweight, language-agnostic EDL system that extracts and correctly disambiguates 75% more entities than state-of-the-art EDL systems while being significantly faster.
Face Recognition Based on Image Enhancement and Gabor Features
Variations in lighting conditions, pose and expression make face recognition an even more challenging and difficult task. This paper presents a face recognition approach using image enhancement and the Gabor wavelet transform. In face recognition, image preprocessing is a key step since it strongly affects feature extraction and recognition. Logarithm transformation and normalization are performed on face images captured under various lighting conditions. The approach involves convolving a face image with a series of Gabor wavelets at different scales, locations, and orientations and extracting features from the Gabor filtered images. Significant improvements are also observed when the preprocessed and Gabor filtered images are used for feature extraction instead of the original images. The approach achieves 94.4% recognition accuracy using only 160 features of a face image. Experimental results show that the proposed approach improves face recognition performance when training and testing on images captured under variable illumination and expression.
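A compact sketch of the described pipeline (logarithm transformation and normalisation for illumination, followed by a Gabor filter bank and block-averaged responses as features) is given below; the kernel sizes, scales, orientations, and resulting feature count are assumptions for illustration, not the paper's 160-feature configuration.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(ksize, sigma, theta, wavelength):
    """Real part of a Gabor kernel at orientation `theta` with the given wavelength."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(img, wavelengths=(4, 8), orientations=4, grid=4):
    """Log-normalised image -> Gabor magnitude responses -> coarse grid means."""
    img = np.log1p(img.astype(float))
    img = (img - img.mean()) / (img.std() + 1e-8)        # illumination normalisation
    feats = []
    for wl in wavelengths:
        for k in range(orientations):
            kern = gabor_kernel(15, wl / 2, np.pi * k / orientations, wl)
            resp = np.abs(fftconvolve(img, kern, mode="same"))
            h, w = resp.shape                             # block-average the response
            blocks = resp[:h - h % grid, :w - w % grid].reshape(grid, h // grid, grid, w // grid)
            feats.append(blocks.mean(axis=(1, 3)).ravel())
    return np.concatenate(feats)

face = np.random.rand(64, 64)              # stand-in for a cropped face image
print(gabor_features(face).shape)          # 2 wavelengths x 4 orientations x 16 cells = 128 features
```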
Total Data Quality Management and Total Information Quality Management Applied to Customer Relationship Management
Data quality (DQ) is an important issue for modern organizations, mainly for decision-making based on information, using solutions such as CRM, Business Analytics, and Big Data. In order to obtain quality data, it is necessary to implement methods, processes, and specific techniques that handle information as a product, with well established, controlled, and managed production processes. The literature provides several types of data quality management methodologies that treat structured data, and few that treat semi-structured and unstructured data. Choosing the methodology to be adopted is one of the major issues faced by organizations when challenged to treat data quality in a systematic manner. This paper makes a comparative analysis between the TDQM (Total Data Quality Management) and TIQM (Total Information Quality Management) approaches, focusing on data quality problems in the context of a CRM (Customer Relationship Management) application. The analysis identifies the strengths and weaknesses of each methodology and suggests the most suitable one for the CRM scenario.
Women's views and postpartum follow-up in the CHIPS Trial (Control of Hypertension in Pregnancy Study).
OBJECTIVE To compare women's views about blood pressure (BP) control in CHIPS (Control of Hypertension In Pregnancy Study) (NCT01192412). DESIGN Quantitative and qualitative analysis of questionnaire responses. SETTING International randomised trial (94 sites, 15 countries). POPULATION/SAMPLE 911 (92.9%) women randomised to 'tight' (target diastolic blood pressure, 85mmHg) or 'less tight' (target diastolic blood pressure, 100mmHg) who completed questionnaires. METHODS A questionnaire was administered at ∼6-12 weeks postpartum regarding post-discharge morbidity and views about trial participation. Questionnaires were administered by the site co-ordinator, and contact was made by phone, home or clinic visit; rarely, data was collected from medical records. Quantitative analyses were Chi-square or Fisher's exact test for categorical variables, mixed effects multinomial logistic regression to adjust for confounders, and p<0.001 for statistical significance. NVivo software was used for thematic analysis of women's views. MAIN OUTCOME MEASURES Satisfaction, measured as willingness to have the same treatment in another pregnancy or recommend that treatment to a friend. RESULTS Among the 533 women in 'tight' (N=265) vs. 'less tight' (N=268) control who provided comments for qualitative analysis, women in 'tight' (vs. 'less tight') control made fewer positive comments about the amount of medication taken (5 vs. 28 women, respectively) and intensity of BP monitoring (7 vs. 17, respectively). However, this did not translate into less willingness to either have the same treatment in another pregnancy (434, 95.8% vs. 423, 92.4%, respectively; p=0.14) or recommend that treatment to a friend (435, 96.0% and 428, 93.4%, respectively; p=0.17). Importantly, although satisfaction remained high among women with an adverse outcome, those in 'tight' control who suffered an adverse outcome (vs. those who did not) were not consistently less satisfied, whereas this was not the case among women in 'less tight' control among whom satisfaction was consistently lower for the CHIPS primary outcome (p<0.001), severe hypertension (p≤0.01), and pre-eclampsia (p<0.001). CONCLUSIONS Women in 'tight' (vs. 'less tight') control were equally satisfied with their care, and more so in the face of adverse perinatal or maternal outcomes.
Dual polarization antennas for UHF RFID readers
This paper presents various concepts of switching polarization in patch antenna dedicated for UHF RFID readers. Proposed designs allow for switching between linear and circular polarization. The first design does not require electronic switching as the polarization can be changed by choosing one of two available feeding terminals. Two remaining designs use PIN diode or FET SPDT switch.
Caffeic acid phenethyl ester protects cerebellar granule neurons (CGNs) against glutamate-induced neurotoxicity
Caffeic acid phenethyl ester (CAPE) is an active component of propolis obtained from honeybee hives and has been found to have anti-mitogenic, anti-carcinogenic, anti-inflammatory, immunomodulatory, and antioxidant properties. Recent reports suggest that CAPE also has a neuroprotective property against ischemic injury. Since excitotoxicity may play an important role in ischemia, in this study we investigated whether CAPE could directly protect neurons against excitotoxic insult. We treated cultured rat cerebellar granule neurons (CGNs) with excitotoxic concentrations of glutamate in the presence or absence of CAPE and found that CAPE markedly protected neurons against glutamate-induced neuronal death in a concentration-dependent fashion. Glutamate-induced CGN death is associated with time-dependent activation of caspase-3 and phosphorylation of p38, both of which can be blocked by CAPE. Treating CGNs with specific inhibitors of these two enzymes together exerts a synergistic neuroprotective effect, similar to the neuroprotective effect of CAPE exposure. These results suggest that CAPE is able to block glutamate-induced excitotoxicity by inhibiting the phosphorylation of p38 and caspase-3 activation. This finding may further our understanding of the mechanism of glutamate-induced neuronal death and of CAPE-induced neuroprotection against excitotoxicity.
Usefulness of two instruments in assessing depression among elderly Mexicans in population studies and for primary care.
OBJECTIVE To determine the psychometric qualities of the CES-DR and GDS scales in the elderly and compare them to clinical psychiatric diagnoses. MATERIAL AND METHODS The first phase consisted of home interviews for determining the psychometric qualities of the GDS and CES-DR scales. In the second phase, psychiatrists conducted diagnostic interviews. The sample consisted of 534 participants older than 60 years of age insured by the Mexican Institute of Social Security. RESULTS First phase: Cronbach's alpha for the GDS was 0.87 and 0.86 for CES-DR. The GDS factorial analysis found eight factors that could explain 53.5% of the total variance and nine factors that explained 57.9% in the CES-DR. Second phase: Compared to the psychiatric diagnoses, CES-DR reported a sensitivity of 82% and a specificity of 49.2%; GDS reported 53.8% sensitivity and 78.9% specificity. CONCLUSIONS CES-DR and GDS scales have high reliability and adequate validity but the CES-DR reports higher sensitivity.
Emergency department and 'Google flu trends' data as syndromic surveillance indicators for seasonal influenza.
We evaluated syndromic indicators of influenza disease activity developed using emergency department (ED) data - total ED visits attributed to influenza-like illness (ILI) ('ED ILI volume') and percentage of visits attributed to ILI ('ED ILI percent') - and Google flu trends (GFT) data (ILI cases/100 000 physician visits). Congruity and correlation among these indicators, and between these indicators and the weekly count of laboratory-confirmed influenza in Manitoba, were assessed graphically and using linear regression models. Both ED and GFT data performed well as syndromic indicators of influenza activity, and were highly correlated with each other in real time. The strongest correlations between virological data and ED ILI volume and ED ILI percent, respectively, were 0·77 and 0·71. The strongest correlation of GFT was 0·74. Seasonal influenza activity may be effectively monitored using ED and GFT data.
Knowledge management approaches in managing agricultural indigenous and exogenous knowledge in Tanzania
Purpose – The purpose of this study is to assess the application of knowledge management (KM) models in managing and integrating indigenous and exogenous knowledge for improved farming activities in Tanzania, by examining the management of indigenous knowledge (IK), access and use of exogenous knowledge, and the relevance of policies, legal frameworks, information and communication technologies (ICTs), and culture in KM practices in the communities. Design/methodology/approach – Semi-structured interviews were used to collect qualitative and quantitative data from 181 farmers in six districts of Tanzania. Four IK policy makers were also interviewed. Findings – The study demonstrated that Western-based KM models should be applied cautiously in a developing world context. Both indigenous and exogenous knowledge were acquired and shared in different contexts. IK was shared within a local, small and spontaneous network, while exogenous knowledge was shared in a wide context, where formal sources of knowledge focused on disseminating exogenous knowledge more than IK. Policies, legal frameworks, ICTs and culture determined access to knowledge in the communities. The study thus developed a KM model that would be applicable in the social context of developing countries. Research limitations/implications – The study indicates a need to test the developed model against existing KM models, in a specific context such as local communities in the developing world, to determine if it is better at explaining the link between KM principles and KM processes. Originality/value – The proposed KM model provides a deep understanding of the management and integration of agricultural indigenous and exogenous knowledge in the rural areas of developing countries. Previous KM models were developed in the context of organizational environments, and thus they failed to address the needs of rural communities. The proposed model thus advances theory on KM in developing countries, and provides linkages between KM processes and KM principles.
The effect of LUT and cluster size on deep-submicron FPGA performance and density
We use a fully timing-driven experimental flow [4] [15] in which a set of benchmark circuits are synthesized into different cluster-based [2] [3] [15] logic block architectures, which contain groups of LUTs and flip-flops. We look across all architectures with LUT sizes in the range of 2 inputs to 7 inputs, and cluster size from 1 to 10 LUTs. In order to judge the quality of the architecture we do both detailed circuit level design and measure the demand of routing resources for every circuit in each architecture. These experiments have resulted in several key contributions. First, we have experimentally determined the relationship between the number of inputs required for a cluster as a function of the LUT size (K) and cluster size (N). Second, contrary to previous results, we have shown that when the cluster size is greater than four, that smaller LUTs (size 2 and 3) are almost as area efficient as 4-input LUTs, as suggested in [11]. However, our results also show that the performance of FPGAs with these small LUT sizes is significantly worse (by almost a factor of 2) than larger LUTs. Hence, as measured by area-delay product, or by performance, these would be a bad choice. Also, we have discovered that LUT sizes of 5 and 6 produce much better area results than were previously believed. Finally, our results show that a LUT size of 4 to 6 and cluster size of between 4 and 10 provides the best area-delay product for an FPGA.
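As a rough illustration of the final selection step, the sketch below ranks (LUT size K, cluster size N) architectures by area-delay product; the area and delay figures are placeholders, not the paper's measurements.

```python
# Sketch of the architecture-selection arithmetic: given measured area and critical-path
# delay for each (LUT size K, cluster size N) pair, rank architectures by area-delay
# product. The numbers below are illustrative placeholders only.
results = {
    # (K, N): (area in min-width transistor areas, critical-path delay in ns)
    (2, 8): (9.0e6, 32.0),
    (3, 8): (8.6e6, 28.0),
    (4, 8): (8.5e6, 21.0),
    (5, 8): (8.3e6, 18.5),
    (6, 8): (8.7e6, 17.8),
    (7, 8): (9.8e6, 17.5),
}

ranked = sorted(results.items(), key=lambda kv: kv[1][0] * kv[1][1])
for (k, n), (area, delay) in ranked:
    print(f"K={k}, N={n}: area-delay product = {area * delay:.3g}")
```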
Learn the New, Keep the Old: Extending Pretrained Models with New Anatomy and Images
Deep learning has been widely accepted as a promising solution for medical image segmentation, given a sufficiently large representative dataset of images with corresponding annotations. With ever-increasing amounts of annotated medical data, it is infeasible to always train a learning method from scratch on all data. Doing so is also bound to hit computational limits, e.g., the memory or runtime feasible for training. Incremental learning can be a potential solution, where new information (images or anatomy) is introduced iteratively. Nevertheless, to preserve the collective information, it is essential to keep some “important” (i.e., representative) images and annotations from the past while adding new information. In this paper, we introduce a framework for applying incremental learning to segmentation and propose novel methods for selecting representative data therein. We comparatively evaluate our methods in different scenarios using MR images and validate the increased learning capacity obtained with our methods.
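A minimal sketch of one plausible exemplar-selection rule (keeping the samples closest to each class centroid) is shown below; it is a generic illustration and not the selection methods proposed in the paper.

```python
# Minimal sketch of exemplar selection for incremental learning: when new classes
# (or anatomy) arrive, keep only the most "representative" old samples -- here, the
# ones closest to their class mean in feature space -- and train on exemplars + new data.
import numpy as np

def select_exemplars(features, labels, per_class):
    """Return indices of the per_class samples nearest to each class centroid."""
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        keep.extend(idx[np.argsort(dists)[:per_class]])
    return np.array(keep)

rng = np.random.default_rng(1)
old_feats = rng.normal(size=(500, 64))        # features of previously seen images
old_labels = rng.integers(0, 3, size=500)     # previously learned anatomy labels
exemplar_idx = select_exemplars(old_feats, old_labels, per_class=20)
print("kept", exemplar_idx.size, "exemplars out of", old_feats.shape[0])
# Training would then use np.concatenate([old_feats[exemplar_idx], new_feats]), etc.
```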
Fructan and hormone connections
Plants rely on “reserve” (stored) carbon (C) for growth and survival when newly synthesized C becomes limited. Besides starch, a classic yet recalcitrant C reserve, fructans, a class of sucrose-derived soluble fructosyl-oligosaccharides, represent a major store of C in many temperate plant species including the economically important Asteraceae and Poaceae families (Hendry, 1993). Dicots typically accumulate inulin-type fructans as long-term storage (underground organs) whilst grasses and cereals accumulate fructans as short-term reserves in above-ground parts (Pollock and Cairns, 1991; Van Laere and Van den Ende, 2002). Unlike chloroplast-based water-insoluble starch, fructans are semi-soluble, possess flexible structures (Phelps, 1965; Valluru and Van den Ende, 2008), can be synthesized at low temperatures (Pollock and Cairns, 1991), and are degraded by a single type of fructan hydrolase, the fructan exohydrolases (FEHs). Unlike starch, which is stored in plastids, fructans are stored in vacuoles, which is physically less stressful to the active constituents of, and allows more C synthesis by, the photosynthetic cell; this may differ in dicots, where fructans do not typically accumulate in green parts. Plants synthesize diverse fructan types exhibiting a wide range of functions (for review, see Valluru and Van den Ende, 2008; Van den Ende, 2013). Fructan biosynthetic enzymes, fructosyltransferases (FTs), which evolved from vacuolar-type acid invertases (VIs) (Altenbach et al., 2009), use sucrose (Suc) as a substrate, whereby an organ-specific Suc threshold triggers FT genes at the transcriptional level (Lu et al., 2002). Though the regulatory mechanism of Suc signal transduction remains largely elusive, transcription factors (TFs) can be suspected to mediate such inductive processes either by directly binding and stimulating FT genes (e.g., the TaMYB13 TF binds to the FT genes 1-SST and 6-SFT; Xue et al., 2011) or by up-regulating vacuole-based proteins (e.g., TaMYB13 up-regulates the vacuolar processing enzyme Taγ-VPE1, whose mRNA levels are highly correlated with FT mRNA levels in wheat stems; Kooiker et al., 2013). In addition, protein phosphatases (PP2A; Martínez-Noël et al., 2009) and the second messenger Ca2+ (Martínez-Noël et al., 2006) mediate Suc-induction of fructan synthesis in wheat, although the underlying mechanisms remain largely undefined. The cationic role of Ca2+ in fructan synthesis is somewhat counterintuitive because Suc induces a Ca2+ efflux from the vacuole (Furuichi et al., 2001), the site of fructan synthesis. Perhaps Suc might ensure a more alkaline (less acidic) vacuolar environment [Suc induces the slowly activating vacuolar (SV) ion channel, which transiently effluxes vacuolar Ca2+ (Pottosin and Schönknecht, 2007)], favoring fructan synthesis, since fructans are thought to be less stable at low pH (Flores-Maltos et al., 2014). Some of the protein mediators involved in Suc-mediated induction of fructan synthesis, including the Ca2+ signaling components calmodulin (CaM), calcineurin B-like (CBL1), and Ca2+-dependent protein kinases (CDPKs), are closely involved in hormone signaling and environmental stress (Ludwig et al., 2004).
Effects of a FLAP inhibitor, GSK2190915, in asthmatics with high sputum neutrophils.
Patients with refractory asthma frequently have neutrophilic airway inflammation and respond poorly to inhaled corticosteroids. This study evaluated the effects of an oral 5-lipoxygenase-activating protein (FLAP) inhibitor, GSK2190915, in patients with asthma and elevated sputum neutrophils. Patients received 14 (range 13-16) days treatment with GSK2190915 100 mg and placebo with a minimum 14 day washout in a double-blind, cross-over, randomised design (N = 14). Sputum induction was performed twice pre-dose in each treatment period to confirm sputum neutrophilia, and twice at the end of each treatment period. The primary endpoint was the percentage and absolute sputum neutrophil count, averaged for end-of-treatment visits. GSK2190915 did not significantly reduce mean percentage sputum neutrophils (GSK2190915-placebo difference [95% CI]: -0.9 [-12.0, 10.3]), or mean sputum neutrophil counts (GSK2190915/placebo ratio [95% CI]: 1.06 [0.43, 2.61]). GSK2190915 resulted in a marked suppression (>90%) of sputum LTB4 and urine LTE4, but did not alter clinical endpoints. There were no safety issues. Despite suppressing the target mediator LTB4, FLAP inhibitor GSK2190915 had no short-term effect on sputum cell counts or clinical endpoints in patients with asthma and sputum neutrophilia.
Creating a parametric muqarnas utilizing algorithmic software
Muqarnas is one of the most beautiful elements of Persian architecture, used to span the great height of entrance spaces or domes in old mosques and religious schools. Although the rapid growth of digital design software has brought many innovations and much ease of use to the world of architecture, this specific vernacular art has fallen into a state of abandonment. This article focuses on modelling Persian patterns using Grasshopper3D, a Rhinoceros plug-in, and by demonstrating the process it aims to create a basis for a full 3D parametric muqarnas application. With such software, it is possible to generate the desired patterns with the help of today’s algorithmic technology, revitalize muqarnas and other Persian patterns, and redefine them as contemporary elements of Persian architecture.
Detection and Localization of Image Forgeries Using Resampling Features and Deep Learning
Resampling is an important signature of manipulated images. In this paper, we propose two methods to detect and localize image manipulations based on a combination of resampling features and deep learning. In the first method, the Radon transform of resampling features is computed on overlapping image patches. Deep learning classifiers and a Gaussian conditional random field model are then used to create a heatmap. Tampered regions are located using a Random Walker segmentation method. In the second method, resampling features computed on overlapping image patches are passed through a Long Short-Term Memory (LSTM) based network for classification and localization. We compare the detection and localization performance of both these methods. Our experimental results show that both techniques are effective in detecting and localizing digital image forgeries.
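The patch-based localization idea can be sketched as follows; the feature extractor and classifier here are trivial placeholders standing in for the resampling-feature/Radon-transform front end and the trained deep models.

```python
# Sketch of the patch-based localization pipeline: slide a window over the image,
# compute a feature vector per patch (a trivial placeholder where the paper uses the
# Radon transform of resampling features), score each patch with a classifier, and
# assemble the scores into a heatmap. All names are illustrative.
import numpy as np

def patch_features(patch):
    # Placeholder features; the real method uses resampling features + Radon transform.
    return np.array([patch.mean(), patch.std()])

def forgery_heatmap(image, classifier_score, patch=64, stride=32):
    h, w = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            window = image[i*stride:i*stride+patch, j*stride:j*stride+patch]
            heat[i, j] = classifier_score(patch_features(window))
    return heat

# Example with a dummy "classifier": brighter regions get higher tamper scores.
img = np.random.rand(256, 256)
heat = forgery_heatmap(img, classifier_score=lambda f: f[0])
print(heat.shape)  # (7, 7) grid of patch scores
```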
Additive Manufacturing Technologies: 3D Printing, Rapid Prototyping, and Direct Digital Manufacturing
“Additive Manufacturing Technologies: 3D Printing, Rapid Prototyping, and Direct Digital Manufacturing” is authored by Ian Gibson, David Rosen and Brent Stucker, who collectively possess 60 years’ experience in the field of additive manufacturing (AM). This is the second edition of the book which aims to include current developments and innovations in a rapidly changing field. Its primary aim is to serve as a teaching aid for developing and established curricula, therefore becoming an all-encompassing introductory text for this purpose. It is also noted that researchers may find the text useful as a guide to the ‘state-of-the-art’ and to identify research opportunities. The book is structured to provide justification and information for the use and development of AM by using standardised terminology to conform to standards (American Society for Testing and Materials (ASTM) F42) introduced since the first edition. The basic principles and historical developments for AM are introduced in summary in the first three chapters of the book and this serves as an excellent introduction for the uninitiated. Chapters 4–11 focus on the core technologies of AM individually and, in most cases, in comprehensive detail which gives those interested in the technical application and development of the technologies a solid footing. The remaining chapters provide guidelines and examples for various stages of the process including machine and/or materials selection, design considerations and software limitations, applications and post-processing considerations.
PRECISION PIVOT IRRIGATION CONTROLS TO OPTIMIZE WATER APPLICATION
A precision control system that enables a center pivot irrigation system (CP) to precisely supply water in optimal rates relative to the needs of individual areas within fields was developed through a collaboration between the Farmscan group (Perth, Western Australia) and the University of Georgia Precision Farming team at the National Environmentally Sound Production Agriculture Laboratory (NESPAL) in Tifton, GA. The control system, referred to as Variable-Rate Irrigation (VRI), varies application rate by cycling sprinklers on and off and by varying the CP travel speed. Desktop PC software is used to define application maps which are loaded into the VRI controller. The VRI system uses GPS to determine pivot position/angle of the CP mainline. Results from VRI system performance testing indicate good correlation between target and actual application rates and also shows that sprinkler cycling on/off does not alter the CP uniformity. By applying irrigation water in this precise manner, water application to the field is optimized. In many cases, substantial water savings can be realized.
Automatic Stress Classification With Pupil Diameter Analysis
This article proposes a method based on wavelet transform and neural networks for relating pupillary behavior to psychological stress. The proposed method was tested by recording pupil diameter and electrodermal activity during a simulated driving task. Self-report measures were also collected. Participants performed a baseline run with the driving task only, followed by three stress runs where they were required to perform the driving task along with sound alerts, the presence of two human evaluators, and both. Self-reports and pupil diameter successfully indexed stress manipulation, and significant correlations were found between these measures. However, electrodermal activity did not vary accordingly. After training, the four-way parallel neural network classifier could guess whether a given unknown pupil diameter signal came from one of the four experimental trials with 79.2% precision. The present study shows that the pupil diameter signal has good discriminating power for stress detection. 1. INTRODUCTION Stress detection and measurement are important issues in several human–computer interaction domains such as Affective Computing, Adaptive Automation, and Ambient Intelligence. In general, researchers and system designers seek to estimate the psychological state of operators in order to adapt or redesign the working environment accordingly (Sauter, 1991). The primary goal of such adaptation is to enhance overall system performance, trying to reduce workers' psychophysical detriment. One key aspect of stress measurement concerns the recording of physiological parameters, which are known to be modulated by the autonomic nervous system (ANS). However, despite
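A minimal sketch of the signal-processing side, assuming the PyWavelets and scikit-learn packages and using per-sub-band wavelet energies as a simplified stand-in for the paper's feature set:

```python
# Sketch: wavelet-based features from a pupil-diameter trace, fed to a small neural
# network classifier. The synthetic traces, feature choice and network size are
# illustrative simplifications, not the exact method of the paper.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_energy_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) / len(c) for c in coeffs])  # energy per sub-band

rng = np.random.default_rng(0)
# Synthetic dataset: 80 pupil-diameter traces, 4 experimental conditions (0..3).
X = np.vstack([wavelet_energy_features(rng.normal(3.5 + 0.1 * (i % 4), 0.2, 512))
               for i in range(80)])
y = np.arange(80) % 4

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```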
Drug-exposed infants and developmental outcome: effects of a home intervention and ongoing maternal drug use.
OBJECTIVE To evaluate the effects of a home intervention and ongoing maternal drug use on the developmental outcome of drug-exposed infants. DESIGN Longitudinal randomized cohort study of a home intervention with substance-abusing mothers and their infants. Mother-infant dyads were randomly assigned to a control or intervention group at 2 weeks' post partum. Control families received brief monthly tracking visits. Intervention families received weekly home visits from 0 to 6 months and biweekly visits from 6 to 18 months by trained lay visitors. PARTICIPANTS One hundred eight low-income, inner-city, drug-exposed children (control, 54; intervention, 54) who underwent developmental testing at 6, 12, and 18 months post partum and who remained with their biological mothers through 18 months. MAIN OUTCOME MEASURES Infant scores from the Bayley Scales of Infant Development (BSID) at 6, 12, and 18 months post partum. Maternal report of drug use during the pregnancy and ongoing drug use through 18 months post partum was assessed. RESULTS In the repeated-measures analyses, intervention infants had significantly higher BSID Mental Developmental Index (MDI) and Psychomotor Developmental Index scores than control infants. Ongoing maternal cocaine and/or heroin use was associated with lower MDI scores. Finally, MDI scores decreased significantly in both groups. CONCLUSIONS Ongoing maternal drug use is associated with worse developmental outcomes among a group of drug-exposed infants. A home intervention led to higher BSID scores among drug-exposed infants. However, BSID MDI scores decreased during the first 18 months post partum among inner-city, low-socioeconomic-status infants in the present study.
Facebook and texting made me do it: Media-induced task-switching while studying
Electronic communication is emotionally gratifying, but how do such technological distractions impact academic learning? The current study observed 263 middle school, high school and university students studying for 15 min in their homes. Observers noted technologies present and computer windows open in the learning environment prior to studying, plus a minute-by-minute assessment of on-task behavior, off-task technology use and open computer windows during studying. A questionnaire assessed study strategies, task-switching preference, technology attitudes, media usage, monthly texting and phone calling, social networking use and grade point average (GPA). Participants averaged less than six minutes on task prior to switching, most often due to technological distractions including social media, texting and preference for task-switching. Having a positive attitude toward technology did not affect being on-task during studying. However, those who preferred to task-switch had more distracting technologies available and were more likely to be off-task than others. Also, those who accessed Facebook had lower GPAs than those who avoided it. Finally, students with relatively high use of study strategies were more likely to stay on-task than other students. The educational implications include allowing students short "technology breaks" to reduce distractions and teaching students metacognitive strategies regarding when interruptions negatively impact learning.
Malar mounds and festoons: review of current management.
Blepharoplasty, the most common aesthetic eyelid procedure, sometimes involves a challenging patient subgroup: those who present with malar edema, malar bags, and festoons. In this review article, the authors describe the relevant anatomy in festoon development, discuss the pathophysiological basis of this condition spectrum, outline clinical examination basics, summarize various surgical approaches for treatment and propose an algorithm for their application, and describe the most common postsurgical complications.
Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning
To improve the quality of computation experience for mobile devices, mobile-edge computing (MEC) is a promising paradigm by providing computing capabilities in close proximity within a sliced radio access network (RAN), which supports both traditional communication and MEC services. Nevertheless, the design of computation offloading policies for a virtual MEC system remains challenging. Specifically, whether to execute a computation task at the mobile device or to offload it for MEC server execution should adapt to the time-varying network dynamics. This paper considers MEC for a representative mobile user in an ultra-dense sliced RAN, where multiple base stations (BSs) are available to be selected for computation offloading. The problem of solving an optimal computation offloading policy is modelled as a Markov decision process, where our objective is to maximize the long-term utility performance whereby an offloading decision is made based on the task queue state, the energy queue state as well as the channel qualities between MU and BSs. To break the curse of high dimensionality in state space, we first propose a double deep Q-network (DQN) based strategic computation offloading algorithm to learn the optimal policy without knowing a priori knowledge of network dynamics. Then motivated by the additive structure of the utility function, a Q-function decomposition technique is combined with the double DQN, which leads to a novel learning algorithm for the solving of stochastic computation offloading. Numerical experiments show that our proposed learning algorithms achieve a significant improvement in computation offloading performance compared with the baseline policies.
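The double-DQN update at the core of such a learner can be sketched as follows; the state layout (task queue, energy queue, per-BS channel gains) and the linear Q-functions are illustrative stand-ins for the deep networks described above.

```python
# Minimal numpy sketch of the double-DQN update behind the offloading learner: the
# online network selects the next action, the target network evaluates it. Linear
# Q-functions replace the deep networks purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_bs, gamma = 4, 0.95
n_actions = 1 + n_bs                  # execute locally, or offload to one of n_bs BSs
state_dim = 2 + n_bs                  # task queue, energy queue, per-BS channel gains

W_online = rng.normal(scale=0.1, size=(n_actions, state_dim))
W_target = W_online.copy()

def q_values(W, s):
    return W @ s

s = np.array([3.0, 0.6, 0.8, 0.2, 0.5, 0.9])       # current state (illustrative)
a = int(np.argmax(q_values(W_online, s)))           # greedy action (epsilon omitted)
r = 1.2                                              # observed utility (reward)
s_next = np.array([2.0, 0.5, 0.7, 0.3, 0.6, 0.4])

# Double DQN target: online net selects a', target net evaluates it.
a_next = int(np.argmax(q_values(W_online, s_next)))
td_target = r + gamma * q_values(W_target, s_next)[a_next]
td_error = td_target - q_values(W_online, s)[a]

# Semi-gradient update of the chosen action's weights.
W_online[a] += 0.01 * td_error * s
print(f"action={a}, TD target={td_target:.3f}, TD error={td_error:.3f}")
```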
A neural-network algorithm for a graph layout problem
We present a neural-network algorithm for minimizing edge crossings in drawings of nonplanar graphs. This is an important subproblem encountered in graph layout. The algorithm finds either the minimum number of crossings or an approximation thereof and also provides a linear embedding realizing the number of crossings found. The parallel time complexity of the algorithm is O(1) for a neural network with n(2) processing elements, where n is the number of vertices of the graph. We present results from testing a sequential simulator of the algorithm on a set of nonplanar graphs and compare its performance with the heuristic of Nicholson.
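For reference, the objective being minimized can be stated compactly: in a linear embedding (vertices ordered along a line, edges drawn as arcs on one side), two edges cross exactly when their endpoints interleave. A small sketch that counts crossings for a candidate ordering:

```python
# Count edge crossings of a linear embedding: edges (a, b) and (c, d), with positions
# taken from the vertex ordering, cross iff their endpoints interleave.
from itertools import combinations

def count_crossings(order, edges):
    pos = {v: i for i, v in enumerate(order)}
    norm = [tuple(sorted((pos[u], pos[v]))) for u, v in edges]
    crossings = 0
    for (a, b), (c, d) in combinations(norm, 2):
        if a < c < b < d or c < a < d < b:   # interleaving endpoints -> arcs cross
            crossings += 1
    return crossings

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # a 4-cycle
print(count_crossings([0, 2, 1, 3], edges))       # 1 crossing in this ordering
print(count_crossings([0, 1, 2, 3], edges))       # 0 -- a better linear embedding
```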
Normal serum bone markers in bisphosphonate-induced osteonecrosis of the jaws.
We obtained serum bone markers and other relevant endocrine assays on 5 patients with osteonecrosis of the jaw (ONJ). The assays were C-telopeptide, N-telopeptide, bone-specific alkaline phosphatase, osteocalcin, intact parathyroid hormone, T3, T4, TSH, and Vitamin D 25 hydroxy. Diagnostic criteria for ONJ were those formulated by the American Association of Oral and Maxillofacial Surgeons. Four of our patients were women. Two had metastatic breast cancer and had been treated with zoledronic acid; one had also received pamidronate. Two others had osteoporosis and had been treated with daily alendronate. One man had metastatic prostate cancer treated with zoledronic acid. All patients had been withdrawn from bisphosphonate for at least 6 months. None were taking or had taken corticosteroids. None of the lesions had shown any significant healing and all were still causing the patients considerable distress. Yet the bone markers were within the normal range as measured in our laboratory, except for intact parathyroid hormone, which was slightly elevated in one case of metastatic breast cancer (177 pg/mL). Because the jaws have a greater blood supply than other bones, and a high bone turnover rate, bisphosphonates are highly concentrated in the jaws. This anatomic concentration of bisphosphonates might cause bisphosphonate-osteonecrosis to be manifested exclusively in the jaws and is consistent with our finding of normal serum bone markers in ONJ patients.
React Native application development – A comparison between native Android and React Native
Creating a mobile application often requires the developers to create one for Android and one for iOS, the two leading operating systems for mobile devices. The two applications may have the same layout and logic, but several components of the user interface (UI) will differ and the applications themselves need to be developed in two different languages. This process is cumbersome since it is time-consuming to create two applications and it requires two different sets of knowledge. There have been attempts to create techniques, services or frameworks in order to solve this problem, but these hybrids have not been able to provide a native feel in the resulting applications. This thesis has evaluated the newly released framework React Native, which can create both iOS and Android applications by compiling code written in React. The resulting applications can share code and consist of the UI components which are unique to each platform. The thesis focused on Android and tried to replicate an existing Android application in order to measure user experience and performance. The result was surprisingly positive for React Native, as some users could not tell the two applications apart and nearly all users did not mind using a React Native application. The performance evaluation measured GPU frequency, CPU load, memory usage and power consumption. Nearly all measurements displayed a performance advantage for the native Android application, but the differences were not pronounced. The overall experience is that React Native is a very interesting framework that can simplify the development process for mobile applications to a high degree. As long as the application itself is not too complex, the development is uncomplicated and one is able to create, in a very short time, an application that can be compiled for both Android and iOS. First of all I would like to express my deepest gratitude to Valtech, who aided me throughout the whole thesis with books, tools and knowledge. They supplied me with two very competent consultants, Alexander Lindholm and Tomas Tunström, which made it possible for me to bounce ideas off them and, in the end, produce a great thesis. Furthermore, a big thanks to the other students at Talangprogrammet, who supported each other and me during this period of time and made it fun even when it was at its most tiresome. Furthermore I would like to thank my examiner Erik Berglund at Linköping University, who has guided me these last months and provided insightful comments regarding the paper. Ultimately I would like to thank my family, who have always been there to support me, and especially my little brother, who is my main motivation in life.
Energy Harvesting for Self-Powered Wearable Devices
Personalized sensor networks optionally should include wearable sensors or a body area network (BAN) wirelessly connected to a home computer or a remote computer through long-distance devices, such as a personal digital assistant or a mobile phone. While long-distance data transmission can typically be performed only by using the batteries as a power supply, the sensors with a short-distance wireless link can be powered autonomously. The idea of a self-powered device is not new and is actually known for centuries. The earliest example of self-powered wearable device is the self-winding watch invented in about 1770. However, typically not much energy is harvested in a small device, so that use of a battery, primary or rechargeable, is beneficial from practical point of view. There are worldwide efforts ongoing on development of microgenerators that should eliminate the necessity of wiring and batteries in autonomous and stand-alone devices or in devices that are difficult to access. Energy harvesters are being developed for the same purpose. An energy harvester (also called an energy scavenger) is a relatively small power generator that does not require fossil fuel. Instead, it uses energy available in the ambient, such as an electromagnetic energy, vibrations, a wind, a water flow, and a thermal energy. These sources are the same as those used in power plants or power generators such as the ones for powering houses in remote locations, light towers, spacecrafts, and on transport (except those based on fossil fuels). An energy harvester is typically several-to-one centimeter-size power microplant that converts into electricity any primary energy that is available in the ambient. The reason to call them “harvesters” or “scavengers” is the new application area: they are used for powering small devices, such as sensors or sensor nodes. This way of powering them eliminates the need for cost-ineffective work, such as wiring or either
Statics and Dynamics of Continuum Robots With General Tendon Routing and External Loading
Tendons are a widely used actuation strategy for continuum robots that enable forces and moments to be transmitted along the robot from base-mounted actuators. Most prior robots have used tendons routed in straight paths along the robot. However, routing tendons through general curved paths within the robot offers potential advantages in reshaping the workspace and enabling a single section of the robot to achieve a wider variety of desired shapes. In this paper, we provide a new model for the statics and dynamics of robots with general tendon routing paths that is derived by coupling the classical Cosserat-rod and Cosserat-string models. This model also accounts for general external loading conditions and includes traditional axially routed tendons as a special case. The advantage of the usage of this coupled model for straight-tendon robots is that it accounts for the distributed wrenches that tendons apply along the robot. We show that these are necessary to consider when the robot is subjected to out-of-plane external loads. Our experimental results demonstrate that the coupled model matches experimental tip positions with an error of 1.7% of the robot length, in a set of experiments that include both straight and nonstraight routing cases, with both point and distributed external loads.
Within-class covariance normalization for SVM-based speaker recognition
This paper extends the within-class covariance normalization (WCCN) technique described in [1, 2] for training generalized linear kernels. We describe a practical procedure for applying WCCN to an SVM-based speaker recognition system where the input feature vectors reside in a high-dimensional space. Our approach involves using principal component analysis (PCA) to split the original feature space into two subspaces: a low-dimensional “PCA space” and a high-dimensional “PCA-complement space.” After performing WCCN in the PCA space, we concatenate the resulting feature vectors with a weighted version of their PCA-complements. When applied to a state-of-the-art MLLR-SVM speaker recognition system, this approach achieves improvements of up to 22% in EER and 28% in minimum decision cost function (DCF) over our previous baseline. We also achieve substantial improvements over an MLLR-SVM system that performs WCCN in the PCA space but discards the PCA-complement.
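A minimal sketch of plain WCCN (omitting the PCA split used for high-dimensional inputs): estimate the average within-class covariance and map features through its inverse square root before using an inner-product kernel.

```python
# Within-class covariance normalization (WCCN) sketch: average the per-class covariance
# matrices, then transform features by the inverse square root of that matrix so a linear
# kernel de-emphasizes directions of high within-class (within-speaker) variability.
import numpy as np

def wccn_transform(features, labels, ridge=1e-6):
    dim = features.shape[1]
    W = np.zeros((dim, dim))
    classes = np.unique(labels)
    for c in classes:
        Xc = features[labels == c]
        Xc = Xc - Xc.mean(axis=0)
        W += Xc.T @ Xc / len(Xc)
    W = W / len(classes) + ridge * np.eye(dim)   # ridge keeps W invertible
    vals, vecs = np.linalg.eigh(W)
    B = vecs @ np.diag(vals ** -0.5) @ vecs.T    # inverse square root of W
    return features @ B.T, B

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 5, size=200)                 # e.g., 5 training speakers
X_wccn, B = wccn_transform(X, y)
print(X_wccn.shape)
```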
Constructionist Design Methodology for Interactive Intelligences
We present a methodology for designing and implementing interactive intelligences. The Constructionist Design Methodology (CDM) – so called because it advocates modular building blocks and incorporation of prior work – addresses factors that we see as key to future advances in A.I., including interdisciplinary collaboration support, coordination of teams and large-scale systems integration. We test the methodology by building an interactive multi-functional system with a real-time perception-action loop. The system, whose construction relied entirely on the methodology, consists of an embodied virtual agent that can perceive both real and virtual objects in an augmented-reality room and interact with a user through coordinated gestures and speech. Wireless tracking technologies give the agent awareness of the environment and the user’s speech and communicative acts. User and agent can communicate about things in the environment, their placement and function, as well as more abstract topics such as current news, through situated multimodal dialog. The results demonstrate CDM’s strength in simplifying the modeling of complex, multi-functional systems requiring architectural experimentation and exploration of unclear sub-system boundaries, undefined variables, and tangled data flow and control hierarchies. Introduction The creation of embodied humanoids and broad A.I. systems requires integration of a large number of functionalities that must be carefully coordinated to achieve coherent system behavior. We are working on formalizing a methodology that can help in this process. The architectural foundation we have chosen for the approach is based on the concept of a network of interacting modules, communicating via messages. To test the design methodology we chose a system with a human user that interacts in real-time with a simulated human, in an augmented-reality environment. In this paper we present the design methodology and describe the system that we built to test it. Newell [1992] urged for the search of unified theories of cognition, and recent work in A.I. has increasingly focused on integration of multiple systems (cf. [Simmons et 1 While Newell’s architecture Soar is based on a small set of general principles, intended to explain a wide range of cognitive phenomena, Newell makes it very clear in his book [Newell 1992] that he does not consider Soar to be the unified theory of cognition. We read his call for unification not in the narrow sense to mean the particular premises he chose for Soar, but rather in the more broad sense to refer to the general breadth of cognitive models. Constructionist Design Methodology for Interactive Intelligences K.R.Thórisson et al., 2004 Accepted to AAAI Magazine, Dec. 2003 2 al. 2003, McCarthy et al. 2002, Bischoff et al. 1999]). Unified theories necessarily mean integration of many functionalities, but our prior experience in building systems that integrate multiple features from artificial intelligence and computer graphics [Bryson & Thórisson 2000, Lucente 2000, Thórisson 1999] has made it very clear that such integration can be a challenge, even for a team of experienced developers. In addition to basic technical issues – connecting everything together can be prohibitive in terms of time – it can be difficult to get people with different backgrounds, such as computer graphics, hardware, and artificial intelligence, to communicate effectively. 
Coordinating such an effort can thus be a management task of a tall order; keeping all parties synchronized takes skill and time. On top of this comes the challenge of deciding the scope of the system: What seems simple to a computer graphics expert may in fact be a long-standing dream of the A.I. person, and vice versa. Several factors motivate our work. First, a much-needed move towards building on prior work in A.I., to promote incremental accumulation of knowledge in creating intelligent systems, is long overdue. The relatively small group who is working on broad models of mind, bridging across disciplines, needs better ways to share results and work together, and to work with others outside their field. To this end our principles foster re-usable software components, through a common middleware specification, and mechanisms for defining interfaces between components. Second, by focusing on the re-use of existing work we are able to support the construction of more powerful systems than otherwise possible, speeding up the path towards useful, deployable systems. Third, we believe that to study mental mechanisms they need to be embedded in a larger cognitive model with significant breadth, to contextualize their operation and enable their testing under boundary conditions. This calls for an increased focus on supporting large-scale integration and experimentation. Fourth, by bridging across multiple functionalities in a single, unified system, researchers’ familiarity and breadth of experience with the various models of thought to date – as well as new ones – increases. This is important – as are in fact all of the above points – when the goal is to develop unified theories of cognition. Inspired to a degree by the classic LEGO bricks, our methodology – which we call a Constructionist Approach to A.I. – puts modularity at its center: Functionalities of the system are broken into individual software modules, which are typically larger than software classes (i.e. objects and methods) in object-oriented programming, but smaller than the typical enterprise application. The role of each module is determined in part by specifying the message types and information content that needs to flow between the various functional parts of the system. Using this functional outline we then define and develop, or select, components for perception, knowledge representation, planning, animation, and other desired functionalities. Behind this work lies the conjecture that the mind can be modeled through the adequate combination of interacting, functional machines (modules). Of course, this is still debated in the research community and not all researchers are convinced of its merits. However, this claim is in its essence simply a combination of two less radical ones. First, that a divide-and-conquer methodology will be fruitful in studying the mind as a system. Since practically all scientific results since the Greek philosophers are based on this, it is hard to argue against it. In contrast to the search for unified 2 http://www.MINDMAKERS.ORG Constructionist Design Methodology for Interactive Intelligences K.R.Thórisson et al., 2004 Accepted to AAAI Magazine, Dec. 2003 3 theories in physics, we see the search for unified theories of cognition in the same way as articulated in Minsky’s [1986] theory, that the mind is a multitude of interacting components, and his (perhaps whimsical but fundamental) claim that the brain is a hack. 
In other words, we expect a working model of the mind to incorporate, and coherently address, what at first seems a tangle of control hierarchies and data paths. Which relates to another important theoretical stance: The need to model more than a single or a handful of the mind’s mechanisms in isolation in order to understand the working mind. In a system of many modules with rich interaction, only a model incorporating a rich spectrum of (animal or human) mental functioning will give us a correct picture of the broad principles underlying intelligence. Figure 1: Our embodied agent Mirage is situated in the lab. Here we see how he appears to the user through the head-mounted glasses. (Image has been enhanced for clarity.) There is essentially nothing in the Constructionist approach to A.I. that lends it more naturally to behavior-based A.I. [c.f. Brooks 1991] or “classical” A.I. – its principles sit beside both. In fact, since CDM is intended to address the integration problem of very broad cognitive systems, it must be able to encompass all variants and approaches to date. We think it unlikely that any of the principles we present will be found objectionable, or even completely novel for that matter, by a seasoned software engineer. But these principles are custom-tailored to guide the construction of large cognitive systems, and we hope it will be used, extended and improved by many others over time. To test the power of a new methodology, a novel problem is preferred over one that has a known solution. The system we chose to develop presented us with a unique scope and unsolved integration issues: An augmented reality setting inhabited by an embodied virtual character; the character would be visible via a see-through stereoscopic display that the user wears, and would help them navigate the real-world environment. The character, called Mirage, should appear as a transparent, ghost-like stereoscopic 3-D graphic superimposed on the user’s real world view (Figure 1). This system served as a test-bed for our methodology; it is presented in sufficient detail here to demonstrate the application of the methodology and to show its modular philosophy, which it mirrors closely.
An Empirical Comparison of Parsing Methods for Stanford Dependencies
Stanford typed dependencies are a widely desired representation of natural language sentences, but parsing is one of the major computational bottlenecks in text analysis systems. In light of the evolving definition of the Stanford dependencies and developments in statistical dependency parsing algorithms, this paper revisits the question of Cer et al. (2010): what is the tradeoff between accuracy and speed in obtaining Stanford dependencies in particular? We also explore the effects of input representations on this tradeoff: part-of-speech tags, the novel use of an alternative dependency representation as input, and distributional representations of words. We find that direct dependency parsing is a more viable solution than it was found to be in the past. An accompanying software release can be found at: http://www.ark.cs.cmu.edu/TBSD
Chapter 1 Fish Cytokines and Immune Response
The immune system can be defined as a complex system that protects the organism against organisms or substances that might cause infection or disease. One of the most fascinating characteristics of the immune system is its capability to recognize and respond to pathogens with significant specificity. Innate and adaptive immune responses are able to recognize foreign structures and trigger different molecular and cellular mechanisms for antigen elimination. The immune response is critical to all individuals; therefore numerous changes have taken place during evolution to generate variability and specialization, although the immune system has conserved some important features over millions of years of evolution that are common for all species. The emergence of new taxonomic categories coincided with the diversification of the immune response. Most notably, the emergence of vertebrates coincided with the development of a novel type of immune response. Apparently, vertebrates inherited innate immunity from their invertebrate ancestors [1].
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
The problem of detecting whether a test sample is drawn from the in-distribution (i.e., the training distribution of a classifier) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications. However, state-of-the-art deep neural networks are known to be highly overconfident in their predictions, i.e., they do not distinguish between in- and out-of-distribution samples. Recently, to handle this issue, several threshold-based detectors have been proposed for pre-trained neural classifiers. However, the performance of prior works depends strongly on how the classifiers are trained, since they focus only on improving inference procedures. In this paper, we develop a novel training method for classifiers so that such inference algorithms can work better. In particular, we suggest two additional terms added to the original loss (e.g., cross entropy). The first forces the classifier to be less confident on out-of-distribution samples, and the second (implicitly) generates the most effective training samples for the first. In essence, our method jointly trains both classification and generative neural networks for out-of-distribution detection. We demonstrate its effectiveness using deep convolutional neural networks on various popular image datasets.
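A hedged sketch of the two-term idea in PyTorch: standard cross entropy on in-distribution samples plus a term pushing out-of-distribution predictions toward the uniform distribution; the generator that produces effective out-of-distribution samples is represented by a placeholder batch.

```python
# Sketch of a confidence-calibration loss: cross entropy on in-distribution samples plus
# a KL term that pulls the predictive distribution on out-of-distribution samples toward
# uniform. The "OOD" logits here are random placeholders for generated samples.
import torch
import torch.nn.functional as F

def calibration_loss(logits_in, targets_in, logits_out, beta=1.0):
    ce = F.cross_entropy(logits_in, targets_in)
    num_classes = logits_out.size(1)
    log_p_out = F.log_softmax(logits_out, dim=1)
    uniform = torch.full_like(log_p_out, 1.0 / num_classes)
    # KL(U || p_theta(y|x_out)), averaged over the OOD batch.
    kl_to_uniform = F.kl_div(log_p_out, uniform, reduction="batchmean")
    return ce + beta * kl_to_uniform

# Toy usage with random logits standing in for classifier outputs.
logits_in = torch.randn(8, 10)
targets_in = torch.randint(0, 10, (8,))
logits_out = torch.randn(8, 10)      # would come from generated / out-of-distribution inputs
print(calibration_loss(logits_in, targets_in, logits_out).item())
```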
Information needs in bug reports: improving cooperation between developers and users
For many software projects, bug tracking systems play a central role in supporting collaboration between the developers and the users of the software. To better understand this collaboration and how tool support can be improved, we have quantitatively and qualitatively analysed the questions asked in a sample of 600 bug reports from the MOZILLA and ECLIPSE projects. We categorised the questions and analysed response rates and times by category and project. Our results show that the role of users goes beyond simply reporting bugs: their active and ongoing participation is important for making progress on the bugs they report. Based on the results, we suggest four ways in which bug tracking systems can be improved.
Vulnerable Elderly Survey 13 as a screening method for frailty in Polish elderly surgical patient--prospective study.
UNLABELLED The Vulnerable Elders Survey (VES-13) is a simple, function-based frailty screening tool that can also be administered by non-clinical personnel within 5 minutes and has been validated in outpatient, inpatient and acute medical care settings. The aim of the study was to validate the accuracy of the VES-13 screening method for predicting the frailty syndrome, based on a comprehensive geriatric assessment (CGA), in Polish surgical patients. MATERIAL AND METHODS We prospectively included 106 consecutive patients aged ≥65 who qualified for abdominal surgery (for both oncological and benign reasons) at a tertiary referral hospital. We evaluated the diagnostic performance of the VES-13 score against the results of the CGA, accepted as the gold standard for identifying at-risk frail elderly patients. RESULTS The prevalence of frailty as diagnosed by CGA was 59.4%. There was a significantly higher number of frail patients in the oncological group (78% vs. 31%; p<0.01). According to the frailty screening method, the frailty prevalence was 45.3%. The VES-13 score had 60% sensitivity and 78% specificity in detecting the frailty syndrome. The positive and negative predictive values were 81% and 57%, respectively. The overall predictive capacity was intermediate (AUC=0.69). CONCLUSIONS At present, the VES-13 screening tool for older patients cannot replace the comprehensive geriatric assessment, owing to insufficient discriminative power to select patients for further assessment. It might nevertheless be helpful in a busy clinical practice and in facilities that do not have personnel trained in geriatric assessment.
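The screening-accuracy arithmetic can be reproduced with a small helper; the 2x2 counts below are illustrative values chosen only to show the computation, not counts reconstructed from the study.

```python
# Sensitivity, specificity and predictive values from a 2x2 table of screening result
# (VES-13 positive/negative) vs. the CGA reference standard. Counts are illustrative.
def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "prevalence": (tp + fn) / (tp + fp + fn + tn),
    }

example = screening_metrics(tp=38, fp=11, fn=25, tn=32)   # hypothetical counts
for name, value in example.items():
    print(f"{name}: {value:.2f}")
```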
Fuzzy line graphs
Mordeson, J.N., Fuzzy line graphs, Pattern Recognition Letters 14 (1993) 381–384. The notion of a fuzzy line graph of a fuzzy graph is introduced. We give a necessary and sufficient condition for a fuzzy graph to be isomorphic to its corresponding fuzzy line graph. We examine when an isomorphism between two fuzzy graphs follows from an isomorphism of their corresponding fuzzy line graphs. We give a necessary and sufficient condition for a fuzzy graph to be the fuzzy line graph of some fuzzy graph.
The Valle del Bove caldera: its significance in the evolution of Mount Etna (Sicily)
The Valle del Bove, situated on the east side of Mount Etna, is considered the result of a major collapse. The structural survey of the continuous rock outcrops in the walls of the depression makes it possible to distinguish several units belonging to different volcanoes that were destroyed by the collapse. The succession of the different volcanic centres shows a migration of the eruptive activity from east to west. The collapse took place only after the main activity had shifted to the location of the presently active central crater. This E-W direction of migration corresponds to one of the main fault directions of Etna; the collapse and the formation of the caldera are considered either the consequence of a violent pumice explosion or the result of the westward migration of the magma along this fault.
Anti-counterfeiting with hardware intrinsic security
Counterfeiting of goods and electronic devices is a growing problem that has a huge economic impact on the electronics industry. Sometimes the consequences are even more dramatic, when critical systems start failing due to the use of counterfeit lower quality components. Hardware Intrinsic security (i.e. security systems built on the unique electronic fingerprint of a device) offers the potential to reduce the counterfeiting problem drastically. In this paper we will show how Hardware Intrinsic Security (HIS) can be used to prevent various forms of counterfeiting and over-production. HIS technology can also be used to bind software or user data to specific hardware devices, which provides additional security to both soft- and hardware vendors as well as consumers using HIS-enabled products. Besides showing the benefits of HIS, we will also provide an extensive overview of the results (both scientific and industrial) that Intrinsic-ID has achieved studying and implementing HIS.
Evolutionary training of hardware realizable multilayer perceptrons
The use of multilayer perceptrons (MLP) with threshold functions (binary step function activations) greatly reduces the complexity of the hardware implementation of neural networks, provides tolerance to noise and improves the interpretation of the internal representations. In certain case, such as in learning stationary tasks, it may be sufficient to find appropriate weights for an MLP with threshold activation functions by software simulation and, then, transfer the weight values to the hardware implementation. Efficient training of these networks is a subject of considerable ongoing research. Methods available in the literature mainly focus on two-state (threshold) nodes and try to train the networks by approximating the gradient of the error function and modifying appropriately the gradient descent, or by progressively altering the shape of the activation functions. In this paper, we propose an evolution-motivated approach, which is eminently suitable for networks with threshold functions and compare its performance with four other methods. The proposed evolutionary strategy does not need gradient related information, it is applicable to a situation where threshold activations are used from the beginning of the training, as in “on-chip” training, and is able to train networks with integer weights.
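In the same spirit, the sketch below trains a threshold-activation network with integer weights using a simple (1+1) evolution strategy and no gradient information; the toy XOR task and mutation scheme are illustrative and differ from the strategy actually proposed.

```python
# Gradient-free, evolution-style training of an MLP with binary step activations and
# integer weights: mutate the weights and keep a mutation only if it does not increase
# the training error (a basic (1+1) evolution strategy).
import numpy as np

step = lambda x: (x >= 0).astype(int)            # binary step (threshold) activation

def forward(Wh, Wo, X):
    hidden = step(X @ Wh.T)
    hidden = np.hstack([hidden, np.ones((len(X), 1))])   # bias input for output unit
    return step(hidden @ Wo.T).ravel()

def errors(Wh, Wo, X, y):
    return np.sum(forward(Wh, Wo, X) != y)

rng = np.random.default_rng(0)
# Toy task: XOR, with a constant bias column appended to the inputs.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([0, 1, 1, 0])

Wh = rng.integers(-3, 4, size=(2, 3))            # hidden layer, integer weights
Wo = rng.integers(-3, 4, size=(1, 3))            # output layer (2 hidden units + bias)
best = errors(Wh, Wo, X, y)
for _ in range(5000):
    cand_Wh = Wh + rng.integers(-1, 2, size=Wh.shape)    # integer mutation in {-1, 0, 1}
    cand_Wo = Wo + rng.integers(-1, 2, size=Wo.shape)
    e = errors(cand_Wh, cand_Wo, X, y)
    if e <= best:                                 # accept equal-or-better candidates
        Wh, Wo, best = cand_Wh, cand_Wo, e
    if best == 0:
        break
print("misclassified training patterns:", best)
```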
Managing engineers for success
To be successful as a manager of engineers, three parties should benefit: the employer, the engineer, and the manager. Without all three, long-term success may not be assured. While the advice given in this paper could apply to managing a wide range of personnel, the focus is on engineering personnel. The authors have spent most of their careers in electromagnetic compatibility positions, including EMC test engineer, EMC design engineer, lead engineer and laboratory manager roles, with over 25 combined years spent managing technicians and engineers. Much of what is covered in this paper comprises lessons learned from working for various managers and from our own successes and failures as managers.
Temperature rise after peginterferon alfa-2a injection in patients with chronic hepatitis C is associated with virological response and is modulated by IL28B genotype.
BACKGROUND & AIMS Interferon treatment for chronic hepatitis C is associated with non-specific symptoms including fever. We aimed to determine the association of temperature changes with interferon antiviral activity. METHODS 60 treatment-naïve patients with chronic hepatitis C (67% genotype 1/4/6, 33% genotype 2/3) were admitted to start peginterferon alfa-2a and ribavirin in a clinical trial. Temperature was measured at baseline and 3 times daily for the first 24h and the maximal increase from baseline during that time (ΔTmax) was determined. Serum HCV-RNA, interferon-gamma-inducible protein-10 (IP-10) and expression of interferon-stimulated genes (ISGs - CD274, ISG15, RSAD2, IRF7, CXCL10) in peripheral blood mononuclear cells (PBMCs) were measured at very early time points, and response kinetics calculated. The IL28B single nucleotide polymorphism, rs12979860, was genotyped. RESULTS Temperatures rose by 1.2±0.8°C, peaking after 12.5h. ΔTmax was strongly associated with 1st phase virological decline (r=0.59, p<0.0001) and was independent of gender, cirrhosis, viral genotype or baseline HCV-RNA. The association with 1st phase decline was seen in patients with the rs12979860 CC genotype (r=0.65, p<0.0001) but not in CT/TT (r=0.13, p=0.53), and patients with the CC genotype had a higher ΔTmax (1.4±0.8°C vs. 0.8±0.6°C, p=0.001). ΔTmax was associated with 6- and 24-h induction of serum IP-10 and of PBMC ISG expression, but only in patients with rs12979860 CC [corrected]. ΔTmax weakly predicted early virological response (AUC=0.68, CI 0.49-0.88). CONCLUSIONS Temperature rise following peginterferon injection is closely associated with virological response and is modulated by IL28B polymorphism, reflecting host interferon-responsiveness.
Estimating the population density of Mongolian gazelles Procapra gutturosa by driving long-distance transects
Estimated density varied from 10.7 gazelles km−2 in spring to 11.5 gazelles km−2 in autumn, with total population estimates of 803,820 (483,790–1,330,100 95% confidence interval) and 870,625 (499,432–1,491,278 95% confidence interval), respectively. Confidence limits were wide, and to obtain a coefficient of variation of 20%, transect lengths would need to be extended three- to four-fold. Until more efficient means for conducting population surveys can be implemented, driving long-distance transects, combined with distance analysis, seems to provide the best quantitative estimate of Mongolian gazelle populations.
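The effort arithmetic behind the three- to four-fold figure follows from the rule of thumb that the coefficient of variation of a line-transect estimate shrinks roughly with the square root of total transect length; the current CV used below is an assumed illustrative value.

```python
# Survey-effort arithmetic: if CV scales roughly as 1/sqrt(total transect length),
# reaching a target CV requires multiplying effort by (current_cv / target_cv)**2.
def required_effort_multiplier(current_cv, target_cv):
    return (current_cv / target_cv) ** 2

current_cv = 0.38      # assumed CV of the density estimate (illustrative, not from the paper)
target_cv = 0.20
print(f"extend transects ~{required_effort_multiplier(current_cv, target_cv):.1f}-fold")
# -> about 3.6-fold, consistent with the reported three- to four-fold increase
```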
Diet quality and diet patterns in relation to circulating cardiometabolic biomarkers.
BACKGROUND & AIMS We examined the effects of diet quality and dietary patterns in relation to biomarkers of risk including leptin, soluble intracellular adhesion molecule 1 (sICAM-1), C-reactive protein (CRP), and irisin. METHODS We analyzed data from 196 adults cross-sectionally. Dietary patterns were identified by factor analysis and diet quality scores were generated using a validated food-frequency questionnaire. RESULTS Both the alternate healthy eating index-2010 (AHEI-2010) and the Dietary Approaches to Stop Hypertension (DASH) scores were negatively related to CRP, even after controlling for body mass index and total energy intake. Similarly, the prudent diet pattern was negatively related to leptin, sICAM-1, and CRP, whereas the Western diet pattern showed positive associations with these markers; however, after adjusting for all confounders, the associations only remained significant for leptin and sICAM-1. Irisin was positively associated with DASH and the prudent diet after controlling for all confounders (standardized β = 0.23, P = 0.030; standardized β = 0.25, P = 0.021, respectively). Irisin showed positive associations with increasing fruit consumption, whereas the levels of irisin decreased as meat consumption increased. CONCLUSIONS Irisin was directly associated with healthy diet types and patterns. Further studies regarding these mechanisms are warranted. This trial is registered at http://www.clinicaltrials.gov. Identifier: NCT01853332.
Compression of human body sequences using graph Wavelet Filter Banks
The next step in immersive communication beyond video from a single camera is object-based free viewpoint video, which is the capture and compression of a dynamic object such that it can be reconstructed and viewed from an arbitrary viewpoint. The moving human body is a particularly useful subclass of dynamic object for object-based free viewpoint video relevant to both telepresence and entertainment. In this paper, we compress moving human body sequences by applying recently developed Graph Wavelet Filter Banks to time-varying geometry and color signals living on a mesh representation of the human body. This model-based approach significantly outperforms state-of-the-art coding of the human body represented as ordinary depth plus color video sequences.
Effective and Scalable Authorship Attribution Using Function Words
Techniques for identifying the author of an unattributed document can be applied to problems in information analysis and in academic scholarship. A range of methods have been proposed in the research literature, using a variety of features and machine learning approaches, but the methods have been tested on very different data and the results cannot be compared. It is not even clear whether the differences in performance are due to feature selection or other variables. In this paper we examine the use of a large publicly available collection of newswire articles as a benchmark for comparing authorship attribution methods. To demonstrate the value of having a benchmark, we experimentally compare several recent feature-based techniques for authorship attribution, and test how well these methods perform as the volume of data is increased. We show that the benchmark is able to clearly distinguish between different approaches, and that the scalability of the best methods based on using function words features is acceptable, with only moderate decline as the difficulty of the problem is increased.
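A minimal function-word baseline of the kind compared in such studies might look as follows, assuming scikit-learn; the word list and two-author corpus are illustrative only.

```python
# Function-word attribution baseline: represent each document by relative frequencies
# of a fixed list of function words and train a linear SVM. Data and word list are toys.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
from sklearn.svm import LinearSVC

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "was", "his", "with",
                  "as", "but", "for", "at", "by", "on", "not", "be", "this", "which"]

docs = [
    "the ship was at sea and the men were with him",
    "it was not the time for that which he had planned",
    "to be or not to be is the question that he asked",
    "this is the letter which was sent to him by the king",
]
authors = ["A", "A", "B", "B"]

model = make_pipeline(
    CountVectorizer(vocabulary=FUNCTION_WORDS),  # counts restricted to function words
    Normalizer(norm="l1"),                       # convert counts to relative frequencies
    LinearSVC(),
)
model.fit(docs, authors)
print(model.predict(["it was the king who sent the ship to sea"]))
```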
Evolutionary Explanations for Cooperation
Natural selection favours genes that increase an organism's ability to survive and reproduce. This would appear to lead to a world dominated by selfish behaviour. However, cooperation can be found at all levels of biological organisation: genes cooperate in genomes, organelles cooperate to form eukaryotic cells, cells cooperate to make multicellular organisms, bacterial parasites cooperate to overcome host defences, animals breed cooperatively, and humans and insects cooperate to build societies. Over the last 40 years, biologists have developed a theoretical framework that can explain cooperation at all these levels. Here, we summarise this theory, illustrate how it may be applied to real organisms and discuss future directions.
Brain imaging of language plasticity in adopted adults: can a second language replace the first?
Do the neural circuits that subserve language acquisition lose plasticity as they become tuned to the maternal language? We tested adult subjects born in Korea and adopted by French families in childhood; they have become fluent in their second language and report no conscious recollection of their native language. In behavioral tests assessing their memory for Korean, we found that they do not perform better than a control group of native French subjects who have never been exposed to Korean. We also used event-related functional magnetic resonance imaging to monitor cortical activations while the Korean adoptees and native French listened to sentences spoken in Korean, French and other, unknown, foreign languages. The adopted subjects did not show any specific activations to Korean stimuli relative to unknown languages. The areas activated more by French stimuli than by foreign stimuli were similar in the Korean adoptees and in the French native subjects, but with relatively larger extents of activation in the latter group. We discuss these data in light of the critical period hypothesis for language acquisition.
Evaluation of the effect of temperature, concentration and volume of serum complement on alternative complement pathway activity in koi carp (Cyprinus carpio koi)
Genie: A new, fast, and outlier-resistant hierarchical clustering algorithm
The time needed to apply a hierarchical clustering algorithm is most often dominated by the number of computations of a pairwise dissimilarity measure. For larger data sets, this constraint puts all the classical linkage criteria except single linkage at a disadvantage. However, the single linkage clustering algorithm is known to be very sensitive to outliers and to produce highly skewed dendrograms, and therefore it usually does not reflect the true underlying data structure unless the clusters are well separated. To overcome its limitations, we propose a new hierarchical clustering linkage criterion called Genie. Namely, our algorithm links two clusters in such a way that a chosen economic inequity measure (e.g., the Gini or Bonferroni index) of the cluster sizes does not increase drastically above a given threshold. The presented benchmarks indicate a high practical usefulness of the introduced method: it most often outperforms the Ward or average linkage in terms of clustering quality while retaining the speed of single linkage. The Genie algorithm is easily parallelizable and may thus be run on multiple threads to further speed up its execution. Its memory overhead is small: there is no need to precompute the complete distance matrix in order to obtain the desired clustering. It can be applied on arbitrary spaces equipped with a dissimilarity measure, e.g., on real vectors, DNA or protein sequences, images, rankings, informetric data, etc. A reference implementation of the algorithm has been included in the open source genie package for R. Please cite this paper as: Gagolewski M., Bartoszuk M., Cena A., Genie: A new, fast, and outlier-resistant hierarchical clustering algorithm, Information Sciences 363, 2016, pp. 8–23, doi:10.1016/j.ins.2016.05.003.
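A rough Python sketch of the merge rule described above. The gini_index helper and the threshold rule (fall back to merges involving the smallest cluster once the index exceeds the threshold) follow the textual description only; the reference implementation in the genie R package is considerably more refined and efficient.

```python
import numpy as np

def gini_index(sizes):
    """Gini index of a vector of cluster sizes (0 = perfectly balanced)."""
    s = np.sort(np.asarray(sizes, dtype=float))
    n = len(s)
    cum = np.cumsum(s)
    # Standard formula based on the Lorenz curve of the size distribution.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def choose_merge(candidates, sizes, threshold=0.3):
    """Pick the next pair of clusters to merge, Genie-style.

    `candidates` is a list of (dissimilarity, i, j) tuples, e.g. the cheapest
    single-linkage merges. If the current Gini index of cluster sizes exceeds
    `threshold`, restrict the choice to merges involving the smallest cluster,
    which pulls the index back down.
    """
    if gini_index(sizes) <= threshold:
        return min(candidates)            # plain nearest-cluster merge
    smallest = min(range(len(sizes)), key=lambda k: sizes[k])
    forced = [c for c in candidates if smallest in (c[1], c[2])]
    return min(forced) if forced else min(candidates)

# Toy example: sizes are unbalanced, so a merge with cluster 2 is preferred.
print(choose_merge([(0.1, 0, 1), (0.4, 1, 2)], sizes=[10, 9, 1]))
```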
Microbial communication leading to the activation of silent fungal secondary metabolite gene clusters
Microorganisms form diverse multispecies communities in various ecosystems. The high abundance of fungal and bacterial species in these consortia results in specific communication between the microorganisms. A key role in this communication is played by secondary metabolites (SMs), also called natural products. Recently, it was shown that interspecies "talk" between microorganisms represents a physiological trigger that activates silent gene clusters, leading to the formation of novel SMs by the involved species. This review focuses on mixed microbial cultivation, mainly between bacteria and fungi, with a special emphasis on the induced formation of fungal SMs in co-cultures. In addition, the role of chromatin remodeling in this induction is examined, and methodical perspectives for the analysis of natural products are presented. As an example of an intermicrobial interaction elucidated at the molecular level, we discuss the specific interaction of the filamentous fungi Aspergillus nidulans and Aspergillus fumigatus with the soil bacterium Streptomyces rapamycinicus, which provides an excellent model system for elucidating the molecular concepts behind the regulatory mechanisms and will pave the way to a novel avenue of drug discovery via the targeted activation of silent SM gene clusters through co-cultivation of microorganisms.
Project communication management patterns
In today's dynamically developing organizations, which often carry out their business tasks using a project-based approach, effective project management is of paramount importance. Numerous reports and scientific papers present lists of critical success factors in project management, and communication management is usually at the very top of the list. Yet even though communication practices are found to be associated with most of the success dimensions, they are not given enough attention, and the communication processes and practices formalized in a company's project management methodology are neither followed nor prioritized by project managers. This paper aims at supporting project managers and teams in the more effective implementation of best practices in communication management by proposing a set of communication management patterns, which promote a context-problem-solution approach to communication management in projects.
AN MR BRAIN IMAGES CLASSIFIER VIA PRINCIPAL COMPONENT ANALYSIS AND KERNEL SUPPORT VECTOR MACHINE
Automated and accurate classification of MR brain images is extremely important for medical analysis and interpretation, and numerous methods have been proposed over the last decade. In this paper, we present a novel method to classify a given MR brain image as normal or abnormal. The proposed method first employs the wavelet transform to extract features from images, followed by principal component analysis (PCA) to reduce the dimensionality of the features. The reduced features are submitted to a kernel support vector machine (KSVM), and K-fold stratified cross-validation is used to enhance the generalization of the KSVM. We chose seven common brain diseases (glioma, meningioma, Alzheimer's disease, Alzheimer's disease plus visual agnosia, Pick's disease, sarcoma, and Huntington's disease) as abnormal brains, and collected 160 MR brain images (20 normal and 140 abnormal) from the Harvard Medical School website. We evaluated the proposed method with four different kernels and found that the GRB (Gaussian radial basis) kernel achieves the highest classification accuracy, 99.38%; the LIN, HPOL, and IPOL kernels achieve 95%, 96.88%, and 98.12%, respectively. We also compared our method with those from the literature of the last decade, and the results show that DWT+PCA+KSVM with the GRB kernel still achieves the most accurate classification. The average processing time for a 256 × 256 image on an IBM P4 laptop with a 3 GHz processor and 2 GB RAM is 0.0448 s. The experimental data show that our method is effective and fast. It can be applied to MR brain image classification and can assist doctors in diagnosing whether a patient is normal or abnormal to a certain degree.
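The pipeline lends itself to a compact sketch with PyWavelets and scikit-learn: DWT approximation coefficients as features, PCA for dimensionality reduction, and an RBF-kernel SVM evaluated with stratified K-fold cross-validation. The random images and labels below are stand-ins for the Harvard MR data set, and the parameter choices (wavelet, level, number of components) are illustrative assumptions rather than the settings from the paper.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dwt_features(image, wavelet="haar", level=3):
    """Use the low-frequency approximation subband of a 2-D DWT as features."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    return coeffs[0].ravel()

# Random stand-ins for the MR slices and normal/abnormal labels.
rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))
y = rng.integers(0, 2, 40)

X = np.stack([dwt_features(img) for img in images])
pipeline = make_pipeline(StandardScaler(), PCA(n_components=20),
                         SVC(kernel="rbf"))   # Gaussian radial basis kernel
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print("mean accuracy:", cross_val_score(pipeline, X, y, cv=cv).mean())
```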
Coordinated charging of multiple plug-in hybrid electric vehicles in residential distribution grids
Alternatives to vehicles based on internal combustion engines (ICE), such as the hybrid electric vehicle (HEV), the plug-in hybrid electric vehicle (PHEV) and the fuel-cell electric vehicle (FCEV), are becoming increasingly popular. HEVs are currently commercially available, and PHEVs will be the next phase in the evolution of hybrid and electric vehicles. The batteries of PHEVs are designed to be charged at home, from a standard outlet in the garage, or on a corporate car park. The electrical consumption for charging PHEVs may account for up to 5% of the total electrical consumption in Belgium by 2030. These extra loads have an impact on the distribution grid, which is analyzed here in terms of power losses and voltage deviations. First, uncoordinated charging is described, in which the vehicles are charged immediately when they are plugged in or after a fixed start delay; this uncoordinated power consumption on a local scale can lead to grid problems. Coordinated charging is therefore proposed to minimize the power losses and to maximize the main grid load factor. The optimal charging profile of the PHEVs is computed by minimizing the power losses. Because exact forecasting of household loads is not possible, stochastic programming is introduced.
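As a toy illustration of coordinated charging, the sketch below minimizes the sum of squared total feeder load (a common proxy, since resistive losses grow roughly with the square of the current) subject to each vehicle receiving its required energy, using cvxpy. The base-load profile, energy requirements and charger limit are invented numbers; the paper works with a full distribution-grid model and stochastic household-load forecasts rather than this simplified proxy.

```python
import cvxpy as cp
import numpy as np

T, N = 24, 3                                # hours, vehicles
base_load = 3.0 + 1.5 * np.sin(np.linspace(0, 2 * np.pi, T))  # kW, household
energy_needed = np.array([10.0, 6.0, 8.0])  # kWh per vehicle
p_max = 3.3                                 # kW charger limit

p = cp.Variable((T, N), nonneg=True)        # charging power per hour/vehicle
total = cp.sum(p, axis=1) + base_load
# Losses grow roughly with the square of the feeder current, so minimizing
# the sum of squared total load flattens the profile and reduces losses.
objective = cp.Minimize(cp.sum_squares(total))
constraints = [p <= p_max,
               cp.sum(p, axis=0) == energy_needed]  # 1-hour steps: kWh == kW*h
prob = cp.Problem(objective, constraints)
prob.solve()
print(np.round(p.value, 2))                 # optimal charging schedule
```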
Investigation of the Effects of Demographic Factors and Brand Perception on the Purchase Intention of Luxury Automobiles in Iranian Consumers
The purpose of this paper is to investigate Iranian consumers' perception of the German-made Mercedes-Benz and Japanese-made Lexus luxury automobile brands, covering the effect of five perceived values (conspicuous, unique, social, hedonic and quality value) on purchase intention. It also examines the effects of demographic factors, such as luxury brand preference, on the purchase intention for luxury automobiles among Iranian consumers, compares the two brands, and draws management implications from the data analysis. Based on a thorough review of the literature, 16 hypotheses are derived and tested using Structural Equation Modeling (SEM) and Confirmatory Factor Analysis (CFA); ANOVA was also used to analyze the relationship between demographic factors and Iranian consumers' luxury brand perception. The final sample consists of 390 participants. The main findings show that the hedonic, uniqueness and quality values scored significantly higher than the conspicuous and social values and play a greater role in forming luxury brand perception among Iranian consumers. This study is useful for marketers seeking to understand their target market and how their customers evaluate products and make buying decisions. The five perceived values of luxury automobiles can be used as guidelines for salespeople to sell successfully to customers.
How far did we get in face spoofing detection?
The growing use of access control systems based on face recognition sheds light on the need for ever more accurate systems to detect face spoofing attacks. In this paper, an extensive analysis of face spoofing detection works published in the last decade is presented. The analyzed works are categorized by their fundamental parts, i.e., descriptors and classifiers. This structured survey also brings a comparative performance analysis of the works considering the most important public data sets in the field. The methodology followed in this work is particularly suited to observing the temporal evolution of the field and trends in the existing approaches, to discussing still open issues, and to proposing new perspectives for the future of face spoofing detection.
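A typical descriptor-plus-classifier baseline from this literature can be sketched in a few lines: a uniform LBP histogram as the texture descriptor and an RBF SVM as the classifier. The random "face crops" and labels are placeholders, and this is meant only to make the descriptor/classifier decomposition used in the survey concrete, not to reproduce any particular surveyed method.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face, n_points=8, radius=1):
    """Texture descriptor: histogram of uniform LBP codes over the face crop."""
    lbp = local_binary_pattern(gray_face, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(n_points + 3), density=True)
    return hist

# Hypothetical data: grayscale face crops labelled 1 = live, 0 = spoof.
rng = np.random.default_rng(0)
faces = (rng.random((20, 64, 64)) * 255).astype(np.uint8)
labels = rng.integers(0, 2, 20)

X = np.stack([lbp_histogram(f) for f in faces])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```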
“Showrooming” and the Competition between Store and Online Retailers
Customers often evaluate products at brick-and-mortar stores to identify their “best fit” product, but end up buying this product not at the store but at a competing online retailer to take advantage of lower prices. This free-riding behavior by customers is referred to as “showrooming.” We analyze three strategies to counter the effect of showrooming that may improve profits for the brick-and-mortar stores: (a) price matching, (b) making product matching harder between the brick-and-mortar store and the online retailer, and (c) charging customers for showrooming. We show that only the last two strategies may improve profits of the brick-and-mortar stores. We also present an analysis to illustrate when a particular strategy, (b) or (c), does better than the other.
Human-like social skills in dogs?
Domestic dogs are unusually skilled at reading human social and communicative behavior--even more so than our nearest primate relatives. For example, they use human social and communicative behavior (e.g. a pointing gesture) to find hidden food, and they know what the human can and cannot see in various situations. Recent comparisons between canid species suggest that these unusual social skills have a heritable component and initially evolved during domestication as a result of selection on systems mediating fear and aggression towards humans. Differences in chimpanzee and human temperament suggest that a similar process may have been an important catalyst leading to the evolution of unusual social skills in our own species. The study of convergent evolution provides an exciting opportunity to gain further insights into the evolutionary processes leading to human-like forms of cooperation and communication.
Tax Evasion and Financial Development under Asymmetric Information in Credit Markets
Recent empirical studies have documented that the incidence of firms' tax evasion on their sales is negatively correlated with the country's level of financial development. Our analysis shows that this stylized fact can be theoretically accounted for within a small-open-economy model of optimal tax enforcement under asymmetric information in credit markets. In an economy with a more developed financial sector that exhibits smaller agency costs, we find that the government will raise its optimal probability of tax auditing, which in turn leads to more tax compliance. It follows that financial development and tax evasion are inversely related, as observed in the actual data.
Study on Bluetooth technology and its access to BACnet
Bluetooth technology is now widely used in wireless application systems; however, BACnet does not yet provide a wireless access service. This paper proposes a design for a Bluetooth/BACnet gateway after analyzing the architectures of Bluetooth and BACnet, and presents an implementation of the gateway based on the LPC2210 (ARM7) MCU.
Learning Features for Offline Handwritten Signature Verification using Deep Convolutional Neural Networks
Verifying the identity of a person using handwritten signatures is challenging in the presence of skilled forgeries, where a forger has access to a person's signature and deliberately attempts to imitate it. In offline (static) signature verification, the dynamic information of the signature writing process is lost, and it is difficult to design good feature extractors that can distinguish genuine signatures from skilled forgeries. This is reflected in relatively poor performance, with verification errors around 7% in the best systems in the literature. To address both the difficulty of obtaining good features and the need to improve system performance, we propose learning the representations from signature images, in a writer-independent format, using convolutional neural networks. In particular, we propose a novel formulation of the problem that includes knowledge of skilled forgeries from a subset of users in the feature learning process, aiming to capture visual cues that distinguish genuine signatures from forgeries regardless of the user. Extensive experiments were conducted on four datasets: GPDS, MCYT, CEDAR and the Brazilian PUC-PR dataset. On GPDS-160, we obtained a large improvement in state-of-the-art performance, achieving a 1.72% Equal Error Rate, compared to 6.97% in the literature. We also verified that the features generalize beyond the GPDS dataset, surpassing state-of-the-art performance on the other datasets without requiring the representation to be fine-tuned to each particular dataset.
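A minimal PyTorch sketch of the multi-task idea: a small convolutional backbone yields a fixed-length feature vector, with one head classifying the writer and a second predicting forgery versus genuine. The network sizes, input resolution and random stand-in batch are illustrative assumptions, not the architecture or data from the paper.

```python
import torch
import torch.nn as nn

class SignatureNet(nn.Module):
    """Toy writer-independent feature learner with two task heads."""
    def __init__(self, n_writers=50, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )
        self.writer_head = nn.Linear(feat_dim, n_writers)   # who wrote it
        self.forgery_head = nn.Linear(feat_dim, 1)           # genuine vs forgery

    def forward(self, x):
        feats = self.backbone(x)
        return feats, self.writer_head(feats), self.forgery_head(feats)

# One training step on random stand-in data (real inputs are signature scans).
model = SignatureNet()
images = torch.randn(8, 1, 150, 220)
writers = torch.randint(0, 50, (8,))
is_forgery = torch.randint(0, 2, (8,)).float()
feats, writer_logits, forgery_logit = model(images)
loss = (nn.functional.cross_entropy(writer_logits, writers)
        + nn.functional.binary_cross_entropy_with_logits(
              forgery_logit.squeeze(1), is_forgery))
loss.backward()
```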
Active defence through deceptive IPS
Modern security mechanisms such as Unified Threat Management (UTM), next-generation firewalls and Security Information and Event Management (SIEM) have become more sophisticated over recent years, promising advanced security features and immediate mitigation of the most advanced threats. While this appears promising, in practice even this cutting-edge technology often fails to protect modern organisations, as they are targeted by attacks that were previously unknown to the security industry. Most security mechanisms are based on a database of previously known attack artefacts (signatures) and will fail on slightly modified or new attacks. The need for threat intelligence is in complete contrast with the way current security solutions respond to the threats they identify: they immediately block them without attempting to acquire any further information. In this report, we present and evaluate a security mechanism that operates as an intrusion prevention system which uses honeypots to deceive an attacker, prevent a security breach and allow the potential acquisition of intelligence on each intrusion attempt. This article is published online by Computer Weekly as part of the 2017 Royal Holloway information security thesis series (http://www.computerweekly.com/ehandbook/Active-defence-through-deceptive-IPS). It is based on an MSc dissertation written as part of the MSc in Information Security at the ISG, Royal Holloway, University of London. The full thesis is published on the ISG's website at https://www.royalholloway.ac.uk/isg/.
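To make the idea of acquiring intelligence rather than merely blocking concrete, here is a minimal low-interaction honeypot in standard-library Python: it accepts connections on a decoy port, presents a fake banner and logs what each client sends. The port, banner and log format are arbitrary choices, and the mechanism evaluated in the thesis integrates such decoys with an intrusion prevention system rather than running them standalone.

```python
import socket
import datetime

# Minimal low-interaction honeypot: accept connections on an unused port,
# record who connected and what they sent, and reply with a fake banner.
# A deceptive IPS would instead redirect traffic it has flagged as malicious
# towards such a decoy service while the real service stays untouched.
HOST, PORT = "0.0.0.0", 2222

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")       # decoy banner
            data = conn.recv(1024)                          # attacker's input
            with open("honeypot.log", "a") as log:
                log.write(f"{datetime.datetime.utcnow().isoformat()} "
                          f"{addr[0]}:{addr[1]} {data!r}\n")
```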
Formal Semantics and Analysis of BPMN Process Models using Petri Nets
The Business Process Modelling Notation (BPMN) is a standard for capturing business processes in the early phases of systems development. The mix of constructs found in BPMN makes it possible to obtain models with a range of semantic errors. The ability to statically check the semantic correctness of models is thus a desirable feature for modelling tools based on BPMN. However, the static analysis of BPMN models is hindered by ambiguities in the standard specification and the complexity of the language. The fact that BPMN integrates constructs from graph-oriented process definition languages with features for concurrent execution of multiple instances of a subprocess and exception handling, makes it challenging to provide a formal semantics of BPMN. Even more challenging is to define a semantics that can be used to analyse BPMN models. This paper proposes a formal semantics of BPMN defined in terms of a mapping to Petri nets, for which efficient analysis techniques exist. The proposed mapping has been implemented as a tool that generates code in the Petri Net Markup Language. This formalisation exercise has also led to the identification of a number of deficiencies in the BPMN standard specification.
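The Petri net firing rule that such a mapping targets can be illustrated with a few lines of Python: a transition is enabled when all of its input places are marked, and firing it moves tokens from inputs to outputs. The three-transition net below is a made-up stand-in for a mapped BPMN sequence, not part of the paper's actual mapping or PNML output.

```python
# A transition is enabled when every input place holds a token; firing it
# consumes those tokens and produces tokens on the output places. Analysis of
# a mapped BPMN model boils down to exploring the markings reachable this way.
def enabled(marking, transition):
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    inputs, outputs = transition
    new = dict(marking)
    for p in inputs:
        new[p] -= 1
    for p in outputs:
        new[p] = new.get(p, 0) + 1
    return new

# Hypothetical net for a trivial sequence "start -> task A -> end".
net = {"t_start": (["p_start"], ["p_ready"]),
       "t_taskA": (["p_ready"], ["p_done"]),
       "t_end":   (["p_done"], ["p_final"])}
marking = {"p_start": 1}
for name, t in net.items():
    if enabled(marking, t):
        marking = fire(marking, t)
print(marking)   # the token ends up on p_final
```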
The international standards for neurological classification of spinal cord injury: relationship between S4-5 dermatome testing and anorectal testing
Study design: Prospective cross-sectional multicenter study. Objective: To evaluate the correlation, sensitivity, specificity and predictive values of S4-5 dermatome testing and the anorectal examination for determination of sacral sparing in the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) examination. Setting: Two tertiary hospitals that specialize in pediatric spinal cord injuries. Methods: In all, 189 patients who were at minimum 3 months after spinal cord injury underwent complete ISNCSCI examinations. All examiners completed training for the proper completion of the ISNCSCI examination. Correlation and sensitivity/specificity analyses were conducted between S4-5 dermatome testing and the anorectal examination. Results were analyzed by patient age, examiner, tetraplegia/paraplegia classification and injury level (T10-S3, L1-S3 and S3). Results: The correlation between S4-5 dermatome and anorectal sensation was moderate (0.62, P<0.001). Using the anorectal examination as the gold standard, the sensitivity of S4-5 testing was 0.60 (0.49, 0.70) and the specificity was 0.96 (0.90, 0.99). No single age group, tester, level, or type of injury differed from the overall result. Conclusion: In the pediatric population, the correlation between S4-5 and anorectal sensation was lower than anticipated. The sensitivity of 0.60 for S4-5 testing and the diminished sensation between T10 and S3 suggest that anorectal testing may either be a more sensitive representation of S4-5 function or activate an alternative neuronal pathway that is perceived by the patient. Further investigation into the validity of the sacral sparing components of the ISNCSCI examination is warranted.
May some HCV genotype 1 patients still benefit from dual therapy? The role of very early HCV kinetics.
When treating HCV patients with conventional dual therapy in the current context of rapidly evolving HCV therapy, outcome prediction is crucial, and HCV kinetics as early as 48 hours after the start of treatment may play a major role. We aimed at clarifying the role of very early HCV kinetics. We consecutively enrolled mono-infected HCV patients at 7 treatment sites in Central Italy and evaluated the predictive value of the logarithmic decay of HCV RNA 48 hours after the start of dual therapy (Delta48). Among the 171 enrolled patients, 144 were evaluable for early and sustained virological response (EVR, SVR) prediction; 108 (75.0%) reached EVR and 84 (58.3%) reached SVR. The mean Delta48 was 1.68 ± 1.22 log10 IU/ml and was higher in patients with SVR and EVR. Genotype-1 patients experiencing a Delta48 >2 logs showed a very high chance of success (100% positive predictive value), even in the absence of a rapid virological response (RVR). Evaluation of very early HCV kinetics helped identify a small but significant proportion of genotype-1 patients (close to 10%), in addition to those identified by RVR, who could be treated with dual therapy in spite of not reaching RVR. In the current European context, in which the sustainability of HCV therapy is a crucial issue, conventional dual therapy may still play a reasonable role in patients with good tolerance and early prediction of success.
Intelligent risk management tools for software development
Software tools have been used in software development for a long time. They are used for, among other things, performance analysis, testing and verification, debugging and building applications. Software tools can be very simple and lightweight, e.g. linkers, or very large and complex, e.g. computer-assisted software engineering (CASE) tools and integrated development environments (IDEs). Some tools support particular phases of the project cycle, while others can be used with a specific software development model or technology. Some aspects of software development, such as risk management, are carried out throughout the whole project, from inception to commissioning. The aim of this paper is to demonstrate the need for an intelligent risk assessment and management tool for agile, traditional, or combined methods in software development. The authors propose a model, whose development is the subject of further research, which can be investigated for use in developing intelligent risk management tools.