title | abstract |
---|---|
Neurobiological mechanisms involved in sleep bruxism. | Sleep bruxism (SB) is reported by 8% of the adult population and is mainly associated with rhythmic masticatory muscle activity (RMMA) characterized by repetitive jaw muscle contractions (3 bursts or more at a frequency of 1 Hz). The consequences of SB may include tooth destruction, jaw pain, headaches, or the limitation of mandibular movement, as well as tooth-grinding sounds that disrupt the sleep of bed partners. SB is probably an extreme manifestation of a masticatory muscle activity occurring during the sleep of most normal subjects, since RMMA is observed in 60% of normal sleepers in the absence of grinding sounds. The pathophysiology of SB is becoming clearer, and there is an abundance of evidence outlining the neurophysiology and neurochemistry of rhythmic jaw movements (RJM) in relation to chewing, swallowing, and breathing. The sleep literature provides much evidence describing the mechanisms involved in the reduction of muscle tone, from sleep onset to the atonia that characterizes rapid eye movement (REM) sleep. Several brainstem structures (e.g., reticular pontis oralis, pontis caudalis, parvocellularis) and neurochemicals (e.g., serotonin, dopamine, gamma aminobutyric acid [GABA], noradrenaline) are involved in both the genesis of RJM and the modulation of muscle tone during sleep. It remains unknown why a high percentage of normal subjects present RMMA during sleep and why this activity is three times more frequent and higher in amplitude in SB patients. It is also unclear why RMMA during sleep is characterized by co-activation of both jaw-opening and jaw-closing muscles instead of the alternating jaw-opening and jaw-closing muscle activity pattern typical of chewing. The final section of this review proposes that RMMA during sleep has a role in lubricating the upper alimentary tract and increasing airway patency. The review concludes with an outline of questions for future research. |
Combining Retrieval, Statistics, and Inference to Answer Elementary Science Questions | What capabilities are required for an AI system to pass standard 4th Grade Science Tests? Previous work has examined the use of Markov Logic Networks (MLNs) to represent the requisite background knowledge and interpret test questions, but did not improve upon an information retrieval (IR) baseline. In this paper, we describe an alternative approach that operates at three levels of representation and reasoning: information retrieval, corpus statistics, and simple inference over a semi-automatically constructed knowledge base, to achieve substantially improved results. We evaluate the methods on six years of unseen, unedited exam questions from the NY Regents Science Exam (using only non-diagram, multiple choice questions), and show that our overall system’s score is 71.3%, an improvement of 23.8% (absolute) over the MLN-based method described in previous work. We conclude with a detailed analysis, illustrating the complementary strengths of each method in the ensemble. Our datasets are being released to enable further research. |
Aircraft Control via Variable Cant-Angle Winglets | This paper investigates a novel method for the control of “morphing” aircraft. The concept consists of a pair of winglets with adjustable cant angle, independently actuated and mounted at the tips of a baseline flying wing. The general philosophy behind the concept was that for specific flight conditions such as a coordinated turn, the use of two control devices would be sufficient for adequate control. Computations with a vortex lattice model and subsequent wind-tunnel tests demonstrate the viability of the concept, with individual and/or dual winglet deflection producing multi-axis coupled control moments. Comparisons between the experimental and computational results showed reasonable to good agreement, with the major discrepancies thought to be due to wind-tunnel model aeroelastic effects. |
RESERVOIR COMPUTING: A POWERFUL BLACK-BOX FRAMEWORK FOR NONLINEAR AUDIO PROCESSING | This paper proposes reservoir computing as a general framework for nonlinear audio processing. Reservoir computing is a novel approach to recurrent neural network training with the advantage of a very simple, linear learning algorithm. It can in theory approximate arbitrary nonlinear dynamical systems with arbitrary precision, has an inherent temporal processing capability, and is therefore well suited to many nonlinear audio processing problems. Whenever nonlinear relationships are present in the data and timing information is crucial, reservoir computing can be applied. Examples from three application areas are presented: nonlinear system identification of a tube amplifier emulator algorithm; nonlinear audio prediction, as needed in wireless transmission of audio where dropouts may occur; and automatic melody transcription from a polyphonic audio stream, as one example from the broad field of music information retrieval. Reservoir computing was able to outperform state-of-the-art alternative models in all studied tasks. |
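The "very simple and linear learning algorithm" that this abstract highlights can be illustrated with a toy echo state network: only the linear readout is trained, here by ridge regression. Everything below (reservoir size, spectral-radius scaling, the tanh "amp-like" target signal) is an illustrative assumption of mine, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy echo state network; sizes and scalings are illustrative choices.
n_in, n_res, washout = 1, 100, 50
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # keep spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for a 1-D input signal u (untrained dynamics)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Target: a mildly nonlinear, "tube-amp-like" distortion of the input.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
y = np.tanh(3 * u)

states = run_reservoir(u)
X, Y = states[washout:], y[washout:]           # discard initial transient
# Linear readout by ridge regression -- the only trained part of the model.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
pred = X @ W_out
```

The design point the abstract makes is visible here: the recurrent weights `W` are never trained, so the entire learning problem reduces to one linear solve.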
Self-Interest, Altruism, Incentives, and Agency Theory | Many people are suspicious of self-interest and incentives and oppose motivating humans through incentives. I analyze the meaning of incentives in the logic of choice and argue that it is inconceivable that purposeful actions are anything other than responses to incentives. Money is not always the best way to motivate people. But where money incentives are required, they are required precisely because people are motivated by things other than money. Self-interest is consistent with altruistic motives. Agency problems, however, cannot be solved by instilling greater altruism in people because altruism, the concern for the well-being of others, does not make a person into a perfect agent who does the bidding of others. I discuss the universal tendency of people to behave in non-rational ways. Though they are Resourceful, Evaluative Maximizers (REMMs), humans are imperfect because their brains are biologically structured so as to blind them from perceiving and correcting errors that cause psychic pain. The result is systematic, non-functional behavior. I discuss a Pain Avoidance Model (PAM) that complements REMM by capturing the non-rational component of human behavior (the crux of human self-control problems). Recognizing these self-control problems leads to an expansion of agency theory, since they are a second source of agency costs in addition to those generated by conflicts of interest between people. © M. C. Jensen, August 31, 1994. Forthcoming in the Journal of Applied Corporate Finance (Summer 1994). |
Evaluating the language resources of chatbots for their potential in english as a second language | This paper investigates the linguistic worth of current ‘chatbot’ programs – software programs which attempt to hold a conversation, or interact, in English – as a precursor to their potential as an ESL (English as a second language) learning resource. After some initial background to the development of chatbots, and a discussion of the Loebner Prize Contest for the most ‘human’ chatbot (the ‘Turing Test’), the paper describes an in-depth study evaluating the linguistic accuracy of a number of chatbots available online. Since the ultimate purpose of the current study concerns chatbots’ potential with ESL learners, the analysis of language embraces not only an examination of features of language from a native-speaker’s perspective (the focus of the Turing Test), but also aspects of language from a second-language-user’s perspective. Analyses indicate that while the winner of the 2005 Loebner Prize is the most able chatbot linguistically, it may not necessarily be the chatbot most suited to ESL learners. The paper concludes that while substantial progress has been made in terms of chatbots’ language-handling, a robust ESL ‘conversation practice machine’ (Atwell, 1999) is still some way off being a reality. |
The Jalview Java alignment editor | Multiple sequence alignment remains a crucial method for understanding the function of groups of related nucleic acid and protein sequences. However, it is known that automatic multiple sequence alignments can often be improved by manual editing. Therefore, tools are needed to view and edit multiple sequence alignments. Due to growth in the sequence databases, multiple sequence alignments can often be large and difficult to view efficiently. The Jalview Java alignment editor is presented here, which enables fast viewing and editing of large multiple sequence alignments. |
Comparison study of non-orthogonal multiple access schemes for 5G | With the development of the mobile Internet and the Internet of things (IoT), 5th generation (5G) wireless communications will see an explosive increase in mobile traffic. To address challenges in 5G such as higher spectral efficiency, massive connectivity, and lower latency, several non-orthogonal multiple access (NOMA) schemes have recently been actively investigated, including power-domain NOMA, multiple access with low-density spreading (LDS), sparse code multiple access (SCMA), multiuser shared access (MUSA), pattern division multiple access (PDMA), etc. Unlike conventional orthogonal multiple access (OMA) schemes, NOMA can realize overloading by introducing controllable interference at the cost of slightly increased receiver complexity, achieving significant gains in spectral efficiency and accommodating many more users. In this paper, we discuss the basic principles and key features of three typical NOMA schemes, i.e., SCMA, MUSA, and PDMA, and compare their performance in terms of uplink bit error rate (BER). Simulation results show that in typical Rayleigh fading channels, SCMA has the best performance, while the BER performance of MUSA and PDMA are very close to each other. In addition, we analyze the performance of PDMA using the same factor graph as SCMA, which indicates that the performance gain of SCMA over PDMA comes from both the difference in factor graph and the codebook optimization. |
hyperdoc2vec: Distributed Representations of Hypertext Documents | Hypertext documents, such as web pages and academic papers, are of great importance in delivering information in our daily life. Although being effective on plain documents, conventional text embedding methods suffer from information loss if directly adapted to hyper-documents. In this paper, we propose a general embedding approach for hyper-documents, namely, hyperdoc2vec, along with four criteria characterizing necessary information that hyper-document embedding models should preserve. Systematic comparisons are conducted between hyperdoc2vec and several competitors on two tasks, i.e., paper classification and citation recommendation, in the academic paper domain. Analyses and experiments both validate the superiority of hyperdoc2vec to other models w.r.t. the four criteria. |
Measuring the effects of business intelligence systems: The relationship between business process and organizational performance | Article history: Received 1 July 2007. Received in revised form 15 March 2008. Accepted 20 March 2008. Business intelligence (BI) systems provide the ability to analyse business information in order to support and improve management decision making across a broad range of business activities. They leverage the large data infrastructure investments (e.g. ERP systems) made by firms, and have the potential to realise the substantial value locked up in a firm's data resources. While substantial business investment in BI systems continues to accelerate, there is a complete absence of a specific and rigorous method to measure the realised business value, if any. By exploiting the lessons learned from prior attempts to measure the business value of IT-intensive systems, we develop a new measure that is based on an understanding of the characteristics of BI systems in a process-oriented framework. We then employ the measure in an examination of the relationship between business process performance and organizational performance, finding significant differences in the strength of the relationship between industry sectors. This study reinforces the need to consider the specific context of use when designing performance measurement for IT-intensive systems, and highlights the need for further research examining contextual moderators to the realisation of such performance benefits. Crown Copyright © 2008 Published by Elsevier Inc. All rights reserved. |
Outcome and prognostic factors in high-risk childhood adrenocortical carcinomas: A report from the European Cooperative Study Group on Pediatric Rare Tumors (EXPeRT). | OBJECTIVES
The aim of this retrospective international analysis was to evaluate the role of risk factors in pediatric patients with adrenocortical carcinoma (ACC) observed in European countries (2000-2013) in an attempt to identify factors associated with poor prognosis.
PROCEDURES
Data were retrieved from the databases of Germany, France, Poland, and Italy, which form the European Cooperative Study Group on Pediatric Rare Tumors (EXPeRT). Patients were less than 18 years old, with at least one of the following tumor-related risk factors: metastases, volume more than 200 cm3, Cushing syndrome, vascular or regional lymph node invasion, initial biopsy, or incomplete excision. The role of patients' age was also evaluated.
RESULTS
Eighty-two patients were evaluated: 62 with localized disease and 20 with metastases. The 3-year progression-free survival (PFS) and overall survival (OS) were 39% and 55%, respectively, for the whole population, and 51% and 73%, respectively, for localized disease. For the whole population, PFS and OS were influenced by distant metastases, tumor volume, lymph node involvement, age, and the presence of two or more risk factors. Factors significant only for OS were vascular involvement and incomplete surgery. At multivariable analysis, the main factors for PFS were volume more than 200 cm3 (hazard ratio [HR]: 2.6, 95% confidence interval [CI]: 1.18-5.70) and the presence of distant metastases (HR: 8.26, 95% CI: 3.49-19.51). OS was significantly influenced by the presence of metastases (P < 0.0001). For patients with localized tumors, the only significant prognostic factor was volume more than 200 cm3, with an HR of 4.38 (95% CI: 1.60-12.00) for PFS and of 3.68 (95% CI: 1.02-13.30) for OS.
CONCLUSIONS
Distant metastases and large tumor volume were the main unfavorable prognostic factors. Presence of two or more factors related to ACC was associated with an aggressive behavior of disease. |
Active oral regimen for elderly adults with newly diagnosed acute myelogenous leukemia: a preclinical and phase 1 trial of the farnesyltransferase inhibitor tipifarnib (R115777, Zarnestra) combined with etoposide. | The farnesyltransferase inhibitor tipifarnib exhibits modest activity against acute myelogenous leukemia. To build on these results, we examined the effect of combining tipifarnib with other agents. Tipifarnib inhibited signaling downstream of the farnesylated small G protein Rheb and synergistically enhanced etoposide-induced antiproliferative effects in lymphohematopoietic cell lines and acute myelogenous leukemia isolates. We subsequently conducted a phase 1 trial of tipifarnib plus etoposide in adults over 70 years of age who were not candidates for conventional therapy. A total of 84 patients (median age, 77 years) received 224 cycles of oral tipifarnib (300-600 mg twice daily for 14 or 21 days) plus oral etoposide (100-200 mg daily on days 1-3 and 8-10). Dose-limiting toxicities occurred with 21-day tipifarnib. Complete remissions were achieved in 16 of 54 (30%) receiving 14-day tipifarnib versus 5 of 30 (17%) receiving 21-day tipifarnib. Complete remissions occurred in 50% of two 14-day tipifarnib cohorts: 3A (tipifarnib 600, etoposide 100) and 8A (tipifarnib 400, etoposide 200). In vivo, tipifarnib plus etoposide decreased ribosomal S6 protein phosphorylation and increased histone H2AX phosphorylation and apoptosis. Tipifarnib plus etoposide is a promising orally bioavailable regimen that warrants further evaluation in elderly adults who are not candidates for conventional induction chemotherapy. These clinical studies are registered at www.clinicaltrials.gov as #NCT00112853. |
White striping and woody breast myopathies in the modern poultry industry: a review. | Myopathies are gaining the attention of poultry meat producers globally. White Striping (WS) is a condition characterized by the occurrence of white striations parallel to muscle fibers on breast, thigh, and tender muscles of broilers, while Woody Breast (WB) imparts tougher consistency to raw breast fillets. Histologically, both conditions have been characterized with myodegeneration and necrosis, fibrosis, lipidosis, and regenerative changes. The occurrence of these modern myopathies has been associated with increased growth rate in birds. The severity of the myopathies can adversely affect consumer acceptance of raw cut up parts and/or quality of further processed poultry meat products, resulting in huge economic loss to the industry. Even though gross and/or histologic characteristics of modern myopathies are similar to some of the known conditions, such as hereditary muscular dystrophy, nutritional myopathy, toxic myopathies, and marbling, WS and WB could have a different etiology. As a result, there is a need for future studies to identify markers for WS and WB in live birds and genetic, nutritional, and/or management strategies to alleviate the condition. |
A prototype force sensing unit for a capacitive-type force-torque sensor | Force sensing is a crucial task for robots, especially when end effectors such as fingers and hands need to interact with an unknown environment, for example in a humanoid robot. In order to sense such forces, a force/torque sensor is an essential component. Many available force/torque sensors are based on strain gauges, but other sensing principles are also possible. In this paper we describe steps toward a capacitive-type sensor. Several MEMS capacitive sensors are described in the literature; however, very few larger sensors are available, as capacitive sensors usually have disadvantages such as severe hysteresis and temperature sensitivity. On the other hand, capacitive sensors have the advantage of the availability of small-sized chips for sensor readout and digitization. We employ a copper-beryllium transducer, modified from those described in the literature so that it can be used in a small-sized, robust force/torque sensor. Therefore, as the first step toward the goal of building such a sensor, in this study we have created a prototype sensing unit and tested its sensitivity. No viscoelastic materials, which usually introduce severe hysteresis in capacitive sensors, are used in the sensing unit. We have achieved a high signal-to-noise ratio, high sensitivity, and a range of 10 N. |
Creation and Evaluation of emotion expression with body movement, sound and eye color for humanoid robots | The ability to display emotions is a key feature in human communication, and also for robots that are expected to interact with humans in social environments. For expressions based on Body Movement and signals other than facial expressions, such as Sound, no common ground has been established so far. Based on psychological research on human expression of emotions and perception of emotional stimuli, we created eight different expressional designs for the emotions Anger, Sadness, Fear and Joy, consisting of Body Movements, Sounds and Eye Colors. In a large pre-test we evaluated the recognition ratios for the different expressional designs. In our main experiment we separated the expressional designs into their single cues (Body Movement, Sound, Eye Color) and evaluated their expressivity. The detailed view of the perception of our expressional cues allowed us to evaluate the appropriateness of the stimuli, check our implementations for flaws, and build a basis for systematic revision. Our analysis revealed that almost all Body Movements were appropriate for their target emotion and that some of our Sounds need revision. Eye Colors were identified as an unreliable component for emotional expression. |
Hardware Trojan Attacks: Threat Analysis and Countermeasures | Security of a computer system has been traditionally related to the security of the software or the information being processed. The underlying hardware used for information processing has been considered trusted. The emergence of hardware Trojan attacks violates this root of trust. These attacks, in the form of malicious modifications of electronic hardware at different stages of its life cycle, pose major security concerns in the electronics industry. An adversary can mount such an attack with an objective to cause operational failure or to leak secret information from inside a chip-e.g., the key in a cryptographic chip, during field operation. Global economic trend that encourages increased reliance on untrusted entities in the hardware design and fabrication process is rapidly enhancing the vulnerability to such attacks. In this paper, we analyze the threat of hardware Trojan attacks; present attack models, types, and scenarios; discuss different forms of protection approaches, both proactive and reactive; and describe emerging attack modes, defenses, and future research pathways. |
An Introduction to Nonlinear Dimensionality Reduction by Maximum Variance Unfolding | Many problems in AI are simplified by clever representations of sensory or symbolic input. How to discover such representations automatically, from large amounts of unlabeled data, remains a fundamental challenge. The goal of statistical methods for dimensionality reduction is to detect and discover low dimensional structure in high dimensional data. In this paper, we review a recently proposed algorithm— maximum variance unfolding—for learning faithful low dimensional representations of high dimensional data. The algorithm relies on modern tools in convex optimization that are proving increasingly useful in many areas of machine learning. |
Evolutionary computation: comments on the history and current state | Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950’s. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) [with links to genetic programming(GP) and classifier systems (CS)], evolution strategies(ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e., representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview on the manifold of application domains, although this necessarily must remain incomplete. |
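The constituents this survey enumerates (representation, variation operators, reproduction, and selection) can be made concrete with a minimal genetic algorithm on the classic OneMax problem (maximize the number of 1-bits). All parameter choices below are illustrative defaults of mine, not values from the article.

```python
import random

random.seed(1)

# Minimal GA: bitstring representation, tournament selection,
# one-point crossover, per-bit mutation. Parameters are illustrative.
N_BITS, POP, GENS, P_MUT = 40, 60, 80, 1 / 40

def fitness(ind):
    return sum(ind)                        # OneMax: count the 1-bits

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, N_BITS)      # one-point crossover
    return a[:cut] + b[cut:]

def mutate(ind):
    return [bit ^ (random.random() < P_MUT) for bit in ind]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]
best = max(pop, key=fitness)
```

Swapping the representation, variation operators, or selection mechanism in this skeleton is exactly how the GA/ES/EP families the survey compares differ from one another.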
Structured Aspect Extraction | Aspect extraction identifies relevant features of an entity from a textual description and is typically targeted to product reviews, and other types of short text, as an enabling task for, e.g., opinion mining and information retrieval. Current aspect extraction methods mostly focus on aspect terms, often neglecting associated modifiers or embedding them in the aspect terms without proper distinction. Moreover, flat syntactic structures are often assumed, resulting in inaccurate extractions of complex aspects. This paper studies the problem of structured aspect extraction, a variant of traditional aspect extraction aiming at a fine-grained extraction of complex (i.e., hierarchical) aspects. We propose an unsupervised and scalable method for structured aspect extraction consisting of statistical noun phrase clustering, cPMI-based noun phrase segmentation, and hierarchical pattern induction. Our evaluation shows a substantial improvement over existing methods in terms of both quality and computational efficiency. |
Combined Effects of Time Spent in Physical Activity, Sedentary Behaviors and Sleep on Obesity and Cardio-Metabolic Health Markers: A Novel Compositional Data Analysis Approach | The associations of time spent in sleep, sedentary behaviors (SB) and physical activity with health are usually studied without taking into account that time is finite during the day, so the times spent in these behaviors are codependent. Therefore, little is known about the combined effect on obesity and cardio-metabolic health markers of time spent in sleep, SB and physical activity, which together constitute a composite whole. A cross-sectional analysis of the NHANES 2005-6 cycle (N = 1937 adults) was undertaken using a compositional analysis paradigm, which accounts for this intrinsic codependence. Time spent in SB, light-intensity physical activity (LIPA) and moderate-to-vigorous physical activity (MVPA) was determined from accelerometry and combined with self-reported sleep time to obtain the 24-hour time-budget composition. The distribution of time spent in sleep, SB, LIPA and MVPA is significantly associated with BMI, waist circumference, triglycerides, plasma glucose, plasma insulin (all p<0.001), and systolic (p<0.001) and diastolic blood pressure (p<0.003), but not HDL or LDL. Within the composition, the strongest positive effect is found for the proportion of time spent in MVPA. Strikingly, the effects of MVPA replacing another behavior and of MVPA being displaced by another behavior are asymmetric. For example, re-allocating 10 minutes of SB to MVPA was associated with a 0.001% lower waist circumference, but if 10 minutes of MVPA is displaced by SB this was associated with a 0.84% higher waist circumference. The proportions of time spent in LIPA and SB were detrimentally associated with obesity and cardiovascular disease markers, but the association with SB was stronger. For diabetes risk markers, replacing SB with LIPA was associated with more favorable outcomes. Time spent in MVPA is an important target for intervention, and preventing transfer of time from LIPA to SB might lessen the negative effects of physical inactivity. |
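The codependence problem this abstract describes is usually handled by mapping the composition into unconstrained log-ratio coordinates before regression. Below is a toy sketch of a standard isometric log-ratio (ilr) transform applied to a 24-hour (sleep, SB, LIPA, MVPA) budget; the minutes are made-up illustration and this is a generic ilr construction, not the authors' exact pipeline.

```python
import numpy as np

# Hypothetical 24-h time budget in minutes: sleep, SB, LIPA, MVPA.
minutes = np.array([480.0, 560.0, 370.0, 30.0])   # sums to 1440

def closure(x):
    """Rescale a composition so its parts sum to 1."""
    return x / x.sum()

def ilr(x):
    """Pivot (sequential-binary-partition) ilr coordinates of a composition."""
    x = closure(x)
    d = len(x)
    z = []
    for i in range(d - 1):
        gm = np.exp(np.mean(np.log(x[i + 1:])))   # geometric mean of the rest
        z.append(np.sqrt((d - 1 - i) / (d - i)) * np.log(x[i] / gm))
    return np.array(z)

z = ilr(minutes)
# z has d-1 = 3 unconstrained real coordinates, so it can enter an ordinary
# regression against BMI, waist circumference, or other health markers.
```

Because the ilr coordinates are scale-invariant, the analysis depends only on the relative time budget, which is the intrinsic codependence the abstract refers to.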
Modern Coding Theory | Iterative techniques have revolutionized the theory and practice of coding and have been adopted in the majority of next-generation communications standards. Modern Coding Theory summarizes the state of the art in iterative coding, with particular emphasis on the underlying theory. Starting with Gallager's original ensemble of low-density parity-check codes as a representative example, focus is placed on the techniques to analyze and design practical iterative coding systems. The basic concepts are then extended to several general codes, including the practically important class of turbo codes. The book takes advantage of the simplicity of the binary erasure channel to develop analytical techniques and intuition, which are then applied to general channel models. A chapter on factor graphs helps to unify the important topics of information theory, coding, and communication theory. Covering the most recent advances in the field, this book is a valuable resource for graduate students in electrical engineering and computer science, as well as practitioners who need to decide which coding scheme to employ, how to design a new scheme, or how to improve an existing system. Additional resources, including instructor's solutions and figures, are available online (ISBN 978-0-521-85229-6, hardback). |
Assumptions of multiple regression: Correcting two misconceptions | In 2002, an article entitled “Four assumptions of multiple regression that researchers should always test” by Osborne and Waters was published in PARE. This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for “regression assumptions”. While Osborne and Waters’ efforts in raising awareness of the need to check assumptions when using regression are laudable, we note that the original article contained at least two fairly important misconceptions about the assumptions of multiple regression: Firstly, that multiple regression requires the assumption of normally distributed variables; and secondly, that measurement errors necessarily cause underestimation of simple regression coefficients. In this article, we clarify that multiple regression models estimated using ordinary least squares require the assumption of normally distributed errors in order for trustworthy inferences, at least in small samples, but not the assumption of normally distributed response or predictor variables. Secondly, we point out that regression coefficients in simple regression models will be biased (toward zero) estimates of the relationships between variables of interest when measurement error is uncorrelated across those variables, but that when correlated measurement error is present, regression coefficients may be either upwardly or downwardly biased. We conclude with a brief corrected summary of the assumptions of multiple regression when using ordinary least squares. |
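Both corrections in this abstract can be checked numerically. The simulation below is an illustrative sketch (the distributions, sample size, and true slope are my choices, not the article's): a heavily skewed predictor with normal errors still yields an unbiased OLS slope, while uncorrelated measurement error in the predictor attenuates the slope toward zero by the reliability ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0

# 1) Non-normal predictor, normal *errors*: OLS still recovers beta.
x = rng.exponential(1.0, n)               # heavily skewed predictor
y = beta * x + rng.normal(0.0, 1.0, n)    # normally distributed errors
b_ols = np.cov(x, y)[0, 1] / np.var(x)    # simple-regression slope

# 2) Uncorrelated measurement error in x biases the slope toward zero
#    by the reliability ratio var(x) / (var(x) + var(noise)).
x_obs = x + rng.normal(0.0, 1.0, n)       # noisy measurement of x
b_att = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
reliability = np.var(x) / (np.var(x) + 1.0)
```

With an exponential(1) predictor, var(x) is about 1, so the reliability ratio is roughly 0.5 and the attenuated slope lands near 1.0 rather than the true 2.0, exactly the "biased toward zero" case the abstract describes for uncorrelated measurement error.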
Extremely low frequency electromagnetic fields prevent chemotherapy induced myelotoxicity. | Side effects of chemo-radiotherapy reduce the quality and also the survivability of patients. The consequent fatigue and infections, related to myelodepression, act to reduce the dose-intensity of the protocol. Late side effects of chemo-radiotherapy include secondary tumours, acute myeloid leukemias and cardiotoxicity. Side effects of chemotherapy are related to oxidative stress produced by the treatment. Oxidative stress also reduces the efficacy of the treatment. Antioxidative treatment with natural (dietetic) or chemical agents has been reported to reduce the toxicity of chemo-radiotherapy and improve the efficacy of treatment. We here report our experience with SEQEX, an electromedical device that generates Extremely Low Frequency ElectroMagnetic Fields (ELF-EMF) to produce endogenic cyclotronic ionic resonance, to reduce myelotoxicity consequent to ABVD protocol in patients with Hodgkin's lymphoma. |
Overview of research on the educational use of video games | This paper overviews research on the educational use of video games by examining the viability of the different learning theories in the field, namely behaviorism, cognitivism, constructionism and the socio-cultural approach. In addition, five key tensions that emerge from the current research are examined: 1) learning vs. playing, 2) freedom vs. control, 3) drill-and-practice games vs. microworlds, 4) transmission vs. construction, 5) teacher intervention vs. no teacher intervention. Keywords: education • video games • edutainment • learning • overview • behaviorism • constructionism • cognitivism • socio-cultural • instructional technology • playing • teacher. More than once we have heard that research on video games is an emerging field in which there has been no prior research, even though this is clearly not the case. Unfortunately, amnesia shackles too many researchers. In providing a comprehensive overview of the educational use of computer games, the paper contributes to the cure for this amnesia and highlights key tensions emerging from the current research that should be considered by practitioners and researchers alike. The method behind this overview: The paper presents the most influential research on the subject based on an extensive search of the literature. The following database resources were used: ERIC, PsycINFO, Medline, Ingenta, Emerald, ProQuest, Game Studies, Game-research.com, Simulation & Gaming and several others, as well as the most recent literature overviews, conference proceedings and websites of researchers. References found throughout these sources have been further expanded by examination of the references cited there. |
Adalimumab therapy in children with Crohn disease previously treated with infliximab. | OBJECTIVES
Adalimumab, a humanised anti-tumour necrosis factor antibody, is an effective treatment in adult patients with refractory Crohn disease (CD). The available literature on its efficacy in children remains limited. We aimed to evaluate the real-world efficacy in paediatric patients with CD and compare the efficacy between infliximab (IFX) nonresponders and patients who lost response to IFX.
METHODS
All Dutch patients with CD receiving adalimumab before the age of 18 years after previous IFX therapy were identified. We analysed longitudinal disease activity, assessed by the mathematically weighted Pediatric Crohn's Disease Activity Index (wPCDAI) or the physician global assessment (PGA), and adverse events (AEs).
RESULTS
Fifty-three patients with CD were included. Twelve patients received monotherapy and the others received combination treatment with thiopurines (n = 21), methotrexate (n = 11), steroids (n = 7), or exclusive enteral nutrition (n = 2). Median follow-up was 12 months (interquartile range 5-23). Remission was reached in 34 patients (64%, wPCDAI < 12.5 or PGA = 0) after a median of 3.3 months, and was maintained in 50% of them for 2 years. Eleven patients (21%) reached response but not remission (decrease in wPCDAI ≥ 17.5 or decrease in PGA). Eighteen patients (34%) failed adalimumab treatment because of nonresponse (n = 4), loss of response (n = 11), or AEs (n = 3). More IFX nonresponders failed adalimumab treatment than patients who had lost response to IFX (2/3 vs 8/34, hazard ratio 18.8, 95% confidence interval 1.1-303.6). Only 1 patient encountered a serious AE, a severe but nonfatal infection.
CONCLUSIONS
In clinical practice, adalimumab induces remission in two-thirds of children with IFX refractory CD. |
Outcome of in vitro fertilization/intracytoplasmic sperm injection after laparoscopic cystectomy for endometriomas. | OBJECTIVE
To assess the impact of prior unilateral or bilateral endometrioma cystectomy on controlled ovarian hyperstimulation (COH) and intracytoplasmic sperm injection (ICSI) outcome.
DESIGN
Retrospective case-control study.
SETTING
Department of Obstetrics and Gynecology, School of Medicine, Hacettepe University, Ankara, Turkey.
PATIENT(S)
Fifty-seven consecutive infertile patients were enrolled who had previously undergone unilateral (n = 34) or bilateral (n = 23) laparoscopic cystectomy for endometriomas more than 3 cm in diameter and underwent ICSI. The control group consisted of 99 patients with tubal factor infertility.
INTERVENTION(S)
Controlled ovarian hyperstimulation and ICSI.
MAIN OUTCOME MEASURE(S)
Cycle cancellation rate, number of oocytes, fertilization rate, embryo quality, clinical pregnancy rate (PR), and implantation rate.
RESULT(S)
The mean numbers of oocytes, metaphase II oocytes, and two-pronucleated oocytes were significantly lower in the bilateral cystectomy group compared with the unilateral cystectomy and control groups. However, all other parameters, including fertilization rate, the mean number of embryos transferred, the mean number of grade 1 embryos transferred, the clinical PR per embryo transfer, and implantation rate, were comparable among the three groups. Within the unilateral cystectomy group, the mean number of oocytes retrieved from the operated site was significantly lower than that from the contralateral nonoperated site.
CONCLUSION(S)
Laparoscopic endometrioma cystectomy does reduce the ovarian reserve. However, diminished ovarian reserve does not translate into impaired pregnancy outcome. |
Research Guides: Intellectual Property Issues in Engineering: Getting Permissions | A resource for engineering graduate students at the University of Arkansas, intended to help them make informed decisions about copyright, patents, and other intellectual property matters associated with their studies |
Bacteriological quality of raw cow milk in Shahrekord, Iran. | Aim: The aim of this study was to evaluate the rate of contamination with Escherichia coli, coliforms, and Staphylococcus aureus in raw milk from Shahrekord city, Iran. Materials and Methods: In this study, 300 raw milk samples were collected randomly from five regions, namely the northeast, east, southeast, south, and southwest regions of Shahrekord city, according to a stratified random sampling design. Samples were analyzed for total plate count (TPC), S. aureus, coliforms, and E. coli. Results: Out of 300 samples of raw milk, contamination with coliforms, E. coli, and S. aureus was observed in 237 (79%), 207 (69%), and 125 (41.66%) samples, respectively. The highest rates of contamination were in samples from the southwest region, where coliforms, E. coli, and S. aureus were present in 30 (100%), 29 (96.66%), and 19 (63.33%) samples, respectively (p<0.05). Conclusions: Considering the high rate of raw milk contamination with S. aureus, E. coli, and coliforms, sanitary practices during collecting, transporting, and storage, especially in the summer season, are recommended. |
Towards Modernizing the Future of American Voting | With the proliferation of computing and information technologies, we have an opportunity to envision a fully participatory democracy in the country through a fully digitized voting platform. However, the growing interconnectivity of systems and people across the globe, and the proliferation of cybersecurity issues, pose a significant bottleneck to achieving such a vision. In this paper, we discuss a vision to modernize our voting processes and discuss the challenges of creating a national e-voting framework that incorporates policies, standards, and technological infrastructure that is secure, privacy-preserving, resilient, and transparent. Through partnerships among private industry, academia, and state and federal government, technology must be the catalyst for developing a national platform for American voters. Along with integrating biometrics to authenticate each registered voter for transparency and accountability, the platform adds depth to the e-voting infrastructure with emerging blockchain technologies. We outline how the voting process runs today and the challenges states face, from funding to software-development concerns. Additionally, we highlight attacks such as malware infiltration through off-the-shelf products manufactured in countries such as China. This paper illustrates strategic-level voting challenges and modernization processes that will enhance voters' trust in American democracy. |
Cost-effectiveness of tuberculosis screening and isoniazid treatment in the TB/HIV in Rio (THRio) Study. | OBJECTIVE
To estimate the incremental cost-effectiveness of tuberculosis (TB) screening and isoniazid preventive therapy (IPT) among human immunodeficiency virus (HIV) infected adults in Rio de Janeiro, Brazil.
DESIGN
We used decision analysis, populated by data from a cluster-randomized trial, to project the costs (in 2010 USD) and effectiveness (in disability-adjusted life years [DALYs] averted) of training health care workers to implement the tuberculin skin test (TST), followed by IPT for TST-positive patients with no evidence of active TB. This intervention was compared to a baseline of usual care. We used time horizons of 1 year for the intervention and 20 years for disease outcomes, with all future DALYs and medical costs discounted at 3% per year.
RESULTS
Providing this intervention to 100 people would avert 1.14 discounted DALYs (1.57 undiscounted DALYs). The median estimated incremental cost-effectiveness ratio was $2273 (IQR $1779-$3135) per DALY averted, less than Brazil's 2010 per capita gross domestic product (GDP) of $11,700. Results were most sensitive to the cost of providing the training.
CONCLUSION
Training health care workers to screen HIV-infected adults with TST and provide IPT to those with latent tuberculous infection can be considered cost-effective relative to the Brazilian GDP per capita. |
Handling a trillion (unfixable) flaws on a billion devices: Rethinking network security for the Internet-of-Things | The Internet-of-Things (IoT) has quickly moved from the realm of hype to reality with estimates of over 25 billion devices deployed by 2020. While IoT has huge potential for societal impact, it comes with a number of key security challenges---IoT devices can become the entry points into critical infrastructures and can be exploited to leak sensitive information. Traditional host-centric security solutions in today's IT ecosystems (e.g., antivirus, software patches) are fundamentally at odds with the realities of IoT (e.g., poor vendor security practices and constrained hardware). We argue that the network will have to play a critical role in securing IoT deployments. However, the scale, diversity, cyberphysical coupling, and cross-device use cases inherent to IoT require us to rethink network security along three key dimensions: (1) abstractions for security policies; (2) mechanisms to learn attack and normal profiles; and (3) dynamic and context-aware enforcement capabilities. Our goal in this paper is to highlight these challenges and sketch a roadmap to avoid this impending security disaster. |
State-of-the-art Web Applications using Microservices and Linked Data | This paper describes the current state of mu.semte.ch, a platform for building state-of-the-art web applications fuelled by Linked Data aware microservices. The platform assumes a mashup-like construction of single page web applications which consume various services. In order to reuse tooling built in the community, Linked Data is not pushed to the frontend. |
Temporal Analytics on Big Data for Web Advertising | "Big Data" in map-reduce (M-R) clusters is often fundamentally temporal in nature, as are many analytics tasks over such data. For instance, display advertising uses Behavioral Targeting (BT) to select ads for users based on prior searches, page views, etc. Previous work on BT has focused on techniques that scale well for offline data using M-R. However, this approach has limitations for BT-style applications that deal with temporal data: (1) many queries are temporal and not easily expressible in M-R, and moreover, the set-oriented nature of M-R front-ends such as SCOPE is not suitable for temporal processing, (2) as commercial systems mature, they may need to also directly analyze and react to real-time data feeds since a high turnaround time can result in missed opportunities, but it is difficult for current solutions to naturally also operate over real-time streams. Our contributions are twofold. First, we propose a novel framework called TiMR (pronounced timer), that combines a time-oriented data processing system with a M-R framework. Users write and submit analysis algorithms as temporal queries - these queries are succinct, scale-out-agnostic, and easy to write. They scale well on large-scale offline data using TiMR, and can work unmodified over real-time streams. We also propose new cost-based query fragmentation and temporal partitioning schemes for improving efficiency with TiMR. Second, we show the feasibility of this approach for BT, with new temporal algorithms that exploit new targeting opportunities. Experiments using real data from a commercial ad platform show that TiMR is very efficient and incurs orders-of-magnitude lower development effort. Our BT solution is easy and succinct, and performs up to several times better than current schemes in terms of memory, learning time, and click-through-rate/coverage. |
Smart Tourism Destinations Enhancing Tourism Experience Through Personalisation of Services | Bringing smartness into tourism destinations requires dynamically interconnecting stakeholders through a technological platform on which information relating to tourism activities could be exchanged instantly. Instant information exchange has also created extremely large data sets known as Big Data that may be analysed computationally to reveal patterns and trends. Smart Tourism Destinations should make an optimal use of Big Data by offering right services that suit users’ preference at the right time. In relation thereto, this paper aims at contributing to the understanding on how Smart Tourism Destinations could potentially enhance tourism experience through offering products/services that are more personalised to meet each of visitor’s unique needs and preferences. Understanding the needs, wishes and desires of travellers becomes increasingly critical for the competitiveness of destinations. Therefore, the findings in the present research are insightful for number of tourism destinations. |
Distributed Machine Learning via Sufficient Factor Broadcasting | Matrix-parametrized models, including multiclass logistic regression and sparse coding, are used in machine learning (ML) applications ranging from computer vision to computational biology. When these models are applied to large-scale ML problems, starting at millions of samples and tens of thousands of classes, their parameter matrix can grow at an unexpected rate, resulting in high parameter synchronization costs that greatly slow down distributed learning. To address this issue, we propose a Sufficient Factor Broadcasting (SFB) computation model for efficient distributed learning of a large family of matrix-parametrized models, which share the following property: the parameter update computed on each data sample is a rank-1 matrix, i.e., the outer product of two "sufficient factors" (SFs). By broadcasting the SFs among worker machines and reconstructing the update matrices locally at each worker, SFB improves communication efficiency — communication costs are linear in the parameter matrix's dimensions, rather than quadratic — without affecting computational correctness. We present a theoretical convergence analysis of SFB, and empirically corroborate its efficiency on four different matrix-parametrized ML models. |
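The communication saving at the heart of SFB can be illustrated with a small sketch (a hypothetical example with made-up sizes, not the paper's implementation): for one sample, the update to a J × D parameter matrix is the outer product of two sufficient factors, so broadcasting the two vectors (J + D values) and rebuilding the rank-1 matrix locally is equivalent to shipping the full J × D update.

```python
import numpy as np

# Illustrative sizes: J classes, D features (names are assumptions)
J, D = 4, 6
rng = np.random.default_rng(0)
x = rng.normal(size=D)   # one input sample
g = rng.normal(size=J)   # per-class gradient signal for that sample

# Full update a parameter server would ship: a J x D matrix
full_update = np.outer(g, x)

# SFB instead broadcasts only the two sufficient factors (J + D floats)...
u, v = g, x
# ...and each worker reconstructs the rank-1 update matrix locally
reconstructed = np.outer(u, v)

assert np.allclose(reconstructed, full_update)
print(f"communicated {J + D} floats instead of {J * D}")
```

The saving grows with scale: for the tens of thousands of classes the abstract mentions, J + D is orders of magnitude smaller than J × D.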
Emerging Applications of Interferometric Synthetic Aperture Radar (InSAR) in Geomorphology and Hydrology | Interferometric synthetic aperture radar (InSAR) is a powerful geodetic tool used to construct digital elevation models of the earth’s topography and to image centimeter-scale displacements associated with crustal deformation and the flow of ice sheets. The past decade has seen significant improvements in our understanding of earthquakes, volcanoes, and glaciers as a direct result of this technology. Geomorphology and hydrology can also benefit from InSAR. A small but growing body of work shows that it is possible to interferometrically measure centimeter-scale motions associated with unstable slopes, land subsidence, fluctuating soil moisture and water levels, and deforming river-ice cover. Maps of interferometric correlation provide information about the structure of snow and reveal areas disturbed by erosion, sedimentation, flooding, snowfall, and aufeis growth. At present, such techniques are underdeveloped and largely overlooked by the geographic and radar communities. This article reviews these emerging geomorphic and hydrologic applications of InSAR and presents a first demonstration of motion detection for river ice. |
Beneficial metabolic effects of the Malaysian herb Labisia pumila var. alata in a rat model of polycystic ovary syndrome. | AIM OF THE STUDY
New options are needed to prevent and treat metabolic disorders associated with polycystic ovary syndrome (PCOS). Labisia pumila var. alata (LPva), a Malaysian herb thought to have phytoestrogenic effects, has shown promise in reducing body weight gain in ovariectomized rats. In this study, we investigated the effect of LPva on body composition and metabolic features in female rats treated continuously with dihydrotestosterone, starting before puberty, to induce PCOS.
MATERIAL AND METHODS
At 9 weeks of age, the PCOS rats were randomly subdivided into two groups: PCOS LPva and PCOS control. PCOS LPva rats received a daily oral dose of LPva (50 mg/kg body weight), dissolved in 1 ml of deionised water, for 4-5 weeks. PCOS controls received 1 ml of deionised water on the same schedule.
RESULTS
LPva increased uterine weight (27%) and insulin sensitivity (36%) measured by euglycemic hyperinsulinemic clamp. Plasma resistin levels were increased and lipid profile was improved in LPva rats. In adipose tissue, LPva decreased leptin mRNA expression but did not affect expression of resistin and adiponectin. No effects on body composition, adipocyte size, or plasma leptin levels were observed.
CONCLUSION
LPva increases uterine weight, indicating estrogenic effects, and improves insulin sensitivity and lipid profile in PCOS rats without affecting body composition. |
Study the grain boundary triple junction segregation of phosphorus in a nickel-base alloy using energy dispersive X-ray spectroscopy on a transmission electron microscope | Abstract Segregation of phosphorus at grain boundary triple junctions in a nickel-base alloy was measured using energy dispersive X-ray spectroscopy on a transmission electron microscope. It was found that the triple junction segregation of phosphorus varies with grain boundaries, in that the concentration of phosphorus at a triple junction can be higher or lower than that at the adjacent grain boundaries. A concentration gradient was detected along some grain boundaries within a distance of 5–100 nm from the associated triple junctions. The results are discussed with regard to various structural and atomistic diffusion characteristics. |
Visual and tactile information about object-curvature control fingertip forces and grasp kinematics in human dexterous manipulation. | Most objects that we manipulate have curved surfaces. We have analyzed how subjects during a prototypical manipulatory task use visual and tactile sensory information for adapting fingertip actions to changes in object curvature. Subjects grasped an elongated object at one end using a precision grip and lifted it while instructed to keep it level. The principal load of the grasp was tangential torque due to the location of the center of mass of the object in relation to the horizontal grip axis joining the centers of the opposing grasp surfaces. The curvature strongly influenced the grip forces required to prevent rotational slips. Likewise the curvature influenced the rotational yield of the grasp that developed under the tangential torque load due to the viscoelastic properties of the fingertip pulps. Subjects scaled the grip forces parametrically with object curvature for grasp stability. Moreover in a curvature-dependent manner, subjects twisted the grasp around the grip axis by a radial flexion of the wrist to keep the desired object orientation despite the rotational yield. To adapt these fingertip actions to object curvature, subjects could use both vision and tactile sensibility integrated with predictive control. During combined blindfolding and digital anesthesia, however, the motor output failed to predict the consequences of the prevailing curvature. Subjects used vision to identify the curvature for efficient feedforward retrieval of grip force requirements before executing the motor commands. Digital anesthesia caused little impairment of grip force control when subjects had vision available, but the adaptation of the twist became delayed. 
Visual cues about the form of the grasp surface obtained before contact were used to scale the grip force, whereas the scaling of the twist depended on visual cues related to object movement. Thus subjects apparently relied on different visuomotor mechanisms for adaptation of grip force and grasp kinematics. In contrast, blindfolded subjects used tactile cues about the prevailing curvature obtained after contact with the object for feedforward adaptation of both grip force and twist. We conclude that humans use both vision and tactile sensibility for feedforward parametric adaptation of grip forces and grasp kinematics to object curvature. Normal control of the twist action, however, requires digital afferent input, and different visuomotor mechanisms support the control of the grasp twist and the grip force. This differential use of vision may have a bearing on the two-stream model of human visual processing. |
Application of Machine Learning in Wireless Networks: Key Techniques and Open Issues | As a key technique for enabling artificial intelligence, machine learning (ML) has been shown to be capable of solving complex problems without explicit programming. Motivated by its successful applications to many practical tasks like image recognition and recommendation systems, both industry and the research community have advocated the applications of ML in wireless communication. This paper comprehensively surveys the recent advances in the applications of ML in wireless communication, which are classified as: resource management in the MAC layer, networking and mobility management in the network layer, and localization in the application layer. The applications in resource management further include power control, spectrum management, backhaul management, cache management, beamformer design, and computation resource management, while ML-based networking focuses on the applications in base station (BS) clustering, BS switching control, user association, and routing. Each aspect is further categorized according to the adopted ML techniques. Additionally, given the extensiveness of the research area, challenges and unresolved issues are presented to facilitate future studies, where the topics of ML-based network slicing, infrastructure updates to support ML-based paradigms, open data sets and platforms for researchers, theoretical guidance for ML implementation, and so on are discussed. |
Social power and information technology implementation: a contentious framing lens | Research on the organizational implementation of information technology (IT) and social power has favoured explanations based on issues of resource power and process power at the expense of matters of meaning power. As a result, although the existence and importance of meaning power is acknowledged, its distinctive practices and enacted outcomes remain relatively under-theorized and under-explored by IT researchers. This paper focused on unpacking the practices and outcomes associated with the exercise of meaning power within the IT implementation process. Our aim was to analyze the practices employed to construct meaning and enact a collective ‘definition of the situation’. We focused on framing and utilizing the signature matrix technique to represent and analyze the exercise of meaning power in practice. The paper developed and illustrated this conceptual framework using a case study of a conflictual IT implementation in a challenging public sector environment. We concluded by pointing out the situated nature of meaning power practices and the enacted outcomes. Our research extends the literature on IT and social power by offering an analytical framework distinctly suited to the analysis and deeper understanding of the meaning power properties. |
Transparent Fingerprint Sensor System for Large Flat Panel Display | In this paper, we introduce a transparent fingerprint sensing system using a thin film transistor (TFT) sensor panel, based on a self-capacitive sensing scheme. An amorphous indium gallium zinc oxide (a-IGZO) TFT sensor array and associated custom Read-Out IC (ROIC) are implemented for the system. The sensor panel has a 200 × 200 pixel array and each pixel size is as small as 50 μm × 50 μm. The ROIC uses only eight analog front-end (AFE) amplifier stages along with a successive approximation analog-to-digital converter (SAR ADC). To get the fingerprint image data from the sensor array, the ROIC senses a capacitance, which is formed by a cover glass material between a human finger and an electrode of each pixel of the sensor array. Three methods are reviewed for estimating the self-capacitance. The measurement result demonstrates that the transparent fingerprint sensor system has the ability to differentiate a human finger's ridges and valleys through the fingerprint sensor array. |
Privacy-Preserving Image Denoising From External Cloud Databases | Along with the rapid advancement of digital image processing technology, image denoising remains a fundamental task, which aims to recover the original image from its noisy observation. With the explosive growth of images on the Internet, one recent trend is to seek high quality similar patches at cloud image databases and harness rich redundancy therein for promising denoising performance. Despite the well-understood benefits, such a cloud-based denoising paradigm would undesirably raise security and privacy issues, especially for privacy-sensitive image data sets. In this paper, we initiate the first endeavor toward privacy-preserving image denoising from external cloud databases. Our design enables the cloud hosting encrypted databases to provide secure query-based image denoising services. Considering that image denoising intrinsically demands high quality similar image patches, our design builds upon recent advancements on secure similarity search, Yao’s garbled circuits, and image denoising operations, where each is used at a different phase of the design for the best performance. We formally analyze the security strengths. Extensive experiments over real-world data sets demonstrate that our design achieves the denoising quality close to the optimal performance in plaintext. |
How youth get engaged: grounded-theory research on motivational development in organized youth programs. | For youth to benefit from many of the developmental opportunities provided by organized programs, they need to not only attend but become psychologically engaged in program activities. This research was aimed at formulating empirically based grounded theory on the processes through which this engagement develops. Longitudinal interviews were conducted with 100 ethnically diverse youth (ages 14–21) in 10 urban and rural arts and leadership programs. Qualitative analysis focused on narrative accounts from the 44 youth who reported experiencing a positive turning point in their motivation or engagement. For 38 of these youth, this change process involved forming a personal connection. Similar to processes suggested by self-determination theory (Ryan & Deci, 2000), forming a personal connection involved youth's progressive integration of personal goals with the goals of program activities. Youth reported developing a connection to 3 personal goals that linked the self with the activity: learning for the future, developing competence, and pursuing a purpose. The role of purpose for many youth suggests that motivational change can be driven by goals that transcend self-needs. These findings suggest that youth need not enter programs intrinsically engaged--motivation can be fostered--and that programs should be creative in helping youth explore ways to form authentic connections to program activities. |
A Review on Massive E-Learning (MOOC) Design, Delivery and Assessment | MOOCs or Massive Online Open Courses based on Open Educational Resources (OER) might be one of the most versatile ways to offer access to quality education, especially for those residing in far or disadvantaged areas. This article analyzes the state of the art on MOOCs, exploring open research questions and setting interesting topics and goals for further research. Finally, it proposes a framework that includes the use of software agents with the aim to improve and personalize management, delivery, efficiency and evaluation of massive online courses on an individual level basis. |
Agree or Cancel? Research and Terms of Service Compliance | We address the ACM Code of Ethics and discuss the stipulation that researchers follow terms of service. While the reasons for following terms of service are clear, we argue that there are hidden costs. Using the example of research into algorithm awareness and algorithm transparency, we argue that for some research problems the benefits to society outweigh the harm of violating terms of service. We draw attention to current strategies that researchers use to adapt to the ACM prohibition on research violating terms of service and some results of those approaches. We conclude with recommendations for determining researchers’ current understanding of the ACM Code of Ethics and terms of service restrictions and updating the existing guidelines. |
Bitonic Sorting Algorithm: A Review | Batcher's bitonic sorting algorithm is a parallel sorting algorithm used for sorting numbers on modern parallel machines. There are various parallel sorting algorithms, such as radix sort and bitonic sort. Bitonic sort is one of the most efficient parallel sorting algorithms because of its load-balancing property, and it is widely used in various scientific and engineering applications. Various researchers have worked on the bitonic sorting algorithm in order to improve the performance of the original Batcher's algorithm. In this paper, we review the contributions made by these researchers. |
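Since the review stays at a survey level, a minimal reference implementation of Batcher's scheme may help fix ideas (a sequential sketch for power-of-two input lengths; a real parallel machine would execute each layer of compare-exchanges concurrently, which is where the load-balancing property comes from):

```python
def bitonic_sort(seq, ascending=True):
    # Batcher's bitonic sort; assumes len(seq) is a power of two.
    n = len(seq)
    if n <= 1:
        return list(seq)
    half = n // 2
    # Build a bitonic sequence: first half ascending, second half descending
    first = bitonic_sort(seq[:half], True)
    second = bitonic_sort(seq[half:], False)
    return _bitonic_merge(first + second, ascending)

def _bitonic_merge(seq, ascending):
    n = len(seq)
    if n <= 1:
        return list(seq)
    half = n // 2
    seq = list(seq)
    # Compare-exchange across the two halves; these n/2 comparisons are
    # mutually independent, which makes the network parallel-friendly.
    for i in range(half):
        if (seq[i] > seq[i + half]) == ascending:
            seq[i], seq[i + half] = seq[i + half], seq[i]
    return (_bitonic_merge(seq[:half], ascending) +
            _bitonic_merge(seq[half:], ascending))

print(bitonic_sort([7, 3, 1, 8, 6, 2, 5, 4]))  # → [1, 2, 3, 4, 5, 6, 7, 8]
```

The same compare-exchange pattern is what the surveyed variants optimize, e.g. by remapping it onto different interconnects or memory hierarchies.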
Improving test case generation for web applications using automated interface discovery | With the growing complexity of web applications, identifying web interfaces that can be used for testing such applications has become increasingly challenging. Many techniques that work effectively when applied to simple web applications are insufficient when used on modern, dynamic web applications, and may ultimately result in inadequate testing of the applications' functionality. To address this issue, we present a technique for automatically discovering web application interfaces based on a novel static analysis algorithm. We also report the results of an empirical evaluation in which we compare our technique against a traditional approach. The results of the comparison show that our technique can (1) discover a higher number of interfaces and (2) help generate test inputs that achieve higher coverage. |
Does Transparency Stifle or Facilitate Innovation? | Corporate transparency reduces information asymmetries between firms and capital markets, but increases the costs associated with information leakage to competitors. We evaluate the net effects of a country’s overall information environment by focusing on innovation, an economically important activity characterized by both high information asymmetries and potentially severe proprietary costs. Focusing on cross-country differences in the availability of firm-specific information to corporate outsiders, we find significantly higher rates of R&D investment in richer information environments. The information environment disproportionally affects R&D activity in sectors more reliant on external finance, indicating that transparency facilitates innovation by reducing information costs. We draw similar inferences by studying quasi-experimental shocks to the information environment arising from the first prosecution of insider trading and introduction of transparency-specific securities regulations in the European Union. Our work directly connects an economy’s transparency environment with the innovative activities that drive growth, and highlights unexplored economic consequences of transparency-related security market reforms. |
Personality and intelligence as predictors of academic achievement: A cross-sectional study from elementary to secondary school | General intelligence and personality traits from the Five-Factor model were studied as predictors of academic achievement in a large sample of Estonian schoolchildren from elementary to secondary school. A total of 3618 students (1746 boys and 1872 girls) from all over Estonia attending Grades 2, 3, 4, 6, 8, 10, and 12 participated in this study. Intelligence, as measured by the Raven’s Standard Progressive Matrices, was found to be the best predictor of students’ grade point average (GPA) in all grades. Among personality traits (measured by self-reports on the Estonian Big Five Questionnaire for Children in Grades 2 to 4 and by the NEO Five Factor Inventory in Grades 6 to 12), Openness, Agreeableness, and Conscientiousness correlated positively and Neuroticism correlated negatively with GPA in almost every grade. When all measured variables were entered together into a regression model, intelligence was still the strongest predictor of GPA, being followed by Agreeableness in Grades 2 to 4 and Conscientiousness in Grades 6 to 12. Interactions between predictor variables and age accounted for only a small percentage of variance in GPA, suggesting that academic achievement relies basically on the same mechanisms through the school years. |
Peak-to-Average Power Ratio Reduction of OFDM Signals With Nonlinear Companding Scheme | The companding transform is a simple and efficient method for reducing the Peak-to-Average Power Ratio (PAPR) of Orthogonal Frequency Division Multiplexing (OFDM) systems. In this paper, a novel nonlinear companding scheme is proposed to reduce the PAPR and improve the Bit Error Rate (BER) of OFDM systems. The proposed scheme mainly focuses on compressing the large signals, while maintaining the average power constant by properly choosing transform parameters. Moreover, analysis shows that the proposed scheme can offer good BER performance even without de-companding at the receiver. Finally, simulation results show that the proposed scheme outperforms other companding schemes in terms of spectrum side-lobes, PAPR reduction, and BER performance. |
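The abstract does not give the transform's closed form, so the classic mu-law compander below is a minimal stand-in illustrating the mechanism such schemes build on: compressing large-amplitude samples lowers the peak relative to the average power of an OFDM symbol. The 64-subcarrier QPSK setup and mu = 4 are assumptions chosen only for the illustration.

```python
import numpy as np

def papr_db(x):
    """Peak-to-Average Power Ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def mu_law_compand(x, mu=4.0):
    """Classic mu-law companding: compress large amplitudes while
    preserving the phase of each sample."""
    peak = np.abs(x).max()
    mag = peak * np.log1p(mu * np.abs(x) / peak) / np.log1p(mu)
    return mag * np.exp(1j * np.angle(x))

rng = np.random.default_rng(0)
# One 64-subcarrier OFDM symbol: random QPSK data, IFFT to time domain.
data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64)
x = np.fft.ifft(data)

# Companding raises small amplitudes toward the (unchanged) peak, so the
# average power grows and the PAPR strictly decreases.
assert papr_db(mu_law_compand(x)) < papr_db(x)
```

At the receiver, the inverse mapping (de-companding) would be applied before demodulation; the paper's observation is that its scheme tolerates skipping that step.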
Education externalities in rural Ethiopia: evidence from average and stochastic frontier production functions | Education will have externality effects in agriculture if, in the course of conducting their own private economic activities, educated farmers raise the productivity of uneducated farmers with whom they come into contact. This paper seeks to determine the potential size and source of such benefits for rural areas of Ethiopia. Average and stochastic frontier production function methodologies are employed to measure productivity and efficiency of farmers. In each case, internal and external returns to schooling are compared. We find that there are substantial and significant externality benefits of education in terms of higher average farm output and a shifting outwards of the production frontier. External benefits of schooling may be several times as high as internal benefits in this regard. However, we are unable to find any evidence of externality benefits to schooling in terms of improvements in technological efficiency in the use of a given technology. This suggests that the source of externalities to schooling is in the adoption and spread of innovations, which shift out the production frontier. |
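The paper's core comparison of internal versus external returns can be sketched in miniature: simulate farm output whose log depends on a farmer's own schooling and on village-average schooling (the externality channel), then recover both returns from an average production function by least squares. The data-generating process and the coefficient values (3% internal, 9% external) are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
school = rng.integers(0, 9, n).astype(float)   # own years of schooling
village = rng.integers(0, 50, n)               # village membership
# Village-average schooling: the externality regressor.
avg_school = (np.bincount(village, school) / np.bincount(village))[village]

# Log output with an invented 3% internal and 9% external return.
log_y = 1.0 + 0.03 * school + 0.09 * avg_school + rng.normal(0, 0.2, n)

# Average production function estimated by OLS.
X = np.column_stack([np.ones(n), school, avg_school])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
internal, external = beta[1], beta[2]
print(f"internal return ~ {internal:.3f}, external return ~ {external:.3f}")
```

The stochastic frontier variant in the paper additionally separates a one-sided inefficiency term from the noise; that decomposition is omitted here.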
A Wirelessly Powered Smart Contact Lens with Reconfigurable Wide Range and Tunable Sensitivity Sensor Readout Circuitry | This study presents a wireless smart contact lens system composed of reconfigurable capacitive sensor interface circuitry and a wirelessly powered, radio-frequency identification (RFID)-addressable system for sensor control and data communication. In order to improve compliance and reduce user discomfort, a capacitive sensor was embedded in a soft contact lens of 200 μm thickness using commercially available bio-compatible lens material and a standard manufacturing process. The results indicated that the reconfigurable sensor interface achieved sensitivity and baseline tuning up to 120 pF while consuming only 110 μW of power. The range and sensitivity tuning of the readout circuitry ensured reliable operation in the face of sensor fabrication variations and allowed independent calibration of the sensor baseline for individuals. The on-chip voltage scaling further extended the detection range and avoided the need for large on-chip elements. The on-lens system enabled the detection of capacitive variation caused by pressure changes in the range of 2.25 to 30 mmHg and of hydration level variation, from a distance of 1 cm, using incident power from an RFID reader at 26.5 dBm. |
Individual differences in spatial text processing: High spatial ability can compensate for spatial working memory interference | The present study investigates the relation between spatial ability and visuo-spatial and verbal working memory in spatial text processing. In two experiments, participants listened to a spatial text (Experiments 1 and 2) and a non-spatial text (Experiment 1), at the same time performing a spatial or a verbal concurrent task, or no secondary task. To understand how individuals who differ in spatial ability process spatial text during dual task performance, spatial individual differences were analyzed. The tasks administered were the Vandenberg and Kuse [Vandenberg, S. G., & Kuse, A. R. (1978). Mental rotation, a group test of three-dimensional spatial visualization. Perceptual and Motor Skills, 47, 599-604.] mental rotation test (MRT) and a reading comprehension task (RCT). Individuals with high (HMR) and low (LMR) mental rotation ability differed in MRT scores but had similar RCT performance. Results showed that the HMR group, in contrast with their LMR counterparts, preserved good spatial text recall even when a spatial concurrent task was performed; however, Experiment 2 revealed a modification of spatial concurrent task performance in the LMR as well as the HMR group. Overall, results suggest that HMR individuals have more spatial resources than LMR individuals, allowing them to compensate for spatial working memory interference, but only to a limited extent, given that the processing of spatial information is still mediated by VSWM. The present research investigates how spatial ability modulates the processing of environmental information acquired through spatial description. This represents a very common real-life experience in which the environment is indirectly acquired. The spatial mental representation emerging from processing descriptions is a mental model (Johnson-Laird, 1983).
Several studies have demonstrated that spatial mental models reflect the spatial properties of descriptions, maintaining relations between objects. One way to approach the question of the characteristics of spatial mental representation is to analyze individual differences. A number of studies have noted the importance of visuo-spatial abilities in spatial mental representation, also when derived from spatial description (see Section 1.1). More recently, the role of cognitive resources such as working memory (WM) in the processing of spatial descriptions has been considered (see Section 1.2). The role of WM in spatial text processing is analyzed according to the Baddeley model (Baddeley, 1986; Baddeley & Hitch, 1974). This is a theoretical framework in which visuo-spatial abilities can be engaged through a specific role. For this reason, the approach in the present research is … |
Digital color imaging | This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided. |
Radiation attenuation by lead and nonlead materials used in radiation shielding garments. | The attenuating properties of several types of lead (Pb)-based and non-Pb radiation shielding materials were studied and a correlation was made of radiation attenuation, materials properties, calculated spectra and ambient dose equivalent. Utilizing the well-characterized x-ray and gamma ray beams at the National Research Council of Canada, air kerma measurements were used to compare a variety of commercial and pre-commercial radiation shielding materials over mean energy ranges from 39 to 205 keV. The EGSnrc Monte Carlo user code cavity.cpp was extended to provide computed spectra for a variety of elements that have been used as a replacement for Pb in radiation shielding garments. Computed air kerma values were compared with experimental values and with the SRS-30 catalogue of diagnostic spectra available through the Institute of Physics and Engineering in Medicine Report 78. In addition to garment materials, measurements also included pure Pb sheets, allowing direct comparisons to the common industry standards of 0.25 and 0.5 mm "lead equivalent." The parameter "lead equivalent" is misleading, since photon attenuation properties for all materials (including Pb) vary significantly over the energy spectrum, with the largest variations occurring in the diagnostic imaging range. Furthermore, air kerma measurements are typically made to determine attenuation properties without reference to the measures of biological damage such as ambient dose equivalent, which also vary significantly with air kerma over the diagnostic imaging energy range. A single material or combination cannot provide optimum shielding for all energy ranges. However, appropriate choice of materials for a particular energy range can offer significantly improved shielding per unit mass over traditional Pb-based materials. |
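Why a single "lead equivalent" number is misleading follows directly from narrow-beam Beer-Lambert attenuation, I/I0 = exp(-(mu/rho) * rho * t): a material matched to Pb at one photon energy generally does not match it at another, because mass attenuation coefficients fall with energy at element-specific rates. The coefficients below are illustrative placeholders chosen only to show the effect, not NIST values.

```python
import numpy as np

def transmission(mu_rho, rho_g_cm3, t_mm):
    """Narrow-beam transmission I/I0 at a single photon energy.
    mu_rho: mass attenuation coefficient (cm^2/g); rho: density (g/cm^3);
    t_mm: thickness in mm (converted to cm inside)."""
    return np.exp(-mu_rho * rho_g_cm3 * t_mm / 10.0)

# Hypothetical coefficients at a "low" and a "high" energy. The non-Pb
# material is tuned to match 0.5 mm Pb at the low energy only.
pb  = {"rho": 11.35, "mu": {"low": 5.0,  "high": 1.0}}
alt = {"rho": 4.0,   "mu": {"low": 14.2, "high": 1.8}}

for e in ("low", "high"):
    tp = transmission(pb["mu"][e],  pb["rho"],  0.5)
    ta = transmission(alt["mu"][e], alt["rho"], 0.5)
    print(f"{e}: Pb I/I0 = {tp:.3f}, alternative I/I0 = {ta:.3f}")
```

At the matched energy the two transmissions agree, while at the other energy the alternative material transmits noticeably more, which is exactly the single-number pitfall the abstract describes.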
Geometric distortion signatures for printer identification | We present a forensic technique for analyzing a printed image in order to trace the originating printer. Our method, which is applicable for commonly used electrophotographic (EP) printers, operates by exploiting the geometric distortion that these devices inevitably introduce in the printing process. In the proposed method, first a geometric distortion signature is estimated for an EP printer. This estimate is obtained using only the images printed on the printer and without access to the internal printer controls. Once a database of printer signatures is available, the printer utilized to print a test image is identified by computing the geometric distortion signature from the test image and correlating the computed signature against the printer signatures in the database. Experiments conducted over a corpus of EP printers demonstrate that the geometric distortion signatures of test documents exhibit high correlation with the corresponding printer signatures and a low correlation with other printer signatures. The method is therefore quite promising for forensic printer identification applications. We highlight several of the capabilities and challenges for the method. |
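The identification step reduces to nearest-neighbor matching under normalized cross-correlation. The sketch below assumes each enrolled printer's distortion signature has already been extracted as a 1-D profile; the signatures here are random placeholders, not real distortion estimates.

```python
import numpy as np

rng = np.random.default_rng(7)
# Placeholder 1-D distortion signatures for three enrolled printers.
db = rng.standard_normal((3, 256))
# A test document printed on printer 1: its estimated signature is the
# enrolled one corrupted by estimation noise.
test = db[1] + 0.3 * rng.standard_normal(256)

def ncc(a, b):
    """Normalized cross-correlation between two signatures."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [ncc(test, sig) for sig in db]
best = int(np.argmax(scores))   # index of the identified printer
```

As in the experiments described above, the correct printer yields a correlation near 1 while unrelated signatures correlate near 0, so the argmax is a reliable decision rule when signatures are sufficiently distinctive.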
Comparison of JSON and XML Data Interchange Formats: A Case Study | This paper compares two data interchange formats currently used by industry applications: XML and JSON. The choice of an adequate data interchange format can have significant consequences on data transmission rates and performance. We describe the language specifications and their respective settings of use. A case study is then conducted to compare the resource utilization and the relative performance of applications that use the interchange formats. We find that JSON is significantly faster than XML, and we further record other resource-related metrics in our results. |
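A micro-benchmark in the spirit of this case study can be sketched with the Python standard library alone. The record schema and repetition count are invented, and actual timings depend on the parser and payload, so no specific speedup figure is claimed here.

```python
import json
import time
import xml.etree.ElementTree as ET

# Hypothetical payload: 1000 small records serialized in both formats.
records = [{"id": i, "name": f"user{i}", "active": i % 2 == 0} for i in range(1000)]

json_doc = json.dumps(records)
xml_doc = "<users>" + "".join(
    f'<user id="{r["id"]}" name="{r["name"]}" active="{str(r["active"]).lower()}"/>'
    for r in records
) + "</users>"

def mean_parse_time(parse, doc, n=50):
    """Average wall-clock time of n parses of the same document."""
    t0 = time.perf_counter()
    for _ in range(n):
        parse(doc)
    return (time.perf_counter() - t0) / n

t_json = mean_parse_time(json.loads, json_doc)
t_xml = mean_parse_time(ET.fromstring, xml_doc)
print(f"JSON: {len(json_doc)} bytes, {t_json * 1e3:.2f} ms/parse")
print(f"XML : {len(xml_doc)} bytes, {t_xml * 1e3:.2f} ms/parse")
```

Comparing both payload size and parse time per document mirrors the resource-utilization metrics the case study records.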
A dictionary of communication and media studies | This dictionary has become a unique resource for students and teachers in the field of communication and media studies. It brings together a large amount of essential information regarding current media policy and practice, and also provides clear and lively explanations of the many concepts and models which make up communication theory. In its third edition, the dictionary has been brought completely up-to-date, with over 100 new definitions reflecting the latest theories, research findings and developments in media practice. The new entries include information on recent, far-reaching developments in broadcasting and consider the increasing influence of transnational corporations over cultural discourse and activity. There is also expanded coverage of important issues such as feminism, racism and sexism. |
Exploring Vector Spaces for Semantic Relations | Word embeddings are used with success for a variety of tasks involving lexical semantic similarities between individual words. Using unsupervised methods and just cosine similarity, encouraging results were obtained for analogical similarities. In this paper, we explore the potential of pre-trained word embeddings to identify generic types of semantic relations in an unsupervised experiment. We propose a new relational similarity measure based on the combination of word2vec’s CBOW input and output vectors which outperforms alternative vector representations, when used for unsupervised clustering on SemEval 2010 Relation Classification data. |
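The paper's measure combines word2vec's CBOW input and output vectors; the sketch below shows one simple way such a combination can represent a word pair for relational-similarity comparison. The vocabulary is a toy list and the two matrices are random stand-ins for trained embeddings, so the similarity values are not meaningful, only the mechanics.

```python
import numpy as np

rng = np.random.default_rng(42)
vocab = ["king", "queen", "man", "woman", "paris", "france"]
idx = {w: i for i, w in enumerate(vocab)}
# Random stand-ins for word2vec's CBOW input and output embedding matrices.
W_in = rng.standard_normal((len(vocab), 50))
W_out = rng.standard_normal((len(vocab), 50))

def pair_vector(a, b):
    """Represent the relation holding between (a, b) by the pair offset
    taken in both embedding spaces, concatenated."""
    return np.concatenate([W_in[idx[b]] - W_in[idx[a]],
                           W_out[idx[b]] - W_out[idx[a]]])

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# With trained vectors, analogous pairs should score high; clustering
# such pair vectors is the unsupervised setup the paper evaluates.
sim = cosine(pair_vector("man", "woman"), pair_vector("king", "queen"))
```

With real embeddings, these pair vectors would be fed to a clustering algorithm over the SemEval 2010 relation pairs, as described above.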
Query-driven soft traceability links for models | Model repositories play a central role in the model driven development of complex software-intensive systems by offering means to persist and manipulate models obtained from heterogeneous languages and tools. Complex models can be assembled by interconnecting model fragments by hard links, i.e., regular references, where the target end points to external resources using storage-specific identifiers. This approach, in certain application scenarios, may prove to be a too rigid and error prone way of interlinking models. As a flexible alternative, we propose to combine derived features with advanced incremental model queries as means for soft interlinking of model elements residing in different model resources. These soft links can be calculated on-demand with graceful handling for temporarily unresolved references. In the background, the links are maintained efficiently and flexibly by using incremental model query evaluation. The approach is applicable to modeling environments or even property graphs for representing query results as first-class relations, which also allows the chaining of soft links that is useful for modular applications. The approach is evaluated using the Eclipse Modeling Framework (EMF) and EMF-IncQuery in two complex industrial case studies. The first case study is motivated by a knowledge management project from the financial domain, involving a complex interlinked structure of concept and business process models. The second case study is set in the avionics domain with strict traceability requirements enforced by certification standards (DO-178b). It consists of multiple domain models describing the allocation scenario of software functions to hardware components. |
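The contrast between hard and soft links can be illustrated with a toy on-demand resolver: instead of a storage-specific identifier that breaks when its target is absent, a soft link runs a lookup at access time and degrades gracefully while the target is temporarily unresolved. This is a language-agnostic sketch of the idea, not EMF-IncQuery's API; the registry and key names are invented.

```python
class SoftLink:
    """On-demand soft link: resolves its target by query at access time
    and returns None (instead of failing) while the target is missing."""

    def __init__(self, registry, key):
        self.registry = registry   # stand-in for an incrementally maintained query/index
        self.key = key

    def resolve(self):
        return self.registry.get(self.key)

models = {}                              # stand-in for a model repository index
link = SoftLink(models, "fn:autopilot")  # hypothetical function identifier
assert link.resolve() is None            # temporarily unresolved: no error

# Once the referenced model fragment is loaded, the same link resolves.
models["fn:autopilot"] = {"allocated_to": "cpu0"}
assert link.resolve()["allocated_to"] == "cpu0"
```

In the paper, the lookup is not a plain dictionary access but an incrementally evaluated model query, which is what keeps resolution cheap as the interlinked models change.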
From perception to pleasure: music and its neural substrates. | Music has existed in human societies since prehistory, perhaps because it allows expression and regulation of emotion and evokes pleasure. In this review, we present findings from cognitive neuroscience that bear on the question of how we get from perception of sound patterns to pleasurable responses. First, we identify some of the auditory cortical circuits that are responsible for encoding and storing tonal patterns and discuss evidence that cortical loops between auditory and frontal cortices are important for maintaining musical information in working memory and for the recognition of structural regularities in musical patterns, which then lead to expectancies. Second, we review evidence concerning the mesolimbic striatal system and its involvement in reward, motivation, and pleasure in other domains. Recent data indicate that this dopaminergic system mediates pleasure associated with music; specifically, reward value for music can be coded by activity levels in the nucleus accumbens, whose functional connectivity with auditory and frontal areas increases as a function of increasing musical reward. We propose that pleasure in music arises from interactions between cortical loops that enable predictions and expectancies to emerge from sound patterns and subcortical systems responsible for reward and valuation. |
Downsampling for sparse subspace clustering | Sparse subspace clustering (SSC) is a technique to partition unlabeled samples according to the subspaces in which they lie. With the rapid increase of data volumes, efficiently downsampling a big dataset while preserving the structure of its subspaces becomes an important topic for SSC. In order to reduce the computational cost while preserving clustering accuracy, a new approach of SSC with downsampling (SSCD) is proposed in this paper. In SSCD, the numbers of samples located in the respective subspaces are estimated using the ℓ1 norms of the sparse representations. A downsampling strategy is then designed that retains samples with probabilities inversely proportional to the numbers of samples in their respective subspaces. As a consequence, the samples in different subspaces are expected to be balanced after the downsampling operation. Theoretical analysis proves the correctness of the proposed strategy. Numerical simulations also verify the efficiency of SSCD. |
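The balancing effect of inverse-proportional sampling can be sketched directly. Here the subspace memberships are given as labels, standing in for the sizes the paper estimates from the sparse representations; the 900/100 split and the 200-sample budget are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical memberships: subspace 0 holds 900 samples, subspace 1 holds 100.
labels = np.array([0] * 900 + [1] * 100)
sizes = np.bincount(labels)

# Per-sample keep-probabilities in inverse ratio to subspace size, scaled
# so that about 200 samples are retained in expectation.
weights = 1.0 / sizes[labels]
probs = 200 * weights / weights.sum()
keep = rng.random(len(labels)) < probs

counts = np.bincount(labels[keep], minlength=2)  # roughly equal per subspace
```

Each subspace contributes about 100 retained samples regardless of its original size, which is the balance property the strategy is designed to deliver.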
Space-Vector PWM Control Synthesis for an H-Bridge Drive in Electric Vehicles | This paper deals with a synthesis of space-vector pulsewidth modulation (SVPWM) control methods applied for an H-bridge inverter feeding a three-phase permanent-magnet synchronous machine (PMSM) in electric-vehicle (EV) applications. First, a short survey of existing architectures of power converters, particularly those adapted to degraded operating modes, is presented. Standard SVPWM control methods are compared with three innovative methods using EV drive specifications in the normal operating mode. Then, a rigorous analysis of the margins left in the control strategy is presented for a semiconductor switch failure to fulfill degraded operating modes. Finally, both classic and innovative strategies are implemented in numerical simulation; their results are analyzed and discussed. |
CREATIVE CITIES: THE CULTURAL INDUSTRIES AND THE CREATIVE CLASS | The aim of this paper is to critically examine the notion that the creative class may or may not act as a causal mechanism of urban regeneration. The paper begins with a review of Florida’s argument focusing on the conceptual and theoretical underpinnings. The second section develops a critique of the relationship between the creative class and growth. This is followed by an attempt to clarify the relationship between the concepts of creativity, culture and the creative industries. Finally, the paper suggests that policy makers may achieve more successful regeneration outcomes if they attend to the cultural industries as an object that links production and consumption, manufacturing and service. Such a notion is more useful in interpreting and understanding the significant role of cultural production in contemporary cities, and what relation it has to growth. |
Molecular definitions of autophagy and related processes. | Over the past two decades, the molecular machinery that underlies autophagic responses has been characterized with ever increasing precision in multiple model organisms. Moreover, it has become clear that autophagy and autophagy-related processes have profound implications for human pathophysiology. However, considerable confusion persists about the use of appropriate terms to indicate specific types of autophagy and some components of the autophagy machinery, which may have detrimental effects on the expansion of the field. Driven by the overt recognition of such a potential obstacle, a panel of leading experts in the field attempts here to define several autophagy-related terms based on specific biochemical features. The ultimate objective of this collaborative exchange is to formulate recommendations that facilitate the dissemination of knowledge within and outside the field of autophagy research. |
Extreme gigantomastia in pregnancy: case report—my experience with two cases in last 5 years | We present an extreme case of gigantomastia during the second twin pregnancy of a 30-year-old woman. Her first pregnancy, 8 years earlier, was also a twin pregnancy and was delivered by caesarean section. From the beginning of her current pregnancy, the patient noted steady growth of both breasts, which reached enormous dimensions by the end of the pregnancy. No such breast changes had occurred during her first pregnancy. The patient also suffered from myasthenia gravis, which was in remission during this pregnancy without any therapy. In the 38th week of gestation, delivery by caesarean section was performed together with reduction of her breasts. The main reasons for performing these two interventions as one act were the risk that puerperal mastitis could develop in these enormous breasts, the small regression of the breasts during bromocriptine treatment, and the intention to avoid further operative trauma, considering the possibility of exacerbation of myasthenia gravis. I performed bilateral reduction mammaplasty with free areola-nipple grafts, removing tissue with a total weight of 20 kg (2 × 10 kg). The patient had an excellent postoperative recovery. |
Immobilization of antibodies on alginate-chitosan beads. | An anti-hapten IgG was covalently immobilized on glutaraldehyde-activated alginate-chitosan gel beads. The antibody immobilization efficiency was influenced by glutaraldehyde-bead reaction time, IgG concentration and pH. In addition, immobilization conditions such as glutaraldehyde and antibody concentrations influenced antibody hapten binding affinity. The immobilized IgG on the beads was stable and no reduction in the percent binding to hapten was noticed following 25 days of storage. It was concluded that antibodies could be successfully immobilized on alginate-chitosan gel beads. Such a system can be applied for the development of immunoaffinity purification and immunoassays. |
Exceptional symmetric domains | We give the presentation of exceptional bounded symmetric domains using the Albert algebra and exceptional Jordan triple systems.
The first chapter is devoted to Cayley-Graves algebras, the second to exceptional Jordan triple systems. In the third chapter, we give a geometric description of the two exceptional bounded symmetric domains, their boundaries and their compactification. |
How to get mature global virtual teams: a framework to improve team process management in distributed software teams | Managing global software development teams is not an easy task because of the additional problems and complexities that have to be taken into account. This paper defines VTManager, a methodology that provides a set of efficient practices for global virtual team management in software development projects. These practices integrate software development techniques in global environments with others such as explicit practices for global virtual team management, definition of skills and abilities needed to work in these teams, availability of collaborative work environments and shared knowledge management practices. The results obtained and the lessons learned from implementing VTManager in a pilot project to develop software tools for collaborative work in rural environments are also presented. This project was carried out by geographically distributed teams involving people from seven countries with a high level of virtualness. |
On Quality Control and Machine Learning in Crowdsourcing | The advent of crowdsourcing has created a variety of new opportunities for improving upon traditional methods of data collection and annotation. This in turn has created intriguing new opportunities for data-driven machine learning (ML). Convenient access to crowd workers for simple data collection has further generalized to leveraging more arbitrary crowd-based human computation (von Ahn 2005) to supplement automated ML. While new potential applications of crowdsourcing continue to emerge, a variety of practical and sometimes unexpected obstacles have already limited the degree to which its promised potential can be actually realized in practice. This paper considers two particular aspects of crowdsourcing and their interplay, data quality control (QC) and ML, reflecting on where we have been, where we are, and where we might go from here. |
Finding representative set from massive data | In the information age, data is pervasive, and in some applications data explosion is a significant phenomenon. The massive data volume poses challenges to both human users and computers. In this project, we propose a new model for identifying a representative set from a large database. A representative set is a special subset of the original dataset with three main characteristics: it is significantly smaller in size than the original dataset; it captures the most information from the original dataset compared to other subsets of the same size; and it has low redundancy among the representatives it contains. We use information-theoretic measures such as mutual information and relative entropy to measure the representativeness of the representative set. We first design a greedy algorithm and then present a heuristic algorithm that delivers much better performance. We run experiments on two real datasets and evaluate the effectiveness of our representative set in terms of coverage and accuracy. The experiments show that our representative set attains the expected characteristics and captures information efficiently. |
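The greedy step can be sketched as follows. Instead of the paper's mutual-information and relative-entropy measures, this sketch substitutes a simpler facility-location coverage objective; both reward covering the dataset while implicitly discouraging redundant picks, but the objective here is a stand-in, not the paper's.

```python
import numpy as np

def greedy_representatives(X, k):
    """Greedy facility-location selection: each step adds the point that
    most increases total coverage sum_i max_{j in S} sim(i, j)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d2)              # Gaussian similarity in (0, 1]
    covered = np.zeros(len(X))     # best similarity to any chosen rep so far
    chosen = []
    for _ in range(k):
        # Marginal coverage gain of each candidate j.
        gains = np.maximum(sim, covered[:, None]).sum(axis=0) - covered.sum()
        gains[chosen] = -np.inf    # a point already chosen adds nothing
        j = int(np.argmax(gains))
        chosen.append(j)
        covered = np.maximum(covered, sim[:, j])
    return chosen

# Two tight, well-separated clusters: with k = 2, the greedy picks should
# land in different clusters, since a same-cluster pick is redundant.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
reps = greedy_representatives(X, 2)
```

This coverage-then-diminishing-returns behavior is what makes greedy selection a natural first algorithm for the representative-set objective, and it matches the low-redundancy characteristic listed above.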
Dynamic analysis and control of a zone-control induction heating system | This paper presents a quick and accurate power control method for a zone-control induction heating (ZCIH) system. The ZCIH system consists of multiple working coils connected to multiple H-bridge inverters. The system controls the amplitude and phase angle of each coil current to make the temperature distribution on the workpiece uniform. This paper proposes a new control method for the coil currents based on a circuit model using real and imaginary (Re-Im) current/voltage components. The method detects and controls the Re-Im components of the coil current instead of the current amplitude and phase angle. As a result, the proposed method enables decoupling control for the system, making the control for each working coil independent from the others. Experiments on a 6-zone ZCIH laboratory setup are conducted to verify the validity of the proposed method. It is clarified that the proposed method has a stable operation both in transient and steady states. The proposed system and control method enable system complexity reduction and control stability improvements. |
A phase 1 trial and pharmacokinetic study of cediranib, an orally bioavailable pan-vascular endothelial growth factor receptor inhibitor, in children and adolescents with refractory solid tumors. | PURPOSE
To determine the toxicity profile, dose-limiting toxicities (DLTs), maximum-tolerated dose (MTD), pharmacokinetics, and pharmacodynamics of cediranib administered orally, once daily, continuously in children and adolescents with solid tumors.
PATIENTS AND METHODS
Children and adolescents with refractory solid tumors, excluding primary brain tumors, were eligible. DLT at the starting dose of 12 mg/m(2)/d resulted in de-escalation to 8 mg/m(2)/d and subsequent re-escalation to 12 and 17 mg/m(2)/d. Pharmacokinetic and pharmacodynamic studies were performed during cycle 1. Response was evaluated using WHO criteria.
RESULTS
Sixteen patients (median age, 15 years; range, 8 to 18 years) were evaluable for toxicity. DLTs (grade 3 nausea, vomiting, fatigue in one; hypertension and prolonged corrected QT interval in another) occurred in patients initially enrolled at 12 mg/m(2)/d. Subsequently, 8 mg/m(2)/d was well tolerated in three patients. An additional seven patients were enrolled at 12 mg/m(2)/d; one had DLT (grade 3 diarrhea). At 17 mg/m(2)/d, two of four patients had DLTs (grade 3 nausea; intolerable grade 2 fatigue). Non-dose-limiting toxicities included left ventricular dysfunction, elevated thyroid stimulating hormone, palmar-plantar erythrodysesthesia, weight loss, and headache. The MTD of cediranib was 12 mg/m(2)/d (adult fixed dose equivalent, 20 mg). At 12 mg/m(2)/d, the median area under the plasma concentration-time curve extrapolated to infinity (AUC(0-∞)) was 900 ng·h/mL, which is similar to adults receiving 20 mg. Objective responses were observed in patients with Ewing sarcoma, synovial sarcoma, and osteosarcoma.
CONCLUSION
The recommended monotherapy dose of cediranib for children with extracranial solid tumors is 12 mg/m(2)/d administered orally, once daily, continuously. A phase II study is in development. |
Visual illusions, delayed grasping, and memory: No shift from dorsal to ventral control | We tested whether a delay between stimulus presentation and grasping leads to a shift from dorsal to ventral control of the movement, as suggested by the perception-action theory of Milner and Goodale (Milner, A.D., & Goodale, M.A. (1995). The visual brain in action. Oxford: Oxford University Press.). In this theory the dorsal cortical stream has a short memory, such that after a few seconds the dorsal information is decayed and the action is guided by the ventral stream. Accordingly, grasping should become responsive to certain visual illusions after a delay (because only the ventral stream is assumed to be deceived by these illusions). We used the Müller-Lyer illusion, the typical illusion in this area of research, and replicated the increase of the motor illusion after a delay. However, we found that this increase is not due to memory demands but to the availability of visual feedback during movement execution, which leads to online corrections of the movement. Because such online corrections are to be expected if the movement is guided by one single representation of object size, we conclude that there is no evidence for a shift from dorsal to ventral control in delayed grasping of the Müller-Lyer illusion. We also performed the first empirical test of a critique Goodale (Goodale, M.A. (2006, October 27). Visual duplicity: Action without perception in the human visual system. The XIV. Kanizsa lecture, Trieste, Italy.) raised against studies finding illusion effects in grasping: Goodale argued that these studies used methods that lead to unnatural grasping which is guided by the ventral stream. Therefore, these studies might never have measured the dorsal stream, but always the ventral stream. We found clear evidence against this conjecture. |
Applied machine vision of plants: a review with implications for field deployment in automated farming operations | Automated visual assessment of plant condition, specifically foliage wilting, reflectance and growth parameters, using machine vision has potential use as input for real-time variable-rate irrigation and fertigation systems in precision agriculture. This paper reviews the research literature for both outdoor and indoor applications of machine vision of plants, which reveals that different environments necessitate varying levels of complexity in both apparatus and nature of plant measurement which can be achieved. Deployment of systems to the field environment in precision agriculture applications presents the challenge of overcoming image variation caused by the diurnal and seasonal variation of sunlight. From the literature reviewed, it is argued that augmenting a monocular RGB vision system with additional sensing techniques potentially reduces image analysis complexity while enhancing system robustness to environmental variables. Therefore, machine vision systems with a foundation in optical and lighting design may potentially expedite the transition from laboratory and research prototype to robust field tool. |
Modeling in Event-B - System and Software Engineering | A practical text suitable for an introductory or advanced course in formal methods, this book presents a mathematical approach to modeling and designing systems using an extension of the B formalism: Event-B. Based on the idea of refinement, the author’s systematic approach allows the user to construct models gradually and to facilitate a systematic reasoning method by means of proofs. Readers will learn how to build models of programs and, more generally, discrete systems, but this is all done with practice in mind. The numerous examples provided arise from various sources of computer system developments, including sequential programs, concurrent programs, and electronic circuits. The book also contains a large number of exercises and projects ranging in difficulty. Each of the examples included in the book has been proved using the Rodin Platform tool set, which is available free for download at www.event-b.org. |
Perceived supervisor support: contributions to perceived organizational support and employee retention. | Three studies investigated the relationships among employees' perception of supervisor support (PSS), perceived organizational support (POS), and employee turnover. Study 1 found, with 314 employees drawn from a variety of organizations, that PSS was positively related to temporal change in POS, suggesting that PSS leads to POS. Study 2 established, with 300 retail sales employees, that the PSS-POS relationship increased with perceived supervisor status in the organization. Study 3 found, with 493 retail sales employees, evidence consistent with the view that POS completely mediated a negative relationship between PSS and employee turnover. These studies suggest that supervisors, to the extent that they are identified with the organization, contribute to POS and, ultimately, to job retention. |
Named Entity Recognition for Vietnamese | Named Entity Recognition is an important task but is still relatively new for Vietnamese, partly due to the lack of a large annotated corpus. In this paper, we present a systematic approach to building a named entity annotated corpus while at the same time building rules to recognize Vietnamese named entities. The resulting open-source system achieves an F-measure of 83%, which is better than that of existing Vietnamese NER systems. © 2010 Springer-Verlag Berlin Heidelberg. |
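For reference, the F-measure quoted above is the harmonic mean of precision and recall computed from entity-level counts; a minimal sketch (the counts used in the example are illustrative, not from the paper):

```python
def f_measure(tp, fp, fn, beta=1.0):
    # F-beta score from true-positive, false-positive, and false-negative counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

With beta = 1 this is the balanced F1 commonly reported for NER systems.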
Variability of Baobab (Adansonia digitata L.) fruits’ physical characteristics and nutrient content in the West African Sahel | The present study was carried out to evaluate variability in fruit characteristics and nutritional quality of Baobab fruits with the aim of providing the background to select trees bearing fruit with desirable characteristics for further utilisation. Vitamin C, total sugar and ash contents were assessed in 178 Baobab fruit samples from 11 sites in Burkina Faso, Mali and Niger. Furthermore the following tree and fruit physical characteristics were recorded: tree height, bark colour, fruit size, pulp weight, seed weight, seed size and pulp colour. The content (mean ± SD) of vitamin C was 4.78 ± 1.02 g kg−1, sugar 514 ± 72 g kg−1 and fruit weight 293 ± 96 g. There was a significant correlation between annual precipitation of the tree population site and vitamin C content but not with sugar content. For sugar, there were significant positive correlations with latitude and longitude. Negative correlations were found between fruit size and both longitude and latitude with smaller fruits generally being found to the north/east. No relation was found between pulp or bark colour and the sugar or vitamin C content. The contents of protein, lipid, carbohydrates, ash and moisture in the seeds ranged from 156 to 159, 143 to 150, 641 to 652, 44 to 49 and 50 to 55.7 g kg−1 respectively. The variation for vitamin C and sugar found within populations is a first indication that valuable gains could be made by selection of superior trees. |
Inductively coupled plasma mass spectroscopy quantitation of platinum-DNA adducts in peripheral blood leukocytes of patients receiving cisplatin- or carboplatin-based chemotherapy. | Platinum-DNA adducts can be assayed in peripheral blood leukocytes by means of atomic absorption spectroscopy and ELISA, and high adduct levels have been correlated previously with favorable clinical response to platinum-based chemotherapy. Our purpose was to study adduct formation in peripheral blood leukocytes by means of a new method, inductively coupled plasma mass spectroscopy (ICP-MS), and to correlate adduct formation with clinical response and toxicity. Platinum (Pt)-DNA adducts were measured by means of ICP-MS in leukocytes of 66 patients receiving cisplatin- or carboplatin-based chemotherapy; samples were collected either before the beginning of treatment (and then incubated in vitro with cisplatin) or 1 and 24 h after the administration of the drug to the patient. The Pt-DNA adduct level in leukocytes from patients exposed to drug in vitro was 14.33 +/- 14.71 fmol/microgram DNA (mean +/- SD), which was not significantly different from the value of 23.4 +/- 19.53 fmol/microgram DNA observed in leukocytes from nine healthy volunteers. In samples collected after the administration of chemotherapy, Pt-DNA adducts ranged from 1.91 +/- 3.59 fmol/microgram DNA (mean +/- SD) at the 1-h time point to 2.61 +/- 3.35 fmol/microgram DNA at 24 h (P > 0.05). Adduct levels in leukocytes exposed in vitro did not correlate with adduct levels from patients treated with cisplatin-based chemotherapy (r = 0.085 and 0.011 at 1 and 24 h, respectively). 
At 24 h, adduct levels in patients receiving cisplatin (3.15 +/- 3.64 fmol/microgram DNA, mean +/- SD) were significantly higher (P = 0.02) than those observed in patients treated with standard-dose carboplatin (0.57 +/- 0.73 fmol/microgram DNA) and also higher than those in patients receiving high-dose carboplatin (1.18 +/- 1.06 fmol/microgram DNA), although the latter difference did not reach statistical significance (P = 0.071). No differences in adduct levels (mean +/- SD) were evident between patients responsive (3.23 +/- 3.51 fmol/microgram DNA) and nonresponsive (2.34 +/- 3.01 fmol/microgram DNA) to chemotherapy. In the homogeneous group of patients treated with the combination of cisplatin and 5-FU, received dose intensity, hemoglobin decrease, and posttreatment creatinine could not be linked with the extent of leukocyte adduct formation. The data presented here demonstrate that ICP-MS allows the detection of adducts in patients treated with cisplatin or carboplatin and suggest that adduct formation in leukocytes is not a major determinant of response or toxicity. |
Knowledge as a Teacher: Knowledge-Guided Structural Attention Networks | Natural language understanding (NLU) is a core component of a spoken dialogue system. Recently, recurrent neural networks (RNNs) have obtained strong results on NLU due to their superior ability to preserve sequential information over time. Traditionally, the NLU module tags semantic slots for utterances considering their flat structures, as the underlying RNN structure is a linear chain. However, natural language exhibits linguistic properties that provide rich, structured information for better understanding. This paper introduces a novel model, knowledge-guided structural attention networks (K-SAN), a generalization of RNNs that additionally incorporates non-flat network topologies guided by prior knowledge. There are two characteristics: 1) important substructures can be captured from small training data, allowing the model to generalize to previously unseen test data; 2) the model automatically figures out the salient substructures that are essential to predict the semantic tags of the given sentences, so that the understanding performance can be improved. The experiments on the benchmark Air Travel Information System (ATIS) data show that the proposed K-SAN architecture can effectively extract salient knowledge from substructures with an attention mechanism, and outperforms state-of-the-art neural-network-based frameworks. |
Topic sentiment mixture: modeling facets and opinions in weblogs | In this paper, we define the problem of topic-sentiment analysis on Weblogs and propose a novel probabilistic model to capture the mixture of topics and sentiments simultaneously. The proposed Topic-Sentiment Mixture (TSM) model can reveal the latent topical facets in a Weblog collection, the subtopics in the results of an ad hoc query, and their associated sentiments. It could also provide general sentiment models that are applicable to any ad hoc topics. With a specifically designed HMM structure, the sentiment models and topic models estimated with TSM can be utilized to extract topic life cycles and sentiment dynamics. Empirical experiments on different Weblog datasets show that this approach is effective for modeling the topic facets and sentiments and extracting their dynamics from Weblog collections. The TSM model is quite general; it can be applied to any text collections with a mixture of topics and sentiments, thus has many potential applications, such as search result summarization, opinion tracking, and user behavior prediction. |
Digital Image Enhancement and Noise Filtering by Use of Local Statistics | Computational techniques involving contrast enhancement and noise filtering on two-dimensional image arrays are developed based on their local mean and variance. These algorithms are nonrecursive and do not require the use of any kind of transform. They share the same characteristics in that each pixel is processed independently. Consequently, this approach has an obvious advantage when used in real-time digital image processing applications and where a parallel processor can be used. For both the additive and multiplicative cases, the a priori mean and variance of each pixel is derived from its local mean and variance. Then, the minimum mean-square error estimator in its simplest form is applied to obtain the noise filtering algorithms. For multiplicative noise a statistical optimal linear approximation is made. Experimental results show that such an assumption yields a very effective filtering algorithm. Examples on images containing 256 × 256 pixels are given. Results show that in most cases the techniques developed in this paper are readily adaptable to real-time image processing. |
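The per-pixel procedure described above (additive-noise case) can be sketched as follows; the window size and noise variance are illustrative assumptions, and a real implementation would vectorize the loops rather than process pixels one at a time:

```python
import numpy as np

def local_stats_filter(img, win=3, noise_var=0.01):
    # Noise filtering from local statistics: each pixel is processed
    # independently using the mean and variance of its neighborhood.
    img = np.asarray(img, dtype=float)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            m = patch.mean()
            # A-priori signal variance: local variance minus the noise variance.
            sig = max(patch.var() - noise_var, 0.0)
            k = sig / (sig + noise_var)
            # Simplest-form MMSE estimate: shrink the pixel toward its local mean
            # in flat regions (k near 0), keep it in detailed regions (k near 1).
            out[i, j] = m + k * (img[i, j] - m)
    return out
```

Because no transform is involved and every pixel is independent, the computation parallelizes trivially, which is the real-time advantage the abstract points to.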
Formal verification of UML-modeled machine controls | Programmable logic controllers (PLCs) are applied in a wide field of applications, especially safety-critical controls, so there is a demand for high reliability of PLCs. Moreover, the increasing complexity of PLC programs and the short time-to-market are hard to cope with. Formal verification techniques such as model checking allow for proving whether a PLC program meets its specification. However, the manual formalization of PLC programs is error-prone and time-consuming. This paper presents a novel approach to applying model checking to machine controls. The PLC program is modeled in the form of Unified Modeling Language (UML) statecharts that serve as the input to our tool, which automatically generates a corresponding formal model for the model checker NuSMV. We evaluate the capabilities of the proposed approach on an industrial machine control. |
Quantifying Political Polarity Based on Bipartite Opinion Networks | Political inclinations of individuals (liberal vs. conservative) largely shape their opinions on several issues such as abortion, gun control, nuclear power, etc. These opinions are openly exerted in online forums, news sites, the parliament, and so on. In this paper, we address the problem of quantifying political polarity of individuals and of political issues for classification and ranking. We use signed bipartite networks to represent the opinions of individuals on issues, and formulate the problem as a node classification task. We propose a linear algorithm that exploits network effects to learn both the polarity labels as well as the rankings of people and issues in a completely unsupervised manner. Through extensive experiments we demonstrate that our proposed method provides an effective, fast, and easy-to-implement solution, while outperforming three existing baseline algorithms adapted to signed networks, on real political forum and US Congress datasets. Experiments on a wide variety of synthetic graphs with varying polarity and degree distributions of the nodes further demonstrate the robustness of our approach. Introduction Many individuals use online media to exert their opinions on a variety of topics. Hotly debated topics include liberal vs. conservative policies such as tax cuts and gun control, social issues such as abortion and same-sex marriage, environmental issues such as climate change and nuclear power plants, etc. These openly debated issues in blogs, forums, and news websites shape the nature of public opinion and affect the direction of politics, media, and public policy. In this paper, we address the problem of quantifying political polarity in opinion datasets. Given a set of individuals from two opposing camps (liberal vs. conservative) debating a set of issues or exerting opinions on a set of subjects (e.g. 
human subjects, political issues, congressional bills), we aim to address two problems: (1) classify which person lies in which camp, and which subjects are favored by each camp; and (2) rank the people and the subjects by the magnitude or extent of their polarity. Here, while the classification enables us to determine the two camps, ranking helps us understand the extremity to which a person/subject is polarized; e.g. same-sex marriage may be highly polarized among the two camps (liberals being strictly in favor, and conservatives being strictly against), while gun control may not belong to a camp fully favoring or opposing it (i.e., is less polarized). Ranking also helps differentiate moderate vs. extreme individuals, as well as unifying vs. polarizing subjects; such as (unifying) bills voted in the same way, e.g. all ‘yea’ by the majority of congressmen vs. (polarizing) bills that are voted quite oppositely by the two camps. A large body of prior research focuses on sentiment analysis on politically oriented text (Cohen and Ruths 2013; Conover et al. 2011b; 2011a; Pak and Paroubek 2010; Tumasjan et al. 2010). The main goal of these works is to classify political text. In this work, on the other hand, we deal with network data to classify its nodes. Moreover, these methods mostly rely on supervised techniques whereas we focus on un/semi-supervised classification. Other prior research on polarization has exploited link mining and graph clustering to study the social structure on social media networks (Adamic and Glance 2005; Livne et al. 2011; Conover et al. 2011b; Guerra et al. 2013) where the edges depict the ‘mention’ or ‘hyperlink’ relations and not opinions. Moreover, these works do not perform ranking. 
Different from previous works, our key approach is to exploit network effects to both classify and rank individuals and political subjects by their polarity. The opinion datasets can be effectively represented as signed bipartite networks, where one set of nodes represents individuals, the other set represents subjects, and signed edges between the individuals and subjects depict the +/− opinions. As such, we cast the problem as a node classification task on such networks. Our main contributions can be summarized as follows: • We cast the political polarity classification and ranking problem as a node classification task on edge-signed bipartite opinion networks. • We propose an algorithm, called signed polarity propagation (SPP), that computes probabilities (i.e., polarity scores) of nodes belonging to one of two classes (e.g., liberal vs. conservative), and uses these scores for classification and ranking. Our method is easy to implement and fast—running time grows linearly with network size. • We show the effectiveness of our algorithm, in terms of both prediction and ranking, on synthetic and real datasets with ground truth from the US Congress and Political Forum. Further, we modify three existing algorithms to handle signed networks, and compare them to SPP. Our experiments reveal the advantages and robustness of our method on diverse settings with various polarity and degree distributions. Related Work Scoring and ranking the nodes of a graph based on the network structure has been studied extensively, with well-known algorithms like PageRank (Brin and Page 1998), and HITS (Kleinberg 1998). These, however, are applied on graphs where the edges are unsigned and therefore cannot be directly used to compute polarity scores. Computing polarity scores can be cast as a network classification problem, where the task is to assign probabilities (i.e., scores) to nodes belonging to one of two classes, which is the main approach we take in our work. 
There exists a large body of work on network-based classification (Getoor et al. 2007; Neville and Jensen 2003; Macskassy and Provost 2003). Semi-supervised algorithms based on network propagation have also been used in classifying political orientation (Lin and Cohen 2008; Zhou, Resnick, and Mei 2011). However, all the existing methods work with unsigned graphs in which the edges do not represent opinions but simply relational connections such as HTML hyperlinks between blog articles or ‘mention’ relations in Twitter. Signed networks have only recently attracted attention (Leskovec, Huttenlocher, and Kleinberg 2010b). Most existing studies have focused on tackling the edge sign prediction problem (Yang et al. 2012; Chiang et al. 2011; Leskovec, Huttenlocher, and Kleinberg 2010a; Symeonidis, Tiakas, and Manolopoulos 2010). Other works include the study of trust/distrust propagation (Guha et al. 2004; DuBois, Golbeck, and Srinivasan 2011; Huang et al. 2012), and product/merchant quality estimation from reviews (McGlohon, Glance, and Reiter 2010). These works do not address the problem of classifying and ranking nodes in signed graphs. With respect to studies on political orientation through social media, (Adamic and Glance 2005; Adar et al. 2004) use link mining and graph clustering to analyze political communities in the blogosphere. While these and most clustering algorithms are designed to work with unsigned graphs, there also exist approaches for clustering signed graphs (Traag and Bruggeman 2009; Lo et al. 2013; Zhang et al. 2013). Clustering, however, falls short in scoring the nodes and hence quantifying polarity for ranking. Related, (Livne et al. 2011) utilize graph and text mining techniques to analyze differences between political parties and their online media usage in conveying a cohesive message. 
Most recently, (Cohen and Ruths 2013) use supervised classification techniques to classify three groups of Twitter users with varying political activity (figures, active, and modest) by their political orientation. Other works that exploit supervised classification using text features include (Conover et al. 2011a; Golbeck and Hansen 2011; Pennacchiotti and Popescu 2011). Similar to (Adamic and Glance 2005), there exist related works that study the social structure for measuring polarity. These works rely on the existence of (unsigned) social links between the users of social media and study the communities induced by polarized debate; an immediate consequence of the homophily principle, which states that people with similar beliefs and opinions tend to establish social ties. (Livne et al. 2011) and (Conover et al. 2011b) both use modularity (Newman 2006) as a measure of segregation between political groups in Twitter. (Guerra et al. 2013) compare modularity of polarized and non-polarized networks and propose two new measures of polarity based on community boundaries. Again, these works do not study opinion networks with signed edges. As our main task is quantifying polarity, work on sentiment analysis is also related. There exists a long list of works on sentiment and polarity prediction in political text (Tumasjan et al. 2010; Awadallah, Ramanath, and Weikum 2010; Conover et al. 2011b; He et al. 2012), as well as in tweets, blogs, and news articles (Pak and Paroubek 2010; Godbole, Srinivasaiah, and Skiena 2007; Thomas, Pang, and Lee 2006; Balasubramanyan et al. 2012). These differ from our work as they use text-based sentiment analysis, while we focus on the network effects in signed graphs. Proposed Method Problem Overview We consider the problem of polarity prediction and ranking in opinion datasets (e.g. forums, blogs, the congress). Opinion datasets mainly consist of a set of people (e.g. 
users in a forum, representatives in the House) and a set of subjects (e.g. political issues, political people, congressional bills). Each person often maintains a positive or negative opinion toward a particular subject (e.g. a representative votes ‘yes’ for a bill, a person ‘likes’ a political individual). This opinion is often an exposition of the person’s latent political leaning—liberal or conservative. For example, we could think of a person with a strong negative opinion toward gay & lesbian rights as being more conservative. |
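The abstract does not give SPP's actual update equations, so the following is only a hypothetical sketch of score propagation on a signed bipartite opinion network: person scores push signed contributions onto subjects and vice versa, with a few labeled seed nodes clamped to their known camp.

```python
import numpy as np

def propagate(edges, n_people, n_subjects, seeds, iters=20):
    # edges: (person, subject, sign) triples with sign in {+1, -1}
    # seeds: {person: polarity} with polarity in {+1, -1} (a few labeled nodes)
    p = np.zeros(n_people)    # person polarity scores in [-1, 1]
    s = np.zeros(n_subjects)  # subject polarity scores in [-1, 1]
    for u, lab in seeds.items():
        p[u] = lab
    for _ in range(iters):
        # Subjects average the signed scores of the people opining on them.
        s_sum, s_cnt = np.zeros(n_subjects), np.zeros(n_subjects)
        for u, v, sign in edges:
            s_sum[v] += sign * p[u]
            s_cnt[v] += 1
        s = np.where(s_cnt > 0, s_sum / np.maximum(s_cnt, 1), s)
        # People average the signed scores of the subjects they opined on.
        p_sum, p_cnt = np.zeros(n_people), np.zeros(n_people)
        for u, v, sign in edges:
            p_sum[u] += sign * s[v]
            p_cnt[u] += 1
        p = np.where(p_cnt > 0, p_sum / np.maximum(p_cnt, 1), p)
        for u, lab in seeds.items():  # clamp the labeled seed nodes
            p[u] = lab
    return p, s
```

Each iteration is linear in the number of edges, consistent with the paper's claim that running time grows linearly with network size; the score magnitudes then give the ranking by polarity.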
No Peek: A Survey of private distributed deep learning | We survey distributed deep learning models for training or inference without accessing raw data from clients. These methods aim to protect confidential patterns in data while still allowing servers to train models. The distributed deep learning methods of federated learning, split learning and large batch stochastic gradient descent are compared in addition to private and secure approaches of differential privacy, homomorphic encryption, oblivious transfer and garbled circuits in the context of neural networks. We study their benefits, limitations and trade-offs with regards to computational resources, data leakage and communication efficiency and also share our anticipated future trends. |
3D AVM system for automotive applications | The car body structure and the vision limitations of the rear-view mirror can create inner-wheel-difference hazards and blind spots, which result in traffic accidents. To solve this problem, many automakers are actively developing Around View Monitoring (AVM). The AVM system provides drivers with video of the area around the car. This paper builds on 2D AVM and further develops 3D AVM, using four fisheye cameras mounted at the front, at the rear, and on the rear-view mirrors of both sides to record video. This paper applies fisheye correction, perspective transform, image stitching, and the scale-invariant feature transform to synthesize the four input videos into a 2D AVM, and the 3D AVM is created by a 2D-to-3D AVM algorithm. Compared to the 2D AVM of previous methods, the 3D AVM proposed in this paper helps drivers judge the collision distance when passing another car on narrow roads and when parking, which can prevent traffic accidents. Moreover, drivers can switch perspectives arbitrarily according to their requirements, without the limitation of a fixed overhead view. Therefore, driving safety is improved. |
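Of the steps listed, the perspective transform that produces the overhead AVM view reduces to applying a 3×3 homography H to image points; a minimal sketch, assuming H is already known from camera calibration (a detail the abstract does not cover):

```python
import numpy as np

def warp_points(H, pts):
    # Map 2D image points through a 3x3 homography H.
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))]) @ H.T
    return homog[:, :2] / homog[:, 2:3]  # divide out the projective scale
```

Warping every pixel of a fisheye-corrected frame through such an H is what yields the bird's-eye tile that the stitching step then blends with its neighbors.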
Thimerosal exposure in infants and developmental disorders: a retrospective cohort study in the United kingdom does not support a causal association. | OBJECTIVE
After concerns about the possible toxicity of thimerosal-containing vaccines in the United States, this study was designed to investigate whether there is a relationship between the amount of thimerosal that an infant receives via diphtheria-tetanus-whole-cell pertussis (DTP) or diphtheria-tetanus (DT) vaccination at a young age and subsequent neurodevelopmental disorders.
METHODS
A retrospective cohort study was performed using 109 863 children who were born from 1988 to 1997 and were registered in general practices in the United Kingdom that contributed to a research database. The disorders investigated were general developmental disorders, language or speech delay, tics, attention-deficit disorder, autism, unspecified developmental delays, behavior problems, encopresis, and enuresis. Exposure was defined according to the number of DTP/DT doses received by 3 and 4 months of age and also the cumulative age-specific DTP/DT exposure by 6 months. Each DTP/DT dose of vaccine contains 50 microg of thimerosal (25 microg of ethyl mercury). Hazard ratios (HRs) for the disorders were calculated per dose of DTP/DT vaccine or per unit of cumulative DTP/DT exposure.
RESULTS
Only in 1 analysis for tics was there some evidence of a higher risk with increasing doses (Cox's HR: 1.50 per dose at 4 months; 95% confidence interval [CI]: 1.02-2.20). Statistically significant negative associations with increasing doses at 4 months were found for general developmental disorders (HR: 0.87; 95% CI: 0.81-0.93), unspecified developmental delay (HR: 0.80; 95% CI: 0.69-0.92), and attention-deficit disorder (HR: 0.79; 95% CI: 0.64-0.98). For the other disorders, there was no evidence of an association with thimerosal exposure.
CONCLUSIONS
With the possible exception of tics, there was no evidence that thimerosal exposure via DTP/DT vaccines causes neurodevelopmental disorders. |
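Since each DTP/DT dose in the study above contains 50 µg of thimerosal (25 µg of ethyl mercury), the cumulative age-specific exposure measure is simply linear in the number of doses received; a trivial sketch of that bookkeeping:

```python
# Per-dose content as stated in the study's methods:
THIMEROSAL_PER_DOSE_UG = 50
ETHYL_MERCURY_PER_DOSE_UG = 25

def cumulative_ethyl_mercury_ug(num_doses):
    # Cumulative ethyl mercury exposure after a given number of DTP/DT doses.
    return num_doses * ETHYL_MERCURY_PER_DOSE_UG
```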
Branched-chain amino acid levels are associated with improvement in insulin resistance with weight loss | Insulin resistance (IR) improves with weight loss, but this response is heterogeneous. We hypothesised that metabolomic profiling would identify biomarkers predicting changes in IR with weight loss. Targeted mass spectrometry-based profiling of 60 metabolites, plus biochemical assays of NEFA, β-hydroxybutyrate, ketones, insulin and glucose were performed in baseline and 6 month plasma samples from 500 participants who had lost ≥4 kg during Phase I of the Weight Loss Maintenance (WLM) trial. Homeostatic model assessment of insulin resistance (HOMA-IR) and change in HOMA-IR with weight loss (∆HOMA-IR) were calculated. Principal components analysis (PCA) and mixed models adjusted for race, sex, baseline weight, and amount of weight loss were used; findings were validated in an independent cohort of patients (n = 22). Mean weight loss was 8.67 ± 4.28 kg; mean ∆HOMA-IR was −0.80 ± 1.73 (range −28.9 to 4.82). Baseline PCA-derived factor 3 (branched chain amino acids [BCAAs] and associated catabolites) correlated with baseline HOMA-IR (r = 0.50, p < 0.0001) and independently associated with ∆HOMA-IR (p < 0.0001). ∆HOMA-IR increased in a linear fashion with increasing baseline factor 3 quartiles. Amount of weight loss was only modestly correlated with ∆HOMA-IR (r = 0.24). These findings were validated in the independent cohort, with a factor composed of BCAAs and related metabolites predicting ∆HOMA-IR (p = 0.007). A cluster of metabolites comprising BCAAs and related analytes predicts improvement in HOMA-IR independent of the amount of weight lost. These results may help identify individuals most likely to benefit from moderate weight loss and elucidate novel mechanisms of IR in obesity. |
Food Image Recognition by Using Convolutional Neural Networks (CNNs) | Food image recognition is one of the promising applications of visual object recognition in computer vision. In this study, a small-scale dataset consisting of 5822 images of ten categories and a five-layer CNN was constructed to recognize these images. The bag-of-features (BoF) model coupled with a support vector machine was first tested as a comparison, resulting in an overall accuracy of 56%, while the CNN performed much better with an overall accuracy of 74%. Data expansion techniques were applied to increase the size of the training set, which achieved a significantly improved accuracy of more than 90% and prevented the overfitting that occurred with the CNN trained without data expansion. Further improvement is within reach by collecting more images and optimizing the network architecture and relevant hyper-parameters. |
Measuring the quality of web content using factual information | Nowadays, many decisions are based on information found on the Web. For the most part, the disseminating sources are not certified, and hence an assessment of the quality and credibility of Web content has become more important than ever. With factual density we present a simple statistical quality measure that is based on facts extracted from Web content using Open Information Extraction. In a first case study, we use this measure to identify featured/good articles in Wikipedia. We compare the factual density measure with word count, a measure that has successfully been applied to this task in the past. Our evaluation corroborates the good performance of word count in Wikipedia, since featured/good articles are often longer than non-featured ones. However, for articles of similar length the word count measure fails, while factual density can separate between them with an F-measure of 90.4%. We also investigate the use of relational features for categorizing Wikipedia articles into featured/good versus non-featured ones. If articles have similar lengths, we achieve an F-measure of 86.7%, and 84% otherwise. |
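Factual density as described is a length-normalized count of extracted facts; a minimal sketch, assuming normalization by word count (the exact length unit is an assumption here), which is precisely what distinguishes it from the raw word-count baseline:

```python
def factual_density(num_facts, num_words):
    # Facts (e.g., Open Information Extraction triples) per word of content.
    # Length normalization lets the measure separate articles of similar size,
    # where raw word count carries no signal.
    return num_facts / num_words if num_words else 0.0
```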
Evidence for infants' internal working models of attachment. | Nearly half a century ago, psychiatrist John Bowlby proposed that the instinctual behavioral system that underpins an infant’s attachment to his or her mother is accompanied by ‘‘internal working models’’ of the social world—models based on the infant’s own experience with his or her caregiver (Bowlby, 1958, 1969/1982). These mental models were thought to mediate, in part, the ability of an infant to use the caregiver as a buffer against the stresses of life, as well as the later development of important self-regulatory and social skills. Hundreds of studies now testify to the impact of caregivers’ behavior on infants’ behavior and development: Infants who most easily seek and accept support from their parents are considered secure in their attachments and are more likely to have received sensitive and responsive caregiving than insecure infants; over time, they display a variety of socioemotional advantages over insecure infants (Cassidy & Shaver, 1999). Research has also shown that, at least in older children and adults, individual differences in the security of attachment are indeed related to the individual’s representations of social relations (Bretherton & Munholland, 1999). Yet no study has ever directly assessed internal working models of attachment in infancy. In the present study, we sought to do so. |
Possible ψ(5S), ψ(4D), ψ(6S), and ψ(5D) signals in Λc Λ̄c | It is shown that the Λc+Λc− signal recently reported by the Belle Collaboration (Phys. Rev. Lett., 101 (2008) 172001) contains clear signs of the ψ(5S) and the ψ(4D) cc̄ vector states, and also some indication of the masses and widths of the ψ(6S) and ψ(5D). Moreover, it is argued that the threshold behaviour of the Λc+Λc− cross-section suggests the presence of the hitherto undetected ψ(3D) state not far below the Λc+Λc− threshold. |
Antibacterial activity of marine-derived fungi | A total of 227 marine isolates of ubiquitous fungi were cultivated on different media and the secondary metabolite content of the extracts (ethyl acetate/chloroform/methanol 3 : 2 : 1) characterized by HPLC. The fungi were secured from animals, plants and sediments of Venezuelan waters (0–10 m) including mangroves and lagoonal areas. The extracts were tested for antibacterial activity. A total of 7 extracts were active towards Vibrio parahaemolyticus and 55 towards Staphylococcus aureus, representing 18 different fungal species from 8 ascomycetous genera. For 61 strains of Penicillium citrinum, antibacterial activity correlated well with the content of secondary metabolites as measured by HPLC. Thirteen isolates of Penicillium steckii produced very similar profiles of secondary metabolites, and 6 of these had activity against either V. parahaemolyticus or S. aureus or both. |
Evaluation of trunk muscle activity in chronic low back pain patients and healthy individuals during holding loads. | OBJECTIVES
Low back pain after load-carrying is the most important disorder in musculoskeletal system and a cause of dysfunction and economic problems. Holding materials can disturb spinal stability; nevertheless, there are few researches about the pattern of trunk muscle recruitment in patients with chronic low back pain (CLBP) during load holding.
METHODS
Ten female patients with CLBP and ten matched healthy subjects participated in this study. Normalized electromyography activation of trunk muscles during holding loads was analyzed.
RESULTS
The low back pain group demonstrated significantly higher activation of the external oblique abdominis muscle while holding a 12 kg load in a flexed trunk position, and lower activation of the internal oblique abdominis muscle while holding 6 and 12 kg loads in a neutral trunk position, than the control group. With the highest external load and trunk flexion, the electrical activity of the back muscles increased significantly in both groups. With increasing load, the activation of the rectus abdominis muscle increased significantly in patients with CLBP, while the activation of the rectus abdominis and internal oblique muscles increased significantly in healthy subjects.
CONCLUSION
Higher activation of global abdominal muscles and lower activation of local abdominal muscles in patients with CLBP may indicate that pain alters the neuromuscular control system. The increased activity of the extensor muscles during trunk flexion is probably needed for stability and for controlling flexion. |
Deep Multi-Task Learning for Aspect Term Extraction with Memory Interaction | We propose a novel LSTM-based deep multi-task learning framework for aspect term extraction from user review sentences. Two LSTMs equipped with extended memories and neural memory operations are designed to jointly handle the extraction of aspect and opinion terms through memory interactions. A sentimental-sentence constraint, implemented with another LSTM, is also added for more accurate prediction. Experimental results on two benchmark datasets demonstrate the effectiveness of our framework. |
Tecnis Symfony Intraocular Lens with a “Sweet Spot” for Tolerance to Postoperative Residual Refractive Errors | Purpose: To investigate the impact of residual astigmatism on visual acuity (VA) after implantation of a novel extended range of vision (ERV) intraocular lens (IOL) based on the correction of spherical and chromatic aberration. Methods: The study enrolled 411 patients bilaterally implanted with the ERV IOL Tecnis Symfony. Visual acuity and subjective refraction were analyzed at the 4- to 6-month follow-up. The eyes were stratified into four groups according to the magnitude of postoperative refractive astigmatism and postoperative spherical equivalent (SE). Results: The astigmatism analysis included 386 eyes of 193 patients, with both eyes of each patient within the same cylinder range. Uncorrected VAs for distance, intermediate and near were better in the group of eyes with the lowest postoperative astigmatism, but even in eyes with residual cylinders of up to 0.75 D the loss of VA lines was not clinically relevant. The orientation of astigmatism did not seem to affect tolerance to the residual cylinder. The SE evaluation included 810 eyes of 405 patients, with both eyes of each patient in the same SE range. Uncorrected VAs for distance, intermediate and near were very similar in all SE groups. Conclusion: Residual cylinders of up to 0.75 D after implantation of the Tecnis Symfony IOL have a very mild impact on monocular and binocular VA. The Tecnis Symfony IOL shows good tolerance to unexpected refractive surprises and thus a broad “sweet spot”. |
Automating HAZOP studies using D-higraphs | In this paper, we present the use of D-higraphs to perform HAZOP studies. D-higraphs is a formalism that captures, in a single model, both the functional and the structural (ontological) components of a system. We present a tool that performs a semi-automatic, guided HAZOP study of a process plant. The diagnostic system uses an expert system to predict the behavior modeled with D-higraphs. The approach is applied to an industrial case study, and its results are compared with those of similar approaches proposed in previous studies. The analysis shows that the proposed methodology fits its purpose, enabling causal reasoning that explains the causes and consequences of deviations, and addresses some of the gaps and drawbacks of previously reported HAZOP assistant tools. |
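The causal reasoning described in this abstract (explaining causes and consequences of process deviations) can be illustrated with a minimal, hypothetical rule base in the style of a HAZOP assistant. The rules, variable names, and guide words below are illustrative assumptions only, not D-higraphs semantics or the paper's actual tool:

```python
# Hypothetical HAZOP-style rule base: each rule maps a
# (variable, guide word) deviation to possible causes and consequences.
RULES = {
    ("flow", "no"): {
        "causes": ["pump failure", "blocked valve"],
        "consequences": ["loss of cooling", "downstream starvation"],
    },
    ("pressure", "more"): {
        "causes": ["blocked outlet", "external fire"],
        "consequences": ["relief valve lift", "vessel rupture risk"],
    },
}


def analyze(variable: str, guide_word: str) -> dict:
    """Return candidate causes and consequences for a deviation.

    Unknown deviations yield empty lists rather than an error, so a
    caller can iterate over every (variable, guide word) pair.
    """
    return RULES.get((variable, guide_word),
                     {"causes": [], "consequences": []})
```

For example, `analyze("flow", "no")` returns the candidate causes `["pump failure", "blocked valve"]`; a real assistant would derive such links from the plant model rather than a hand-written table.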
Who are my neighbors?: A perception model for selecting neighbors of pedestrians in crowds | Pedestrian trajectory prediction is a challenging problem, partly because the future positions of an agent are determined not only by its previous positions but also by its interactions with neighboring agents. Previous methods, such as Social Attention, have treated all agents as neighbors. This ends up assigning high attention weights to agents that are far away from the queried agent and/or moving in the opposite direction, even though such agents may have little to no impact on the queried agent's trajectory. Furthermore, predicting the trajectory of a queried agent while attending to all agents in a large, crowded scenario is inefficient. In this paper, we propose a novel approach for selecting the neighbors of an agent by modeling its perception as a combination of a location model and a locomotion model. We demonstrate the performance of our method by comparing it with the existing state of the art on publicly available datasets. The results show that our neighbor-selection model improves the overall accuracy of trajectory prediction and enables prediction in scenarios with large numbers of agents, where other methods do not scale well. |
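The neighbor-selection idea in this abstract (a location model combined with a locomotion model) can be sketched as a simple perception filter. The sketch below is a hypothetical illustration under assumed parameters (a sensing radius for the location model and a forward field of view for the locomotion model); the function name, radius, and field-of-view values are assumptions, not the paper's actual model:

```python
import math


def select_neighbors(query_pos, query_vel, agents,
                     radius=5.0, fov_deg=210.0):
    """Hypothetical perception filter: keep only agents that are
    (a) within a sensing radius of the query agent (location model) and
    (b) inside its forward field of view, derived from its velocity
    heading (locomotion model). Returns indices into `agents`.
    """
    qx, qy = query_pos
    heading = math.atan2(query_vel[1], query_vel[0])
    half_fov = math.radians(fov_deg) / 2.0
    neighbors = []
    for idx, (ax, ay) in enumerate(agents):
        dx, dy = ax - qx, ay - qy
        dist = math.hypot(dx, dy)
        if dist == 0.0 or dist > radius:
            continue  # the agent itself, or outside the sensing radius
        # Smallest angle between the heading and the bearing to the agent.
        bearing = math.atan2(dy, dx)
        diff = abs((bearing - heading + math.pi) % (2 * math.pi) - math.pi)
        if diff <= half_fov:
            neighbors.append(idx)
    return neighbors
```

For a query agent at the origin moving along +x, an agent 2 m ahead is kept, while one 10 m ahead (beyond the radius) or directly behind (outside the field of view) is filtered out before any attention weights would be computed.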