The hymen morphology in normal newborn Saudi girls.
BACKGROUND Hymen morphology has medico-legal importance. In view of the lack of national norms, establishing the hymen morphology of Saudi newborn infants is essential. SUBJECTS AND METHODS Over a period of 4 months, the genitalia of 345 full-term female newborn infants were examined to determine the shape of the hymen. A previously described labia traction technique was used to classify the hymen morphology into annular, sleeve-like, fimbriated, crescentic, and other types. RESULTS The hymen was present in all 345 female newborn infants examined. A total of 207 (60%) were of the annular type, 76 (22%) were sleeve-like, 43 (12.5%) fimbriated, 17 (4.9%) crescentic, and 2 (0.6%) of other types. CONCLUSION The most common hymen morphology in Saudi newborn girls was annular, followed by sleeve-like, fimbriated, and crescentic. This study may be the first to define the normal configuration of the hymen in this community.
A Behavioral Investigation of Dimensionality Reduction
A cornucopia of dimensionality reduction techniques has emerged over the past decade, leaving data analysts with a wide variety of choices for reducing their data. Means of evaluating and comparing low-dimensional embeddings useful for visualization, however, are very limited. When proposing a new technique, it is common to simply show rival embeddings side-by-side and let human judgment determine which embedding is superior. This study investigates whether such human embedding evaluations are reliable, i.e., whether humans tend to agree on the quality of an embedding. We also investigate what types of embedding structures humans appreciate a priori. Our results reveal that, although experts are reasonably consistent in their evaluation of embeddings, novices generally disagree on the quality of an embedding. We discuss the impact of this result on the way dimensionality reduction researchers should present their results, and on the applicability of dimensionality reduction outside of machine learning.
Secretive ciliates and putative asexuality in microbial eukaryotes.
Facultative sexuality is assumed to have occurred in the ancestor of all extant eukaryotes, but the distribution and maintenance of sex among microbial eukaryotes is still under debate. In this paper, we address the purported asexuality in colpodean ciliates as an exemplary lineage. Colpodeans are a primarily terrestrial clade thought to have arisen up to 900 MYA and contain one known derived sexual species. We conclude that the putative asexuality of this lineage is an observational artifact. We suggest that the same might hold for other microbial eukaryotes, and that many are secretively sexual as well. Theoretical work from the distantly related plants and animals suggests that both the evolutionary success of ancient asexuals and the reversal of the loss of sex are highly unlikely, further suggesting that colpodeans are secretively sexual. However, it remains to be seen to what extent sexual theories and predictions derived from macro-organismic lineages apply also to microbial eukaryotes.
An architecture-level cache simulation framework supporting advanced PMA STT-MRAM
As on-chip integration density rockets up, leakage power dominates the power budget of contemporary CMOS-based memory, especially SRAM-based on-chip caches. To overcome the aggravating “power wall” issue, emerging memory technologies such as STT-MRAM (Spin transfer torque magnetic RAM), PCRAM (Phase change RAM), and ReRAM (Resistive RAM) have been proposed as promising candidates for next-generation cache design. Although several simulation tools are available for cache design, such as NVSim and CACTI, they either cannot support the most advanced PMA (Perpendicular magnetic anisotropy) STT-MRAM model or lack the ability to simulate multi-banked, large-capacity caches. In this paper, we propose an architecture-level design framework for cache design from the device level up to the array structure level, which supports the most advanced PMA STT-MRAM technology. The simulation results are analyzed and compared with those produced by NVSim, confirming the correctness of our framework. The potential benefits of PMA STT-MRAM used as multi-banked L2 and L3 cache are also investigated. We believe that our framework will help computer architecture researchers adopt PMA STT-MRAM in on-chip cache design.
Influencing choice without awareness
Forcing occurs when a magician influences the audience's decisions without their awareness. To investigate the mechanisms behind this effect, we examined several stimulus and personality predictors. In Study 1, a magician flipped through a deck of playing cards while participants were asked to choose one. Although the magician could influence the choice almost every time (98%), relatively few (9%) noticed this influence. In Study 2, participants observed rapid series of cards on a computer, with one target card shown longer than the rest. We expected people would tend to choose this card without noticing that it was shown longest. Both stimulus and personality factors predicted the choice of card, depending on whether the influence was noticed. These results show that combining real-world and laboratory research can be a powerful way to study magic and can provide new methods to study the feeling of free will.
Structure Learning of Bayesian Belief Networks Using Simulated Annealing Algorithm
Bayesian Belief Networks (BBNs) are probabilistic tools that provide suitable facilities for modelling processes under uncertainty. A BBN uses a Directed Acyclic Graph (DAG) to encode the relations between all variables in the problem. Finding the best structure of the DAG (structure learning) is a classic NP-hard problem for BBNs. In recent years, several algorithms have been proposed for this task, such as Hill Climbing, Greedy Thick Thinning, and K2 search. In this paper, we introduce the Simulated Annealing algorithm, described in complete detail, as a new method for BBN structure learning. Finally, the proposed algorithm is compared with other structure learning algorithms in terms of classification accuracy and construction time on standard databases. The experimental results show that the simulated annealing algorithm is the best algorithm in terms of construction time but needs more attention with regard to the classification process.
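The search loop itself is easy to make concrete. Below is a minimal sketch of simulated-annealing structure search over DAG adjacency matrices, not the authors' implementation; the `bic_score` stub is hypothetical and would be replaced by a real network score such as BIC computed from the data.

```python
import math
import random

import numpy as np


def is_dag(adj):
    """Check acyclicity by repeatedly removing nodes with no incoming edges."""
    remaining = set(range(len(adj)))
    while remaining:
        sinks = [v for v in remaining if not any(adj[u][v] for u in remaining)]
        if not sinks:
            return False  # a cycle remains
        remaining -= set(sinks)
    return True


def random_neighbor(adj):
    """Propose a neighbor by adding, deleting, or reversing one edge."""
    new = adj.copy()
    i, j = random.sample(range(len(adj)), 2)
    if new[i][j]:
        new[i][j] = 0
        if random.random() < 0.5:
            new[j][i] = 1  # edge reversal
    else:
        new[i][j] = 1      # edge addition
    return new


def bic_score(adj, data):
    """Placeholder score (hypothetical); a real implementation would
    compute the BIC of the DAG given the data."""
    return -np.sum(adj)   # stub: favors sparse graphs


def anneal_structure(data, n_vars, t0=1.0, cooling=0.995, steps=5000):
    current = np.zeros((n_vars, n_vars), dtype=int)
    best, best_s = current, bic_score(current, data)
    s, t = best_s, t0
    for _ in range(steps):
        cand = random_neighbor(current)
        if not is_dag(cand):
            continue
        cs = bic_score(cand, data)
        # Accept improvements always; worse moves with Boltzmann probability.
        if cs > s or random.random() < math.exp((cs - s) / t):
            current, s = cand, cs
            if s > best_s:
                best, best_s = current, s
        t *= cooling
    return best, best_s
```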
Do LoRa Low-Power Wide-Area Networks Scale?
New Internet of Things (IoT) technologies such as Long Range (LoRa) are emerging which enable power-efficient wireless communication over very long distances. Devices typically communicate directly with a sink node, which removes the need to construct and maintain a complex multi-hop network. Given that a wide area is covered and that all devices communicate directly with a few sink nodes, a large number of nodes have to share the communication medium. For this reason, LoRa provides a range of communication options (centre frequency, spreading factor, bandwidth, coding rate) from which a transmitter can choose. Many combinations of settings are orthogonal and provide simultaneous collision-free communications. Nevertheless, there is a limit to the number of transmitters a LoRa system can support. In this paper we investigate the capacity limits of LoRa networks. Using experiments, we develop models describing LoRa communication behaviour. We use these models to parameterise a LoRa simulation to study scalability. Our experiments show that a typical smart-city deployment can support 120 nodes per 3.8 ha, which is not sufficient for future IoT deployments. LoRa networks can scale quite well, however, if they use dynamic communication parameter selection and/or multiple sinks.
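The capacity reasoning above rests on how long each parameter combination occupies the channel. The sketch below computes LoRa time-on-air following the airtime formula published in Semtech's SX127x datasheets (the function and parameter names are our own); it illustrates why spreading-factor choice dominates scalability.

```python
import math


def lora_airtime(payload_bytes, sf, bw_hz=125_000, cr=1,
                 preamble_symbols=8, explicit_header=True,
                 crc=True, low_dr_optimize=None):
    """Time-on-air (seconds) of one LoRa packet, following the airtime
    formula in the Semtech SX127x datasheets. cr=1..4 maps to coding
    rates 4/5..4/8."""
    t_sym = (2 ** sf) / bw_hz
    if low_dr_optimize is None:
        # The datasheets mandate low-data-rate optimization for long symbols.
        low_dr_optimize = t_sym > 0.016
    de = 1 if low_dr_optimize else 0
    ih = 0 if explicit_header else 1
    crc_bit = 1 if crc else 0
    n_payload = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * crc_bit - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4),
        0,
    )
    t_preamble = (preamble_symbols + 4.25) * t_sym
    return t_preamble + n_payload * t_sym


# A 20-byte payload at SF12/125 kHz occupies the channel more than 20x
# longer than at SF7, which is why parameter selection drives capacity.
print(lora_airtime(20, sf=7), lora_airtime(20, sf=12))
```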
Two-Handed Gestures for Human-Computer Interaction
The present thesis is concerned with the development and evaluation (in terms of accuracy and utility) of systems using hand postures and hand gestures for enhanced Human-Computer Interaction (HCI). In our case, these systems are based on vision techniques, thus only requiring cameras, and no other specific sensors or devices. When dealing with hand movements, it is necessary to distinguish two aspects of these movements: the static aspect and the dynamic aspect. The static aspect is characterized by a pose or configuration of the hand in an image and is related to the Hand Posture Recognition (HPR) problem. The dynamic aspect is defined either by the trajectory of the hand, or by a series of hand postures in a sequence of images. This second aspect is related to the Hand Gesture Recognition (HGR) task. Given the recognized lack of common evaluation databases in the HGR field, a first contribution of this thesis was the collection and public distribution of two databases, containing both one- and two-handed gestures, which part of the results reported here are based upon. On these databases, we compare two state-of-the-art models for the task of HGR. As a second contribution, we propose an HPR technique based on a new feature extraction method. This method has the advantage of being faster than conventional methods while yielding good performance. In addition, we provide comparison results of this method with other state-of-the-art techniques. Finally, the most important contribution of this thesis lies in the thorough study of the state-of-the-art not only in HGR and HPR but also more generally in the field of HCI. The first chapter of the thesis provides an extended study of the state-of-the-art. The second chapter of this thesis contributes to HPR. We propose to apply to HPR a technique employed with success for face detection. This method is based on the Modified Census Transform (MCT) to extract relevant features in images. We evaluate this technique on an existing benchmark database and provide comparison results with other state-of-the-art approaches. The third chapter is related to HGR. In this chapter we describe the first recorded database, containing both one- and two-handed gestures in 3D space. We propose to compare two models used with success in HGR, namely Hidden Markov Models (HMM) and Input-Output Hidden Markov Models (IOHMM). The fourth chapter is also focused on HGR, but more precisely on two-handed gesture recognition. For that purpose, a second database has been recorded using two cameras. The goal of these gestures is to manipulate virtual objects on a screen. We investigate on this second database the state-of-the-art sequence processing techniques we used in the previous chapter. We then discuss the results obtained using different features, and using images from one or two cameras. In conclusion, we propose a method for HPR based on a new feature extraction method. For HGR, we provide two databases and comparison results for two major sequence processing techniques. Finally, we present a complete survey of recent state-of-the-art techniques for both HPR and HGR. We also present some possible applications of these techniques, applied to two-handed gesture interaction. We hope this research will open new directions in the field of hand posture and gesture recognition.
Painterly rendering techniques: a state-of-the-art review of current approaches
In this publication we will look at the different methods presented over the past few decades which attempt to recreate digital paintings. While previous surveys concentrate on the broader subject of non-photorealistic rendering, the focus of this paper is firmly placed on painterly rendering techniques. We compare different methods used to produce different output painting styles such as abstract, colour pencil, watercolour, oriental, oil and pastel. Whereas some methods demand a high level of interaction using a skilled artist, others require simple parameters provided by a user with little or no artistic experience. Many methods attempt to provide more automation with the use of varying forms of reference data. This reference data can range from still photographs, video, 3D polygonal meshes or even 3D point clouds. The techniques presented here endeavour to provide tools and styles that are not traditionally available to an artist. Copyright © 2012 John Wiley & Sons, Ltd.
Peace Journalism: Between advocacy journalism and constructive conflict coverage
The professional norms of good journalism include in particular the following: truthfulness, objectivity, neutrality and detachment. For Public Relations these norms are at best irrelevant. The only thing that matters is success. And this success is measured in terms of achieving specific communication aims which are "externally defined by a client, host organization or particular groups of stakeholders" (Hanitzsch, 2007, p. 2). Typical aims are, e.g., to convince the public of the attractiveness of a product, of the justice of one's own political goals or also of the wrongfulness of a political opponent.
Improving Distributed Representation of Word Sense via WordNet Gloss Composition and Context Clustering
In recent years, there has been an increasing interest in learning distributed representations of word sense. Traditional context-clustering based models usually require careful tuning of model parameters, and typically perform worse on infrequent word senses. This paper presents a novel approach which addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are used by a context-clustering based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on 2 out of 4 metrics in the word similarity task, and 6 out of 13 subtasks in the analogical reasoning task.
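As a rough illustration of the context-clustering stage (not the authors' code), one can average pretrained context vectors per occurrence of a target word and cluster them; in the paper, the cluster initialization would come from the WordNet-gloss embeddings rather than from random seeds as here.

```python
import numpy as np
from sklearn.cluster import KMeans


def sense_embeddings(contexts, word_vectors, n_senses=2, dim=50):
    """Induce sense vectors for one target word by clustering the averaged
    context vectors of its occurrences. `contexts` is a list of token lists
    surrounding the target word; `word_vectors` maps token -> np.ndarray."""
    ctx_matrix = np.stack([
        np.mean([word_vectors.get(tok, np.zeros(dim)) for tok in ctx], axis=0)
        for ctx in contexts
    ])
    km = KMeans(n_clusters=n_senses, n_init=10, random_state=0).fit(ctx_matrix)
    # Each centroid serves as the embedding of one induced sense.
    return km.cluster_centers_, km.labels_


# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=50) for w in ["river", "money", "deposit", "water"]}
contexts = [["river", "water"], ["money", "deposit"], ["water", "river"]]
centers, labels = sense_embeddings(contexts, vocab, n_senses=2)
```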
Design and Evaluation of Metaphor Processing Systems
System design and evaluation methodologies receive significant attention in natural language processing (NLP), with the systems typically being evaluated on a common task and against shared data sets. This enables direct system comparison and facilitates progress in the field. However, computational work on metaphor is considerably more fragmented than similar research efforts in other areas of NLP and semantics. Recent years have seen a growing interest in computational modeling of metaphor, with many new statistical techniques opening routes for improving system accuracy and robustness. However, the lack of a common task definition, shared data set, and evaluation strategy makes the methods hard to compare, and thus hampers our progress as a community in this area. The goal of this article is to review the system features and evaluation strategies that have been proposed for the metaphor processing task, and to analyze their benefits and downsides, with the aim of identifying the desired properties of metaphor processing systems and a set of requirements for their evaluation.
Exploiting Perceptual Anchoring for Color Image Enhancement
The preservation of image quality under various display conditions becomes more and more important in the multimedia era. A considerable amount of effort has been devoted to compensating for the quality degradation caused by dim LCD backlights on mobile devices and desktop monitors. However, most previous enhancement methods for backlight-scaled images only consider the luminance component and overlook the impact of color appearance on image quality. In this paper, we propose a fast and elegant method that exploits the anchoring property of the human visual system to preserve the color appearance of backlight-scaled images as much as possible. Our approach is distinguished from previous ones in many aspects. First, it has a sound theoretical basis. Second, it takes the luminance and chrominance components into account in an integral manner. Third, it has low complexity and can process 720p high-definition videos at 35 frames per second without flicker. The superior performance of the proposed method is verified through psychophysical tests.
On pricing discrete barrier options using conditional expectation and importance sampling Monte Carlo
Estimators for the price of a discrete barrier option based on conditional expectation and importance sampling variance reduction techniques are given. There are erroneous formulas for the conditional expectation estimator published in the literature: we derive the correct expression for the estimator. We use a simulated annealing algorithm to estimate the optimal parameters of exponential twisting in importance sampling, and compare it with a heuristic used in the literature. Randomized quasi-Monte Carlo methods are used to further increase the accuracy of the estimators.
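For orientation, the crude baseline that the conditional-expectation and importance-sampling estimators improve upon is plain Monte Carlo. The sketch below prices a discretely monitored down-and-out call under geometric Brownian motion; all parameter values are illustrative.

```python
import numpy as np


def down_and_out_call_mc(s0, k, barrier, r, sigma, t, n_obs,
                         n_paths=100_000, seed=0):
    """Crude Monte Carlo price of a discretely monitored down-and-out call
    under geometric Brownian motion, with a standard-error estimate."""
    rng = np.random.default_rng(seed)
    dt = t / n_obs
    # Simulate log-price increments at all monitoring dates at once.
    z = rng.standard_normal((n_paths, n_obs))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    paths = s0 * np.exp(log_paths)
    alive = np.all(paths > barrier, axis=1)   # knocked out if barrier breached
    payoff = np.where(alive, np.maximum(paths[:, -1] - k, 0.0), 0.0)
    disc = np.exp(-r * t)
    return disc * payoff.mean(), disc * payoff.std(ddof=1) / np.sqrt(n_paths)


price, se = down_and_out_call_mc(s0=100, k=100, barrier=90, r=0.05,
                                 sigma=0.2, t=1.0, n_obs=12)
```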
Deliberative Systems: Democratizing deliberative systems
'Deliberative democracy' is often dismissed as a set of small-scale, academic experiments. This volume seeks to demonstrate how the deliberative ideal can work as a theory of democracy on a larger scale. It provides a new way of thinking about democratic engagement across the spectrum of political action, from towns and villages to nation states, and from local networks to transnational, even global systems. Written by a team of the world's leading deliberative theorists, Deliberative Systems explains the principles of this new approach, which seeks ways of ensuring that a division of deliberative labour in a system nonetheless meets both deliberative and democratic norms. Rather than simply elaborating the theory, the contributors examine the problems of implementation in a real world of competing norms, competing institutions and competing powerful interests. This pioneering book will inspire an exciting new phase of deliberative research, both theoretical and empirical.
A combination of levobupivacaine and lidocaine for paravertebral block in breast cancer patients undergoing quadrantectomy causes greater hemodynamic oscillations than levobupivacaine alone
AIM To test for differences in hemodynamic and analgesic properties in patients with breast cancer undergoing quadrantectomy with paravertebral block (PVB) induced with a solution of either one or two local anesthetics. METHOD A prospective, single-center, randomized, double-blinded, controlled trial was conducted from June 2014 until September 2015. A total of 85 women with breast cancer were assigned to receive PVB with either 0.5% levobupivacaine (n=42) or 0.5% levobupivacaine with 2% lidocaine (n=43). Hemodynamic variables of interest included intraoperative stroke volume variation (SVV), mean arterial pressure, heart rate, cardiac output, episodes of hypotension, use of crystalloids, and use of inotropes. Analgesic variables of interest were time to block onset, duration of analgesia, and postoperative serial pain assessment using a visual analogue scale. RESULTS Although the use of 0.5% levobupivacaine with 2% lidocaine solution for PVB decreased the mean time-to-block onset (14 minutes; P<0.001), it also caused significantly higher SVV values over the 60 minutes of monitoring (mean difference: 4.33; P<0.001). Furthermore, the patients who received 0.5% levobupivacaine with 2% lidocaine experienced shorter mean duration of analgesia (105 minutes; P=0.006) and more episodes of hypotension (17.5%; P=0.048) and received more intraoperative crystalloids (mean volume: 550 mL; P<0.001). CONCLUSION The use of 0.5% levobupivacaine in comparison with 0.5% levobupivacaine with 2% lidocaine solution for PVB had a longer time-to-block onset, but it also reduced hemodynamic disturbances and prolonged the analgesic effect.
The changing face of Soviet psychology
For better comprehension of the current situation in Soviet psychology one may want to take a quick look at its historical past. A study of human behavior was traditionally carried out within Russia in two different settings: in somewhat more philosophically oriented departments and institutes of psychology, and in the psycho-physiological laboratories affiliated with departments of biology or medicine. The historically significant embodiment of the former tradition is the Moscow Institute of Psychology which was founded in 1912 by Wundt's student Georgy Chelpanov. The second trend has its origin in the laboratories of the famous Russian reflexologists, Ivan Sechenov and Ivan Pavlov, and continues to flourish in the Department of Higher Nervous Activity of Moscow University and at the Institute of Higher Nervous Activity of the Academy of Sciences. After the brief period of hectic activity in the 1920's, when psychologists worked on the creation of a "new Soviet man", psychological studies suffered a major setback under Stalin and got relatively low priority in the Soviet academic hierarchy. Graduate training was available only in Moscow, Leningrad and Tbilisi; the psychology faculty was integrated into either departments of philosophy or schools of education; existent research facilities were used almost exclusively for studies focusing on child welfare and educational psychology. Psychology as a mental health profession was non-existent for all practical purposes. Finally, an aggressive attack of some Pavlovian reflexologists on "mentalistic" psychologists launched in the late 1940's further weakened psychology's position. The true revival came only in the mid-1960's. Independent departments of psychology were established at the Universities of Moscow and Leningrad. In 1971 a new research-oriented Institute of Psychology of the prestigious Academy of Sciences was opened in Moscow. The importance of psychology started to be recognized not only in such traditional areas as education and child welfare, but also by the air
Potassium Ions are More Effective than Sodium Ions in Salt Induced Peptide Formation
Prebiotic peptide formation under aqueous conditions in the presence of metal ions is one of the plausible triggers of the emergence of life. The salt-induced peptide formation reaction has been suggested as being prebiotically relevant and was examined for the formation of peptides in NaCl solutions. In previous work we have argued that the first protocell could have emerged in KCl solution. Using HPLC-MS/MS analysis, we found that K+ is more than an order of magnitude more effective in the L-glutamic acid oligomerization with 1,1'-carbonyldiimidazole in aqueous solutions than the same concentration of Na+, which is consistent with the diffusion theory calculations. We anticipate that prebiotic peptides could have formed with K+ as the driving force, not Na+, as commonly believed.
The Sacred Remains: American Attitudes Toward Death, 1799-1883
This fascinating book explores the changing attitudes toward death and the dead in northern Protestant communities during the nineteenth century. Gary Laderman offers insights into the construction of an "American way of death," illuminating the central role of the Civil War and tracing the birth of the funeral industry in the decades following the war. "Laderman's work is indispensable for understanding the impact of the Civil War on ideas of death-a subject practically ignored in previous studies of death in the United States. Using photographs, diaries, medical histories, art, and literature, he has produced an indispensable work for understanding the nineteenth-century nation."-Phillip Shaw Paludan, Journal of American History "A persuasive and highly readable discussion of how northern Protestants managed death from the early nineteenth century through the Civil War. An excellent book on an important topic, it marks a new high point in the study of death in American history."-Bruce Baird, H-SHEAR Book Review "A compelling portrait of the dramatic changes in the ways that Americans managed death from the late eighteenth century to the Civil War. An excellent, exciting book." -Jon Butler, Yale University "This is an invaluable work for the family historian to understand the roots of the unique American view on death and the funeral industry that still continues to puzzle, if not horrify, most of the western world!"-National Genealogical Society News Magazine
Advertising, Learning, and Consumer Choice in Experience Good Markets
This paper empirically analyzes different effects of advertising in a nondurable, experience good market. A dynamic learning model of consumer behavior is presented in which we allow both "informative" effects of advertising and "prestige" or "image" effects of advertising. This learning model is estimated using consumer-level panel data tracking grocery purchases and advertising exposures over time. Empirical results suggest that in this data, advertising's primary effect was that of informing consumers. The estimates are used to quantify the value of this information to consumers and evaluate welfare implications of an alternative advertising regulatory regime. JEL Classifications: D12, M37, D83. Economics Dept., Boston University, Boston, MA 02115 ([email protected]). This paper is a revised version of the second and third chapters of my doctoral dissertation at Yale University. Many thanks to my advisors: Steve Berry and Ariel Pakes, as well as Lanier Benkard, Russell Cooper, Gautam Gowrisankaran, Sam Kortum, Mike Riordan, John Rust, Roni Shachar, and many seminar participants, including most recently those at the NBER 1997 Winter IO meetings, for advice and comments. I thank the Yale School of Management for generously providing the data used in this study. Financial support from the Cowles Foundation in the form of the Arvid Anderson Dissertation Fellowship is acknowledged and appreciated. All remaining errors in this paper are my own.
CRF-Filters: Discriminative Particle Filters for Sequential State Estimation
Particle filters have been applied with great success to various state estimation problems in robotics. However, particle filters often require extensive parameter tweaking in order to work well in practice. This is based on two observations. First, particle filters typically rely on independence assumptions such as "the beams in a laser scan are independent given the robot's location in a map". Second, even when the noise parameters of the dynamical system are perfectly known, the sample-based approximation can result in poor filter performance. In this paper we introduce CRF-filters, a novel variant of particle filtering for sequential state estimation. CRF-filters are based on conditional random fields, which are discriminative models that can handle arbitrary dependencies between observations. We show how to learn the parameters of CRF-filters based on labeled training data. Experiments using a robot equipped with a laser range-finder demonstrate that our technique is able to learn parameters of the robot's motion and sensor models that result in good localization performance, without the need of additional parameter tweaking.
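For contrast with the discriminative CRF-filter, a minimal generative bootstrap particle filter for a one-dimensional state looks like the sketch below (an illustrative toy, not the paper's robot-localization setup); note the independence assumption baked into the observation weighting step.

```python
import numpy as np


def particle_filter(observations, n_particles=500, motion_std=0.5,
                    obs_std=1.0, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state, the kind of
    generative filter that CRF-filters replace with a discriminative model."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        # Predict: propagate each particle through the motion model.
        particles = particles + rng.normal(0.0, motion_std, n_particles)
        # Weight: observation likelihood per particle (assumes independent
        # Gaussian sensor noise, the kind of assumption the paper questions).
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        # Resample proportionally to weight.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        estimates.append(particles.mean())
    return np.array(estimates)


true_path = np.cumsum(np.random.default_rng(1).normal(0, 0.5, 50))
obs = true_path + np.random.default_rng(2).normal(0, 1.0, 50)
est = particle_filter(obs)
```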
Bacteremia is an independent risk factor for mortality in nosocomial pneumonia: a prospective and observational multicenter study
INTRODUCTION Since positive blood cultures are uncommon in patients with nosocomial pneumonia (NP), the responsible pathogens are usually isolated from respiratory samples. Studies on bacteremia associated with hospital-acquired pneumonia (HAP) have reported fatality rates of up to 50%. The purpose of the study was to compare risk factors, pathogens and outcomes between bacteremic nosocomial pneumonia (B-NP) and nonbacteremic nosocomial pneumonia (NB-NP) episodes. METHODS This is a prospective, observational and multicenter study (27 intensive care units in nine European countries). Consecutive patients requiring invasive mechanical ventilation for an admission diagnosis of pneumonia or on mechanical ventilation for > 48 hours irrespective of admission diagnosis were recruited. RESULTS A total of 2,436 patients were evaluated; 689 intubated patients presented with NP, 224 of them developed HAP and 465 developed ventilator-associated pneumonia. Blood samples were extracted in 479 (69.5%) patients, 70 (14.6%) being positive. B-NP patients had a higher Simplified Acute Physiology Score (SAPS) II score (51.5 ± 19.8 vs. 46.6 ± 17.5, P = 0.03) and were more frequently medical patients (77.1% vs. 60.4%, P = 0.01). Mortality in the intensive care unit was higher in B-NP patients than in NB-NP patients (57.1% vs. 33%, P < 0.001). B-NP patients had a more prolonged mean intensive care unit length of stay after pneumonia onset than NB-NP patients (28.5 ± 30.6 vs. 20.5 ± 17.1 days, P = 0.03). Logistic regression analysis confirmed that medical patients (odds ratio (OR) = 5.72, 95% confidence interval (CI) = 1.93 to 16.99, P = 0.002), methicillin-resistant Staphylococcus aureus (MRSA) etiology (OR = 3.42, 95% CI = 1.57 to 5.81, P = 0.01), Acinetobacter baumannii etiology (OR = 4.78, 95% CI = 2.46 to 9.29, P < 0.001) and days of mechanical ventilation (OR = 1.02, 95% CI = 1.01 to 1.03, P < 0.001) were independently associated with B-NP episodes. Bacteremia (OR = 2.01, 95% CI = 1.22 to 3.55, P = 0.008), diagnostic category (medical patients (OR = 3.71, 95% CI = 2.01 to 6.95, P = 0.02) and surgical patients (OR = 2.32, 95% CI = 1.10 to 4.97, P = 0.03)) and higher SAPS II score (OR = 1.02, 95% CI = 1.01 to 1.03, P = 0.008) were independent risk factors for mortality. CONCLUSIONS B-NP episodes are more frequent in patients with medical admission, MRSA and A. baumannii etiology and prolonged mechanical ventilation, and are independently associated with higher mortality rates.
A root submergence technique for pontic site development in fixed dental prostheses in the maxillary anterior esthetic zone
PURPOSE This case report discusses the effect of a root submergence technique on preserving the periodontal tissue at the pontic site of fixed dental prostheses in the maxillary anterior aesthetic zone. METHODS Teeth with less than ideal structural support for fixed retainer abutments were decoronated at the crestal bone level. After soft tissue closure, the final fixed dental prostheses were placed with the pontics over the submerged root area. Radiographic and clinical observations at the pontic sites were documented. RESULTS The submerged roots at the pontic sites preserved the surrounding periodontium without any periapical pathology. The gingival contour at the pontic site was maintained in harmony with those of the adjacent teeth, as well as the overall form of the arch. CONCLUSIONS The results of this clinical report indicate that a root submergence technique can be successfully applied in pontic site development with fixed dental prostheses, especially in the maxillary anterior esthetic zone.
Ultrasound-Guided Genicular Nerve Block for Knee Osteoarthritis: A Double-Blind, Randomized Controlled Trial of Local Anesthetic Alone or in Combination with Corticosteroid.
BACKGROUND Recently, several studies suggested that radiofrequency (RF) ablation of the genicular nerves is a safe and effective therapeutic procedure for intractable pain associated with chronic knee osteoarthritis (OA). Diagnostic genicular nerve block (GNB) with local anesthetic has generally been conducted before making decisions regarding RF ablation. Although GNB has recently been performed together with corticosteroid, the analgesic effects of corticosteroids for treating chronic pain remain controversial. OBJECTIVES The current study aims to assess the effects of combining corticosteroids and local anesthesia during ultrasound-guided GNB in patients with chronic knee OA. STUDY DESIGN A randomized, double-blinded institutional study. SETTING This study took place at Asan Medical Center in Seoul, Korea. METHODS Forty-eight patients with chronic knee OA were randomly assigned to either the lidocaine alone group (n = 24) or the lidocaine plus triamcinolone (TA) group (n = 24) before ultrasound-guided GNB. Visual analog scale (VAS), Oxford Knee Score (OKS), and global perceived effects (7-point scale) were assessed at baseline and at 1, 2, 4, and 8 weeks after the procedure. RESULTS The VAS scores were significantly lower in the lidocaine plus TA group than in the lidocaine alone group at both 2 (P < 0.001) and 4 (P < 0.001) weeks after GNB. The alleviation of intense pain in the lidocaine plus TA group was sustained up to 2 weeks after the procedure, in accordance with the definition of a minimal clinically important improvement. Although a similar intergroup difference in OKSs was observed at 4 weeks (P < 0.001), the clinical improvement in functional capacity lasted for only one week after the reassessment of OKSs, in accordance with a minimal important change. No patient reported any postprocedural adverse events during the follow-up period. LIMITATIONS The emotional state of the patients, which might affect the perception of knee pain, was not evaluated. The follow-up period was 2 months; this period might be insufficient to validate the short-term effects of GNB. CONCLUSIONS Ultrasound-guided GNB, when combined with a local anesthetic and corticosteroid, can provide short-term pain relief. However, the clinical benefit of corticosteroid administration was not clear in comparison with local anesthesia alone. Given the potential adverse effects, corticosteroids might not be appropriate as adjuvants during a GNB for chronic knee OA. The study protocol was approved by our institutional review board (2012-0210), and written informed consent was obtained from all patients. The trial was registered with the Clinical Research Information Service (KCT 0001139). KEY WORDS Chronic pain, knee osteoarthritis, genicular nerve block, ultrasound, corticosteroid, local anesthetic, visual analog scale, Oxford Knee Score.
Failure to respond autonomically to anticipated future outcomes following damage to prefrontal cortex.
Following damage to specific sectors of the prefrontal cortex, humans develop a defect in real-life decision making, in spite of otherwise normal intellectual performance. The patients so affected may even realize the consequences of their actions but fail to act accordingly, thus appearing oblivious to the future. The neural basis of this defect has resisted explanation. Here we identify a physiological correlate for the defect and discuss its possible significance. We measured the skin conductance responses (SCRs) of 7 patients with prefrontal damage, and 12 normal controls, during the performance of a novel task, a card game that simulates real-life decision making in the way it factors uncertainty, rewards, and penalties. Both patients and controls generated SCRs after selecting cards that were followed by penalties or by reward. However, after a number of trials, controls also began to generate SCRs prior to their selection of a card, while they pondered from which deck to choose, but no patients showed such anticipatory SCRs. The absence of anticipatory SCRs in patients with prefrontal damage is a correlate of their insensitivity to future outcomes. It is compatible with the idea that these patients fail to activate biasing signals that would serve as value markers in the distinction between choices with good or bad future outcomes; that these signals also participate in the enhancement of attention and working memory relative to representations pertinent to the decision process; and that the signals hail from the bioregulatory machinery that sustains somatic homeostasis and can be expressed in emotion and feeling.
Prevalence of kidney stones in the United States.
BACKGROUND The last nationally representative assessment of kidney stone prevalence in the United States occurred in 1994. After a 13-yr hiatus, the National Health and Nutrition Examination Survey (NHANES) reinitiated data collection regarding kidney stone history. OBJECTIVE Describe the current prevalence of stone disease in the United States, and identify factors associated with a history of kidney stones. DESIGN, SETTING, AND PARTICIPANTS A cross-sectional analysis of responses to the 2007-2010 NHANES (n = 12,110). OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Self-reported history of kidney stones. Percent prevalence was calculated and multivariable models were used to identify factors associated with a history of kidney stones. RESULTS AND LIMITATIONS The prevalence of kidney stones was 8.8% (95% confidence interval [CI], 8.1-9.5). Among men, the prevalence of stones was 10.6% (95% CI, 9.4-11.9), compared with 7.1% (95% CI, 6.4-7.8) among women. Kidney stones were more common among obese than normal-weight individuals (11.2% [95% CI, 10.0-12.3] compared with 6.1% [95% CI, 4.8-7.4], respectively; p<0.001). Black, non-Hispanic and Hispanic individuals were less likely to report a history of stone disease than were white, non-Hispanic individuals (black, non-Hispanic: odds ratio [OR]: 0.37 [95% CI, 0.28-0.49], p<0.001; Hispanic: OR: 0.60 [95% CI, 0.49-0.73], p<0.001). Obesity and diabetes were strongly associated with a history of kidney stones in multivariable models. The cross-sectional survey design limits causal inference regarding potential risk factors for kidney stones. CONCLUSIONS Kidney stones affect approximately 1 in 11 people in the United States. These data represent a marked increase in stone disease compared with the NHANES III cohort, particularly in black, non-Hispanic and Hispanic individuals. Diet and lifestyle factors likely play an important role in the changing epidemiology of kidney stones.
Impostor Networks for Fast Fine-Grained Recognition
In this work we introduce impostor networks, an architecture that performs fine-grained recognition with high accuracy using a light-weight convolutional network, making it particularly suitable for fine-grained applications on low-power and non-GPU-enabled platforms. Impostor networks compensate for the lightness of their "backend" network by combining it with a lightweight non-parametric classifier. The combination of a convolutional network and such a nonparametric classifier is trained in an end-to-end fashion. Similarly to convolutional neural networks, impostor networks can fit large-scale training datasets very well, while also being able to generalize to new data points. At the same time, the bulk of computation within impostor networks happens through nearest-neighbor search in high dimensions. Such search can be performed efficiently on a variety of architectures, including standard CPUs, where deep convolutional networks are inefficient. In a series of experiments with three fine-grained datasets, we show that impostor networks are able to boost the classification accuracy of a moderate-sized convolutional network considerably at a very small computational cost.
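Structurally, an impostor network is a learned embedding followed by nearest-neighbor search. The sketch below shows only the inference-time pipeline under that assumption; the `embed` stub stands in for the light-weight CNN backend and is purely hypothetical (the real backend and classifier are trained jointly).

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def embed(images, rng):
    """Stand-in for the light-weight convolutional 'backend' (hypothetical);
    a real impostor network would produce these embeddings with a small CNN
    trained end-to-end together with the classifier."""
    return rng.normal(size=(len(images), 64))


rng = np.random.default_rng(0)
train_imgs = list(range(200))                 # placeholders for images
train_labels = np.repeat(np.arange(10), 20)   # 10 fine-grained classes
test_imgs = list(range(20))

# The nonparametric head: classification happens via nearest-neighbor
# search in embedding space, which runs efficiently on plain CPUs.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(embed(train_imgs, rng), train_labels)
pred = knn.predict(embed(test_imgs, rng))
```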
Real-Time Human Pose Tracking from Range Data
Tracking human pose in real-time is a difficult problem with many interesting applications. Existing solutions suffer from a variety of problems, especially when confronted with unusual human poses. In this paper, we derive an algorithm for tracking human pose in real-time from depth sequences based on MAP inference in a probabilistic temporal model. The key idea is to extend the iterative closest points (ICP) objective by modeling the constraint that the observed subject cannot enter free space, the area of space in front of the true range measurements. Our primary contribution is an extension to the articulated ICP algorithm that can efficiently enforce this constraint. Our experiments show that including this term improves tracking accuracy significantly. The resulting filter runs at 125 frames per second using a single desktop CPU core. We provide extensive experimental results on challenging real-world data, which show that the algorithm outperforms the previous state-of-the-art trackers both in computational efficiency and accuracy.
Nonconscious processes and health.
OBJECTIVES Health behavior theories focus on the role of conscious, reflective factors (e.g., behavioral intentions, risk perceptions) in predicting and changing behavior. Dual-process models, on the other hand, propose that health actions are guided not only by a conscious, reflective, rule-based system but also by a nonconscious, impulsive, associative system. This article argues that research on health decisions, actions, and outcomes will be enriched by greater consideration of nonconscious processes. METHODS A narrative review is presented that delineates research on implicit cognition, implicit affect, and implicit motivation. In each case, we describe the key ideas, how they have been taken up in health psychology, and the possibilities for behavior change interventions, before outlining directions that might profitably be taken in future research. RESULTS Correlational research on implicit cognitive and affective processes (attentional bias and implicit attitudes) has recently been supplemented by intervention studies using implementation intentions and practice-based training that show promising effects. Studies of implicit motivation (health goal priming) have also observed encouraging findings. There is considerable scope for further investigations of implicit affect control, unconscious thought, and the automatization of striving for health goals. CONCLUSION Research on nonconscious processes holds significant potential that can and should be developed by health psychologists. Consideration of impulsive as well as reflective processes will engender new targets for intervention and should ultimately enhance the effectiveness of behavior change efforts.
CommunitySourcing: engaging local crowds to perform expert work via physical kiosks
Online labor markets, such as Amazon's Mechanical Turk, have been used to crowdsource simple, short tasks like image labeling and transcription. However, expert knowledge is often lacking in such markets, making it impossible to complete certain classes of tasks. In this work we introduce an alternative mechanism for crowdsourcing tasks that require specialized knowledge or skill: communitysourcing, the use of physical kiosks to elicit work from specific populations. We investigate the potential of communitysourcing by designing, implementing and evaluating Umati: the communitysourcing vending machine. Umati allows users to earn credits by performing tasks using a touchscreen attached to the machine. Physical rewards (in this case, snacks) are dispensed through traditional vending mechanics. We evaluated whether communitysourcing can accomplish expert work by using Umati to grade Computer Science exams. We placed Umati in a university Computer Science building, targeting students with grading tasks for snacks. Over one week, 328 unique users (302 of whom were students) completed 7771 tasks (7240 by students). 80% of users had never participated in a crowdsourcing market before. We found that Umati was able to grade exams with 2% higher accuracy (at the same price) or at 33% lower cost (at equivalent accuracy) than traditional single-expert grading. Mechanical Turk workers had no success grading the same exams. These results indicate that communitysourcing can successfully elicit high-quality expert work from specific communities.
The combination of four molecular markers improves thyroid cancer cytologic diagnosis and patient management
Papillary thyroid cancer is the most common endocrine malignancy. The most sensitive and specific diagnostic tool for thyroid nodule diagnosis is fine-needle aspiration (FNA) biopsy with cytological evaluation. Nevertheless, FNA biopsy is not always decisive leading to “indeterminate” or “suspicious” diagnoses in 10 %–30 % of cases. BRAF V600E detection is currently used as molecular test to improve the diagnosis of thyroid nodules, yet it lacks sensitivity. The aim of the present study was to identify novel molecular markers/computational models to improve the discrimination between benign and malignant thyroid lesions. We collected 118 pre-operative thyroid FNA samples. All 118 FNA samples were characterized for the presence of the BRAF V600E mutation (exon15) by pyrosequencing and further assessed for mRNA expression of four genes (KIT, TC1, miR-222, miR-146b) by quantitative polymerase chain reaction. Computational models (Bayesian Neural Network Classifier, discriminant analysis) were built, and their ability to discriminate benign and malignant tumors were tested. Receiver operating characteristic (ROC) analysis was performed and principal component analysis was used for visualization purposes. In total, 36/70 malignant samples carried the V600E mutation, while all 48 benign samples were wild type for BRAF exon15. The Bayesian neural network (BNN) and discriminant analysis, including the mRNA expression of the four genes (KIT, TC1, miR-222, miR-146b) showed a very strong predictive value (94.12 % and 92.16 %, respectively) in discriminating malignant from benign patients. The discriminant analysis showed a correct classification of 100 % of the samples in the malignant group, and 95 % by BNN. KIT and miR-146b showed the highest diagnostic accuracy of the ROC curve, with area under the curve values of 0.973 for KIT and 0.931 for miR-146b. The four genes model proposed in this study proved to be highly discriminative of the malignant status compared with BRAF assessment alone. Its implementation in clinical practice can help in identifying malignant/benign nodules that would otherwise remain suspicious.
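The evaluation pipeline can be mimicked on synthetic data. The sketch below is illustrative only: the sample sizes mirror the paper's 48 benign / 70 malignant split, but the expression values are simulated, and a cross-validated linear discriminant with AUC stands in for the paper's fuller model comparison.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
# Synthetic stand-in for the 4-marker panel (KIT, TC1, miR-222, miR-146b):
# 48 benign and 70 malignant samples with shifted expression in malignancy.
x = np.vstack([rng.normal(0, 1, (48, 4)), rng.normal(1, 1, (70, 4))])
y = np.array([0] * 48 + [1] * 70)

lda = LinearDiscriminantAnalysis()
scores = cross_val_predict(lda, x, y, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, scores))
```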
Maternal smoking during pregnancy and child behaviour problems: the Generation R Study.
BACKGROUND Several studies showed that maternal smoking in pregnancy is related to behavioural and emotional disorders in the offspring. It is unclear whether this is a causal association, or can be explained by other smoking-related vulnerability factors for child behavioural problems. METHODS Within a population-based birth cohort, both mothers and fathers reported on their smoking habits at several time-points during pregnancy. Behavioural problems were measured with the Child Behavior Checklist in 4680 children at the age of 18 months. RESULTS With adjustment for age and gender only, children of mothers who continued smoking during pregnancy had higher risk of Total Problems [odds ratio (OR) 1.59, 95% confidence interval (CI): 1.21-2.08] and Externalizing problems (OR 1.45, 95% CI: 1.15-1.84), compared with children of mothers who never smoked. Smoking by father when mother did not smoke, was also related to a higher risk of behavioural problems. The statistical association of parental smoking with behavioural problems was strongly confounded by parental characteristics, chiefly socioeconomic status and parental psychopathology; adjustment for these factors accounted entirely for the effect of both maternal and paternal smoking on child behavioural problems. CONCLUSIONS Maternal smoking during pregnancy, as well as paternal smoking, occurs in the context of other factors that place the child at increased developmental risk, but may not be causally related to the child's behaviour. It is essential to include sufficient information on parental psychiatric symptoms in studies exploring the association between pre-natal cigarette smoke exposure and behavioural disorders.
Clinical narrative classification using discriminant word embeddings with ELM
Clinical texts are inherently complex due to the medical domain expertise required for content comprehension. In addition, the unstructured nature of these narratives poses a challenge for automatically extracting information. In natural language processing, the use of word embeddings is an effective approach to generate word representations (vectors) in a low-dimensional space. In this paper we use a log-linear model (a type of neural language model) and Linear Discriminant Analysis with a kernel-based Extreme Learning Machine (ELM) to map clinical texts to medical codes. Experimental results on clinical texts indicate improvement with ELM in comparison to SVM and neural network approaches.
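A kernel-based ELM reduces to a closed-form ridge solve in kernel space, which is why it pairs cheaply with precomputed document embeddings. A minimal sketch (not the authors' code), assuming an RBF kernel and one-hot targets; the hyperparameters are illustrative.

```python
import numpy as np


def rbf_kernel(a, b, gamma=0.1):
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)


class KernelELM:
    """Kernel-based Extreme Learning Machine: a closed-form ridge solution
    in kernel space, usable as the classifier on top of reduced document
    embeddings."""

    def __init__(self, c=10.0, gamma=0.1):
        self.c, self.gamma = c, gamma

    def fit(self, x, y):
        self.x = x
        t = np.eye(y.max() + 1)[y]                       # one-hot targets
        k = rbf_kernel(x, x, self.gamma)
        # beta = (K + I/C)^{-1} T, the standard kernel-ELM solution.
        self.beta = np.linalg.solve(k + np.eye(len(x)) / self.c, t)
        return self

    def predict(self, x_new):
        k = rbf_kernel(x_new, self.x, self.gamma)
        return (k @ self.beta).argmax(axis=1)


rng = np.random.default_rng(0)
x = rng.normal(size=(100, 20))    # e.g. LDA-reduced document vectors
y = (x[:, 0] > 0).astype(int)     # toy binary medical-code labels
pred = KernelELM().fit(x, y).predict(x)
```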
qp: A Tool for Generating 3D Models of Ancient Greek Pottery
The development of content-based retrieval mechanisms is a very active research area. Present studies are mainly focused on automating the information extraction and indexing processes. For the development and evaluation of such mechanisms there is usually a need for a ground-truth database. In this paper we present a software tool named qp that is able to semi-automatically produce a collection of random 3D vessels with morphological characteristics similar to those found in ancient Greek pottery, a ceramic group exhibited worldwide with great impact on scholars as well as the general public. A 3D vessel collection has been produced by qp and can be used as a test-bed dataset for the development of shape-based 3D descriptors applicable to pottery. Additionally, qp can be considered a 3D vessel modelling tool usable by people unfamiliar with computer graphics technology, and particularly with 3D modelling.
Clinical application of therapeutic plasma exchange in the Three Gorges Area.
OBJECTIVE To analyze the clinical effect of therapeutic plasma exchange (TPE) on 43 patients in the Three Gorges Area. METHODS Plasma was collected by machine and combined with low-molecular-weight dextran and albumin for use as a replacement fluid for TPE treatment of 43 patients suffering from various blood disorders, diseases of the nervous system, ABO-incompatible allogeneic hematopoietic stem cell transplantation and kidney disease. RESULTS The volume of a single TPE was 1.6-2.0 L, performed on average 2.3 times/case, and effective in 88.4% (38/43) of cases. CONCLUSION TPE using the plasma collection machine is a well-tolerated, economic and effective treatment.
Kinect depth restoration via energy minimization with TV21 regularization
Depth maps generated by Kinect cameras often contain a significant amount of missing pixels and strong noise, limiting their usability in many computer vision applications. We present a new energy minimization method to fill the missing regions and remove noise in a depth map, by exploiting the strong correlation between color and depth values in local image neighborhoods. To preserve sharp edges and remove noise from the depth map, we propose to add a TV21 regularization term into the energy function. Finally, we show how to effectively minimize the total energy using an alternating optimization approach. Experimental results show that the proposed method outperforms commonly-used depth inpainting approaches.
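A much-simplified version of such an energy minimization, using a smoothed isotropic TV prior in place of the paper's TV21 term and ignoring the color guidance entirely, can be written as plain gradient descent; all parameters below are illustrative.

```python
import numpy as np


def tv_inpaint(depth, mask, lam=0.1, lr=0.2, iters=500, eps=1e-6):
    """Fill missing depth pixels by gradient descent on a data term over
    observed pixels plus a smoothed isotropic TV prior. A simplified
    stand-in for the paper's TV21-regularized, color-guided energy."""
    d = np.where(mask, depth, depth[mask].mean()).astype(float)
    for _ in range(iters):
        dx = np.diff(d, axis=1, append=d[:, -1:])
        dy = np.diff(d, axis=0, append=d[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        # Divergence of the normalized gradient field = TV subgradient.
        px, py = dx / mag, dy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        grad = -lam * div
        grad[mask] += d[mask] - depth[mask]  # data fidelity, observed pixels
        d -= lr * grad
    return d


rng = np.random.default_rng(0)
clean = np.tile(np.linspace(1, 3, 64), (64, 1))   # toy depth ramp
mask = rng.random((64, 64)) > 0.3                 # ~30% of pixels missing
restored = tv_inpaint(np.where(mask, clean, 0), mask)
```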
Is There a Link Between Executive Compensation and Accounting Fraud?
This study investigates the association between the structure of executive compensation and accounting fraud. We study 50 firms accused of accounting fraud by the Securities and Exchange Commission (SEC) during the period 1996-2003 as compared to firms not accused of accounting fraud during the same period. We find that the probability of accounting fraud is increasing in the percent of total executive compensation that is stock-based (termed stock-based mix) after controlling for governance characteristics, financial performance, financial distress, firm size, and the likelihood of management wanting to obtain external financing. We find that while the unconditional likelihood of accounting fraud is small, a one standard deviation increase in the proportion of compensation that is stock-based increases the probability of an accounting fraud by approximately 68%. For managers to undertake fraud they must perceive positive benefits from it. We examine the extent to which managerial wealth was overstated prior to the alleged fraud by measuring the decline in managerial wealth once the alleged fraud was made public. We find that the value of managerial stock holdings in firms accused of fraud declined by 49% at the median over the six months following the accusation of fraud. We do not conclude from this evidence that stock-based compensation is inefficient. Rather, the evidence suggests that compensation committees face a trade-off between the positive incentive effects afforded by stock-based compensation and the negative effect of increasing the probability of accounting fraud. JEL classification: G30, G32, J33, M41
Portfolio Choice and the Bayesian Kelly Criterion
We derive optimal gambling and investment policies for cases in which the underlying stochastic process has parameter values that are unobserved random variables. For the objective of maximizing logarithmic utility when the underlying stochastic process is a simple random walk in a random environment, we show that a state-dependent control is optimal, which is a generalization of the celebrated Kelly strategy: The optimal strategy is to bet a fraction of current wealth equal to a linear function of the posterior mean increment. To approximate more general stochastic processes, we consider a continuous-time analog involving Brownian motion. To analyze the continuous-time problem, we study the diffusion limit of random walks in a random environment. We prove that they converge weakly to a Kiefer process, or tied-down Brownian sheet. We then find conditions under which the discrete-time process converges to a diffusion, and analyze the resulting process. We analyze in detail the case of the natural conjugate prior, where the success probability has a beta distribution, and show that the resulting limiting diffusion can be viewed as a rescaled Brownian motion. These results allow explicit computation of the optimal control policies for the continuous-time gambling and investment problems without resorting to continuous-time stochastic-control procedures. Moreover, they also allow an explicit quantitative evaluation of the financial value of randomness, the financial gain of perfect information and the financial cost of learning in the Bayesian problem.
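For the simple random walk, the resulting policy is easy to state concretely: for an even-money bet with ±1 increments, the expected increment is 2p - 1, so betting the posterior expectation of the increment means evaluating the classical Kelly fraction at the posterior mean of p. A sketch with a Beta prior (the simulation parameters are illustrative):

```python
import numpy as np


def bayesian_kelly(wins, losses, a=1.0, b=1.0):
    """Fraction of wealth to bet on an even-money gamble with unknown
    success probability, under a Beta(a, b) prior: the Kelly fraction
    2p - 1 evaluated at the posterior mean of p, i.e. a linear function
    of the posterior mean increment."""
    p_hat = (a + wins) / (a + b + wins + losses)
    return max(2.0 * p_hat - 1.0, 0.0)  # never bet on a perceived edge < 0


# Repeated betting with learning: the bet adapts as evidence accrues.
rng = np.random.default_rng(0)
p_true, wealth, wins, losses = 0.55, 1.0, 0, 0
for _ in range(1000):
    f = bayesian_kelly(wins, losses)
    won = rng.random() < p_true
    wealth *= 1 + f if won else 1 - f
    wins, losses = wins + won, losses + (not won)
```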
The Influence of Metabolic Factors for Nonalcoholic Fatty Liver Disease in Women
BACKGROUND/AIMS Women after menopause have increased insulin resistance and visceral fat, which may increase the prevalence of nonalcoholic fatty liver disease (NAFLD). However, the pathogenesis of NAFLD in women has not been clearly defined. In this study, we aimed to determine the risk factors for NAFLD in women. METHODS A retrospective cohort study was conducted. Women who underwent abdominal ultrasonography and blood sampling for routine health check-ups were recruited. RESULTS Among 1,423 subjects, 695 women (48.9%) were in a menopausal state. The prevalence of NAFLD was higher in postmenopausal women than in premenopausal women (27.2% versus 14.4%, P < 0.001). In premenopausal women, low HDL-cholesterol, central obesity, and homeostasis model assessment-estimated insulin resistance showed a significant association with the increased risk of NAFLD in multivariate analysis. In postmenopausal women, the presence of diabetes, triglyceridemia, and central obesity showed a significant association with the risk of NAFLD. The presence of menopause and hormone replacement therapy in postmenopausal women were not risk factors for NAFLD. CONCLUSIONS Our findings showed different metabolic factors for NAFLD in pre- and postmenopausal women. However, the key issues are the same: central obesity and insulin resistance. These results reemphasize the importance of metabolic factors irrespective of menopausal status in the pathogenesis of NAFLD in women.
Disentangled Representations in Neural Models
Representation learning is the foundation for the recent success of neural network models. However, the distributed representations generated by neural networks are far from ideal. Due to their highly entangled nature, they are difficult to reuse and interpret, and they do a poor job of capturing the sparsity which is present in real-world transformations. In this paper, I describe methods for learning disentangled representations in the two domains of graphics and computation. These methods allow neural methods to learn representations which are easy to interpret and reuse, yet they incur little or no penalty to performance. In the Graphics section, I demonstrate the ability of these methods to infer the generating parameters of images and re-render those images under novel conditions. In the Computation section, I describe a model which is able to factorize a multitask learning problem into subtasks and which experiences no catastrophic forgetting. Together these techniques provide the tools to design a wide range of models that learn disentangled representations and better model the factors of variation in the real world.
Modeling and predictive capacity adjustment for job shop systems with RMTs
Demand fluctuations, along with the requirements of cost-effectiveness, high customization, and high product quality, lead to a significant increase in manufacturing dynamics and complexity. These fluctuations disturb the work plan and lead to performance deterioration. Capacity adjustment is one of the major approaches to react to fluctuations in demand. In this paper, we focus on improving the potential of current processes to react flexibly to these fluctuations at the short-term level by means of reconfigurable machine tools (RMTs). Furthermore, we propose a closed-loop feedback control system based on model predictive control (MPC) to compensate for uncertainties and dynamics with respect to logistic objectives. The proposed method provides a new solution for manufacturers to optimize resources and improve production performance in a dynamically changing environment. We demonstrate the effectiveness of the proposed method by simulation of a three-product, four-workstation job shop system.
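To make the MPC idea concrete, here is a minimal receding-horizon sketch for a single workstation, assuming WIP dynamics wip' = wip + demand - capacity and capacity equal to a base rate plus a bounded number of RMTs. All parameters and the cost weights are illustrative assumptions, not the paper's model.

```python
import itertools

BASE, PER_RMT, MAX_RMT = 10.0, 2.0, 3     # assumed capacity model
HORIZON, TARGET = 4, 20.0                 # planning horizon, target WIP level

def plan(wip, demand_forecast):
    """Search RMT assignments over the horizon; apply only the first move."""
    best, best_cost = None, float("inf")
    for moves in itertools.product(range(MAX_RMT + 1), repeat=HORIZON):
        w, cost = wip, 0.0
        for d, m in zip(demand_forecast, moves):
            w = max(0.0, w + d - (BASE + PER_RMT * m))
            cost += (w - TARGET) ** 2 + 0.1 * m   # track target, cheap moves
        if cost < best_cost:
            best, best_cost = moves, cost
    return best[0]                                # receding-horizon principle

print(plan(wip=28.0, demand_forecast=[14, 12, 11, 10]))
```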
A Novel, Gradient Boosting Framework for Sentiment Analysis in Languages where NLP Resources Are Not Plentiful: A Case Study for Modern Greek
Sentiment analysis has played a primary role in text classification. Some years ago, textual information was spreading at manageable rates; nowadays, however, such information has exceeded even the most ambitious expectations and grows constantly within seconds. It is therefore quite complex to cope with the vast amount of textual data, particularly if we also take the incremental production speed into account. Social media, e-commerce, news articles, comments, and opinions are broadcast on a daily basis. A rational solution for handling this abundance of data is to build automated information processing systems for analyzing and extracting meaningful patterns from text. The present paper focuses on sentiment analysis applied to Greek texts. Thus far, there is no wide availability of natural language processing tools for Modern Greek; hence, a thorough analysis of Greek, from the lexical to the syntactical level, is difficult to perform. This paper attempts a different approach, based on the proven capabilities of gradient boosting, a well-known technique for dealing with high-dimensional data. The main rationale is that since English dominates the area of preprocessing tools and quite reliable translation services exist, we can exploit them by translating Greek tokens into English; token-level translation helps preserve precision, since the translation of large texts is not always reliable and meaningful. The new feature set of English tokens is augmented with the original set of Greek tokens, consequently producing a high-dimensional dataset that poses certain difficulties for any traditional classifier. Accordingly, we apply gradient boosting machines, an ensemble algorithm that can learn with different loss functions, providing the ability to work efficiently with high-dimensional data. Moreover, for the task at hand, we deal with a class imbalance issue, since the distribution of sentiments in real-world applications is often skewed. For example, in political forums or electronic discussions about immigration or religion, negative comments overwhelm the positive ones. The class imbalance problem was confronted using a hybrid technique that performs a variation of under-sampling the majority class and over-sampling the minority class, respectively. Experimental results, considering different settings such as translation of tokens versus translation of sentences, limited Greek text preprocessing, and omission of the translation phase, demonstrate that the proposed gradient boosting framework can effectively cope with both high-dimensional and imbalanced datasets and performs significantly better than a plethora of traditional machine learning classification approaches in terms of precision and recall.
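A minimal sketch of the pipeline shape described above: rebalance classes with a combined under-/over-sampling step, then train a gradient boosting classifier. The resampler below is a simple stand-in for the paper's hybrid technique, and the random features stand in for the (translated) token features.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def rebalance(X, y, rng=None):
    """Undersample the majority class and duplicate the minority class."""
    rng = np.random.default_rng(rng)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    n = (len(minority) + len(majority)) // 2
    keep = rng.choice(majority, size=n, replace=False)      # under-sample
    extra = rng.choice(minority, size=n, replace=True)      # over-sample
    idx = np.concatenate([keep, extra])
    return X[idx], y[idx]

X = np.random.default_rng(0).normal(size=(200, 50))         # e.g. token features
y = (np.random.default_rng(1).random(200) < 0.15).astype(int)  # skewed labels
Xb, yb = rebalance(X, y, rng=2)
clf = GradientBoostingClassifier(n_estimators=100).fit(Xb, yb)
```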
Pfp: parallel fp-growth for query recommendation
Frequent itemset mining (FIM) is a useful tool for discovering frequently co-occurring items. Since its inception, a number of significant FIM algorithms have been developed to speed up mining performance. Unfortunately, when the dataset size is huge, both the memory use and the computational cost can still be prohibitively expensive. In this work, we propose to parallelize the FP-Growth algorithm (we call our parallel algorithm PFP) on distributed machines. PFP partitions computation in such a way that each machine executes an independent group of mining tasks. Such partitioning eliminates computational dependencies between machines, and thereby communication between them. Through an empirical study on a large dataset of 802,939 Web pages and 1,021,107 tags, we demonstrate that PFP can achieve virtually linear speedup. Beyond scalability, the empirical study also shows PFP to be promising for supporting query recommendation for search engines.
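A minimal single-process sketch of PFP's key idea: hash items into groups, ship each transaction's group-dependent prefix to that group's shard, and mine every shard independently with no cross-shard communication. Simple pair counting stands in for running FP-Growth on each shard; the group count, grouping function, and data are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

N_GROUPS = 2
group = lambda item: len(item) % N_GROUPS      # deterministic item -> group

transactions = [["milk", "bread", "beer"], ["milk", "bread"], ["bread", "beer"]]

shards = {g: [] for g in range(N_GROUPS)}
for t in transactions:                         # "map" step
    items = sorted(t)                          # stand-in for frequency order
    emitted = set()
    for j in range(len(items) - 1, -1, -1):
        g = group(items[j])
        if g not in emitted:                   # one prefix per group
            emitted.add(g)
            shards[g].append(items[:j + 1])

for g, shard in shards.items():                # "reduce": independent mining
    counts = Counter(p for t in shard for p in combinations(t, 2)
                     if group(p[1]) == g)      # patterns owned by this group
    print(g, counts.most_common(2))
```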
SUPPLE: automatically generating user interfaces
In order to give people ubiquitous access to software applications, device controllers, and Internet services, it will be necessary to automatically adapt user interfaces to the computational devices at hand (e.g., cell phones, PDAs, touch panels, etc.). While previous researchers have proposed solutions to this problem, each has limitations. This paper proposes a novel solution based on treating interface adaptation as an optimization problem. When asked to render an interface on a specific device, our SUPPLE system searches for the rendition that meets the device's constraints and minimizes the estimated effort for the user's expected interface actions. We make several contributions: 1) precisely defining the interface rendition problem, 2) demonstrating how user traces can be used to customize interface rendering to a particular user's usage pattern, 3) presenting an efficient interface rendering algorithm, and 4) performing experiments that demonstrate the utility of our approach.
Marketing financial services to all levels of affluence
This study examines efforts being made by commercial banks to satisfy their obligations under the Community Reinvestment Act while at the same time responding to changes in their economic and competitive environments. Banks are being directly and indirectly mandated by outside forces to find ways to serve all segments of their markets. What one could consider the banks' choices or prerogatives, such as served markets, selection and pursuit of desired market niches, differentiation strategies, and positioning alternatives, are all being affected by outside regulatory forces. In an effort to identify the marketing-related factors that differentiate the two groups, this study compares the policies and characteristics of those institutions that are satisfying their regulatory obligations to those institutions that are not satisfying their obligations. © 1995 John Wiley & Sons, Inc.
Text sentiment analysis based on long short-term memory
With the rapid development of the Internet and the explosion of text data, extracting valuable information from the text ocean has become a significant research subject. To realize multi-classification of text sentiment, this paper proposes an RNN language model based on Long Short-Term Memory (LSTM), which can capture complete sequence information effectively. Compared with the traditional RNN language model, LSTM is better at analyzing the emotion of long sentences. As a language model, LSTM is applied to achieve multi-classification of text emotional attributes: by training a separate model per emotion, we can determine which emotion a sentence belongs to. Numerical experiments show that this approach produces better accuracy and recall than a conventional RNN.
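A minimal LSTM text classifier sketch in PyTorch (the framework choice is an assumption; the paper does not specify one). Vocabulary size, dimensions, and the number of sentiment classes are illustrative.

```python
import torch
import torch.nn as nn

class LSTMSentiment(nn.Module):
    def __init__(self, vocab=10000, emb=128, hidden=256, classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)            # (batch, seq, emb)
        _, (h, _) = self.lstm(x)             # h: final hidden state
        return self.out(h[-1])               # logits over sentiment classes

model = LSTMSentiment()
logits = model(torch.randint(0, 10000, (4, 20)))   # batch of 4 sequences
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 3, 1, 4]))
```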
Effects of aerobic training intensity on resting, exercise and post-exercise blood pressure, heart rate and heart-rate variability
We aimed to investigate the effects of endurance training intensity (1) on systolic blood pressure (SBP) and heart rate (HR) at rest before exercise, and during and after a maximal exercise test; and (2) on measures of HR variability at rest before exercise and during recovery from the exercise test, in at least 55-year-old healthy sedentary men and women. A randomized crossover study comprising three 10-week periods was performed. In the first and third period, participants exercised at lower or higher intensity (33% or 66% of HR reserve) in random order, with a sedentary period in between. Training programmes were identical except for intensity, and were performed under supervision thrice for 1 h per week. The results show that in the three conditions, that is, at rest before exercise, during exercise and during recovery, we found endurance training at lower and higher intensity to reduce SBP significantly (P<0.05) and to a similar extent. Further, SBP during recovery was, on average, not lower than at rest before exercise, and chronic endurance training did not affect the response of SBP after an acute bout of exercise. The effect of training on HR at rest, during exercise and recovery was more pronounced (P<0.05) with higher intensity. Finally, endurance training had no significant effect on sympathovagal balance. In conclusion, in participants at higher age, both training programmes exert similar effects on SBP at rest, during exercise and during post-exercise recovery, whereas the effects on HR are more pronounced after higher intensity training.
Handbook of Constraint Programming
Syntax: We distinguish between two different kinds of constraints: built-in (pre-defined) constraints, which are solved by a built-in constraint solver, and CHR (user-defined) constraints, which are defined by the rules in a CHR program. Built-in constraints include syntactic equality =, true, and false. This distinction allows one to embed and utilize existing constraint solvers as well as side-effect-free host language statements. Built-in constraint solvers are considered as black boxes whose behavior is trusted and that need not be modified or inspected. The solvers for the built-in constraints can be written in CHR itself, giving rise to a hierarchy of solvers [87]. A CHR program is a finite set of rules. There are three kinds of rules: a simplification rule, Name @ H ⇔ C | B; a propagation rule, Name @ H ⇒ C | B; and a simpagation rule, Name @ H \ H′ ⇔ C | B. Name is an optional, unique identifier of a rule; the head H, H′ is a non-empty conjunction of CHR constraints; the guard C is a conjunction of built-in constraints; and the body B is a goal. A goal is a conjunction of built-in and CHR constraints. A trivial guard expression "true" can be omitted from a rule. Simpagation rules abbreviate simplification rules of the form H ∧ H′ ⇔ C | H ∧ B, so there is no further need to discuss them separately. Operational semantics: At runtime, a CHR program is provided with an initial state and is executed until either no more rules are applicable or a contradiction occurs. The operational semantics of CHR is given by a transition system (Fig. 1.14). Let P be a CHR program. We define the transition relation ↦ by two computation steps (transitions), one for each kind of CHR rule. States are goals, i.e., conjunctions of built-in and CHR constraints; states are also called (constraint) stores. Upper-case letters are meta-variables that stand for conjunctions of constraints. The constraint theory CT defines the semantics of the built-in constraints, and G_bi denotes the built-in constraints of G.
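To make the rule kinds concrete, here is a minimal Python sketch of the classic two-rule CHR gcd program (gcd(0) ⇔ true; gcd(N) \ gcd(M) ⇔ 0 < N ≤ M | gcd(M mod N)) executed to a final state over a multiset store of integers. This is an illustrative interpreter for one specific program, not a general CHR engine.

```python
def chr_gcd(store):
    """Apply the two CHR gcd rules until no rule is applicable."""
    store = list(store)
    changed = True
    while changed:
        changed = False
        if 0 in store:                      # simplification: gcd(0) <=> true
            store.remove(0)
            changed = True
            continue
        for i, n in enumerate(store):       # simpagation: keep gcd(N),
            for j, m in enumerate(store):   # rewrite gcd(M) to gcd(M mod N)
                if i != j and 0 < n <= m:
                    store[j] = m % n
                    changed = True
    return store                            # final state (a fixpoint)

print(chr_gcd([12, 8, 20]))  # -> [4]
```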
Research Approaches to Mobile Use in the Developing World: A Review of the Literature
The paper reviews roughly 200 recent studies of mobile (cellular) phone use in the developing world, and identifies major concentrations of research. It categorizes studies along two dimensions. One dimension distinguishes studies of the determinants of mobile adoption from those that assess the impacts of mobile use, and from those focused on the interrelationships between mobile technologies and users. A secondary dimension identifies a subset of studies with a strong economic development perspective. The discussion considers the implications of the resulting review and typology for future research.
Detection and prevention of SIP flooding attacks in voice over IP networks
As voice over IP (VoIP) increasingly gains popularity, traffic anomalies such as SIP flooding attacks are also emerging and becoming a major threat to the technology. Thus, detecting and preventing such anomalies is critical to ensure an effective VoIP system. Existing flooding detection schemes are inefficient in detecting low-rate flooding from dynamic background traffic, or may even fail totally when flooding is launched in a multi-attribute manner by simultaneously manipulating different types of SIP messages. In this paper, we develop an online scheme to detect and subsequently prevent flooding attacks by integrating a novel three-dimensional sketch design with the Hellinger distance (HD) detection technique. The sketch data structure summarizes the incoming SIP messages into a compact and constant-size data set, based on which a separate probability distribution can be established for each SIP attribute. The HD monitors the evolution of the probability distributions and detects flooding attacks when abnormal variations are observed. The three-dimensional design equips our scheme with the advantages of high detection accuracy even for low-rate flooding, robust performance under multi-attribute flooding, and the capability of selectively discarding the offending SIP messages to prevent the attacks. Moreover, we develop an estimation freeze mechanism to protect the detection threshold from being polluted by attacks. Not only do we theoretically analyze the performance of the proposed detection and prevention techniques, but we also resort to extensive simulations to thoroughly examine their performance.
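A minimal sketch of the Hellinger-distance check described above: compare the SIP-attribute distribution of the current window against a baseline distribution and flag an attack when the distance jumps. The threshold, window handling, and message counts are illustrative assumptions, not the paper's exact estimation/freeze mechanism.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (same support)."""
    p = np.asarray(p, dtype=float); p /= p.sum()
    q = np.asarray(q, dtype=float); q /= q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# counts of SIP message types observed per window: [INVITE, 200 OK, ACK, BYE]
baseline = [1000, 980, 975, 960]        # normal traffic mix
current  = [5000, 990, 300, 955]        # INVITE flood, few ACKs

if hellinger(baseline, current) > 0.2:  # assumed detection threshold
    print("possible SIP flooding attack")
```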
CGMOS: Certainty Guided Minority OverSampling
Handling imbalanced datasets is a challenging problem that, if not treated correctly, results in reduced classification performance. Imbalanced datasets are commonly handled using minority oversampling; SMOTE is a successful oversampling algorithm with numerous extensions. SMOTE extensions have no theoretical guarantee during training to work better than SMOTE, and in many instances their performance is data dependent. In this paper we propose a novel extension to the SMOTE algorithm with a theoretical guarantee of improved classification performance. The proposed approach considers the classification performance of both the majority and minority classes. In the proposed approach, CGMOS (Certainty Guided Minority OverSampling), new data points are added by considering certainty changes in the dataset. The paper provides a proof that the proposed algorithm is guaranteed to work better than SMOTE for training data. Further, experimental results on 30 real-world datasets show that CGMOS works better than existing algorithms when using 6 different classifiers.
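For context, here is a minimal SMOTE-style interpolation sketch, i.e., the baseline that CGMOS extends. This is plain neighbor interpolation, not the paper's certainty-guided selection criterion; the data and parameters are illustrative.

```python
import numpy as np

def smote_like(X_min, n_new, k=5, rng=None):
    """Create n_new synthetic minority points by interpolating neighbors."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # k nearest minority neighbors
        j = rng.choice(nbrs)
        lam = rng.random()                     # interpolation factor in [0, 1]
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(out)

X_minority = np.random.default_rng(0).normal(size=(20, 3))
X_synthetic = smote_like(X_minority, n_new=40, k=5, rng=1)
```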
Delta-9-tetrahydrocannabinol may palliate altered chemosensory perception in cancer patients: results of a randomized, double-blind, placebo-controlled pilot trial.
BACKGROUND A pilot study (NCT00316563) to determine if delta-9-tetrahydrocannabinol (THC) can improve taste and smell (chemosensory) perception as well as appetite, caloric intake, and quality of life (QOL) for cancer patients with chemosensory alterations. PATIENTS AND METHODS Adult advanced cancer patients, with poor appetite and chemosensory alterations, were recruited from two sites and randomized in a double-blinded manner to receive either THC (2.5 mg, Marinol(®); Solvay Pharma Inc., n = 24) or placebo oral capsules (n = 22) twice daily for 18 days. Twenty-one patients completed the trial. At baseline and posttreatment, patients completed a panel of patient-reported outcomes: Taste and Smell Survey, 3-day food record, appetite and macronutrient preference assessments, QOL questionnaire, and an interview. RESULTS THC and placebo groups were comparable at baseline. Compared with placebo, THC-treated patients reported improved (P = 0.026) and enhanced (P < 0.001) chemosensory perception and food 'tasted better' (P = 0.04). Premeal appetite (P = 0.05) and proportion of calories consumed as protein increased compared with placebo (P = 0.008). THC-treated patients reported increased quality of sleep (P = 0.025) and relaxation (P = 0.045). QOL scores and total caloric intake were improved in both THC and placebo groups. CONCLUSIONS THC may be useful in the palliation of chemosensory alterations and to improve food enjoyment for cancer patients.
Efficacy and safety of memantine in patients with moderate-to-severe Alzheimer’s disease: results of a pooled analysis of two randomized, double-blind, placebo-controlled trials in Japan
BACKGROUND With the increase in the aging population, there is a pressing need to provide effective treatment options for individuals with Alzheimer's disease (AD). Memantine is an N-methyl-D-aspartate receptor antagonist used to treat AD in > 80 countries worldwide, and studies in the USA and Europe have shown it to be effective in improving language deficits; however, there are currently no data on language improvements in Japanese patients treated with memantine. OBJECTIVES To clarify the efficacy and safety of memantine in Japanese outpatients with moderate to severe AD, using a pooled analysis of two multicenter randomized placebo-controlled trials, a phase 2 dose-finding study and a phase 3 study. RESULTS The final analysis comprised 633 patients (318 receiving memantine and 315 placebo). Memantine produced better outcomes in terms of Severe Impairment Battery-Japanese version, Clinician's Interview-Based Impression of Change plus-Japanese version, Behavioral Pathology in AD Rating Scale, and language scores, versus placebo. The overall incidence of adverse events and adverse reactions was similar between groups. CONCLUSION In this pooled analysis of Japanese patients, memantine achieved better outcomes than placebo in terms of cognition, including attention, praxis, visuospatial ability and language, and behavioral and psychological symptoms, including activity disturbances and aggressiveness.
Trial of early nifedipine in acute myocardial infarction: the Trent study.
Over 30 months 9292 consecutive patients admitted to nine coronary care units with suspected myocardial infarction were considered for admission to a randomised double blind study comparing the effect on mortality of nifedipine 10 mg four times a day with that of placebo. Among the 4801 patients excluded from the study the overall one month fatality rate was 18.2% and the one month fatality rate in those with definite myocardial infarction 26.8%. A total of 4491 patients fulfilled the entry criteria and were randomly allocated to nifedipine or placebo immediately after assessment in the coronary care unit. Roughly 64% of patients in both treatment groups sustained an acute myocardial infarction. The overall one month fatality rates were 6.3% in the placebo treated group and 6.7% in the nifedipine treated group. Most of the deaths occurred in patients with an in hospital diagnosis of myocardial infarction, and their one month fatality rates were 9.3% for the placebo group and 10.2% for the nifedipine group. These differences were not statistically significant. Subgroup analysis also did not suggest any particular group of patients with suspected acute myocardial infarction who might benefit from early nifedipine treatment in the dose studied.
Double-Blind, Randomized Study Evaluating the Glycemic and Anti-inflammatory Effects of Subcutaneous LY2189102, a Neutralizing IL-1β Antibody, in Patients With Type 2 Diabetes
Type 2 diabetes occurs when pancreatic β-cell function fails to compensate for insulin resistance (1,2). As the duration of diabetes increases, β-cell function progressively deteriorates, partly as a result of apoptotic cell death (3–5). Inflammation is associated with pancreatic β-cell apoptosis and reduced insulin sensitivity, supporting the notion that inflammation plays a key role in aggravating or even causing type 2 diabetes specifically or the metabolic syndrome generally (6). Interleukin (IL)-1β is an inflammatory mediator that may contribute to this pathophysiology. IL-1β expression has been observed in β-cells of patients with type 2 diabetes (7). Moreover, production and secretion of IL-1β from β-cells is induced by high glucose levels and inhibits the function and promotes the apoptosis of β-cells (7–10). The IL-1 receptor antagonist (IL-1ra) protects human β-cells from glucose-induced functional impairment (7) and apoptosis, and its expression is decreased in patients with type 2 diabetes (11). The hypothesis that blocking IL-1β activity could be therapeutic in type 2 diabetes was tested clinically with anakinra, a recombinant IL-1ra (12,13). Results from a proof-of-concept study indicated that anakinra modestly improved hemoglobin A1c (HbA1c) relative to placebo, reduced circulating inflammatory cytokines, and showed signs of improved β-cell secretory function after 13 weeks of daily subcutaneous dosing (13). Nine months after treatment completion, anakinra-treated patients continued to have improved proinsulin/insulin ratios and reduced inflammatory cytokines; anakinra responders required less exogenous insulin than did nonresponders (14). Clinical evaluation of a neutralizing IL-1β monoclonal antibody (XOMA 052) in type 2 diabetic patients showed similar results. XOMA 052 improved HbA1c relative to placebo after a single intravenous infusion and after repeated subcutaneous dosing; improvements in fasting blood glucose and insulin sensitivity after subcutaneous dosing were also noted (15). Typically only a small percentage of cytokine receptors require engagement to activate downstream signaling pathways, and cytokines are typically labile proteins expressed at low concentrations. Because anakinra binds to the IL-1 receptor and has a short half-life (4–6 h) (16), it is unclear whether the modest nature of the response in type 2 diabetes was related to compound-specific properties or a reflection of the role of this cytokine pathway in the disease pathogenesis. Although XOMA 052 binds and neutralizes IL-1β directly and has a longer half-life, it was dosed in a limited number of subjects and for a short duration.
Adult norms for a commercially available Nine Hole Peg Test for finger dexterity.
The Nine Hole Peg Test is commonly used by occupational therapists as a simple, quick assessment for finger dexterity. The purpose of this study was to evaluate the interrater and test-retest reliability of the commercially available Smith & Nephew Rehabilitation Division version of the Nine Hole Peg Test, and to establish new adult norms for the Nine Hole Peg Test for finger dexterity utilizing this particular version. Two of the researchers established interrater and test-retest reliability by evaluating 25 occupational therapy student volunteers. Seven hundred and three subjects, ranging in age from 21 to 71+ years, were tested to establish norms, using the standard protocol. Results showed high interrater reliability and only moderate test-retest reliability. Scores obtained by using the commercially available version were not statistically different from previously published norms (Mathiowetz, Weber, Kashman, & Volland, 1985). This study supports the original norms and further assists occupational therapists to evaluate dexterity accurately.
HEALTH AND OCCUPATIONAL SAFETY FOR FEMALE WORKFORCE OF GARMENT INDUSTRIES IN BANGLADESH
It is the outsourcing of supply chains that opened a new door of economic emancipation for Bangladesh. The readymade garments (RMG) sector of Bangladesh emerged as a value-chain member of European and US cloth merchants and retailers, many of whom found economic justification in outsourcing production functions to Bangladesh. The RMG sector of Bangladesh responded to this outsourcing demand quite successfully. The overwhelming success of the RMG sector has raised the country's status in the global context in terms of economic and social development indicators. Despite the challenges that lie ahead, Bangladesh has performed well in realizing the benefits of economic globalization, particularly through its RMG sector. This paper briefly discusses the health and safety problems of the female workforce of garment industries in Bangladesh, based on the industry environment, the workers' residential environment, working conditions, age, health problems, causes of disease, causes of fire accidents, and available medical facilities.
Transversus abdominis muscle release: a novel approach to posterior component separation during complex abdominal wall reconstruction.
BACKGROUND Several modifications of the classic retromuscular Stoppa technique to facilitate dissection beyond the lateral border of the rectus sheath were recently reported. We describe a novel technique of transversus abdominis muscle release (TAR) for posterior component separation during major abdominal wall reconstructions. METHODS Retrospective review of consecutive patients undergoing TAR. Briefly, the retromuscular space is developed laterally to the edge of the rectus sheath. The posterior rectus sheath is incised 0.5-1 cm medial to the linea semilunaris to expose the medial edge of the transversus abdominis muscle. The muscle then is divided, allowing entrance to the space anterior to the transversalis fascia. The posterior rectus fascia then is advanced medially. The mesh is placed as a sublay and the linea alba is restored ventral to the mesh. RESULTS Between December 2006 and December 2009, we used this technique successfully in 42 patients with massive ventral defects. Thirty-two (76.2%) patients had recurrent hernias. The average mesh size used was 1,201 ± 820 cm(2) (range, 600-2,700). Ten (23.8%) patients developed various wound complications, requiring reoperation/debridement in 3 patients. At a median follow-up of 26.1 months, there have been 2 (4.7%) recurrences. CONCLUSIONS Our novel technique for posterior component separation was associated with low perioperative morbidity and a low recurrence rate. Overall, transversus abdominis muscle release may be an important addition to the armamentarium of surgeons undertaking major abdominal wall reconstructions.
On estimation of wind velocity, angle-of-attack and sideslip angle of small UAVs using standard sensors
It is proposed to estimate wind velocity, Angle-Of-Attack (AOA) and Sideslip Angle (SSA) of a fixed-wing Unmanned Aerial Vehicle (UAV) using only kinematic relationships with a Kalman Filter (KF), avoiding the need to know aerodynamic models or other aircraft parameters. Assuming that measurements of airspeed and attitude of the UAV are available as inputs, a linear 4th-order time-varying model of the UAV's longitudinal speed and the 3-D wind velocity is used to design a Kalman filter driven by GNSS velocity and airspeed sensor measurements. An observability analysis shows that the states can be estimated along with an airspeed sensor calibration factor provided that the flight maneuvers are persistently exciting, i.e. the aircraft changes attitude. The theoretical analysis of the KF shows that global exponential stability of the estimation error is achieved under these conditions. The method is tested using experimental data from three different UAVs, using their legacy autopilots to provide basic estimates of UAV velocity and attitude. The results show that convergent estimates are achieved with typical flight patterns, indicating that excitation resulting from the environment and normal flight operation is sufficient. Wind velocity estimates correlate well with observed winds at the ground. The validation of the AOA and SSA estimates is preliminary, but indicates some degree of correlation between the AOA estimate and vertical accelerometer measurements, as would be expected since lift force can be modeled as a linear function of AOA in normal flight.
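A minimal sketch of the kinematic idea above: GNSS-measured ground velocity equals the airspeed vector rotated into the navigation frame plus wind. The filter below estimates a slowly varying 3-D wind and a scalar airspeed calibration factor with Kalman-style scalar updates; the paper's actual 4th-order time-varying filter and noise tuning are more elaborate, so treat all values as assumptions. Note that the scale factor only becomes observable when v_air_nav varies, i.e., when the aircraft maneuvers.

```python
import numpy as np

def update(wind, scale, P, v_gnss, v_air_nav, R_meas=0.5, q=1e-4):
    """One update. State x = [wind_N, wind_E, wind_D, scale]."""
    x = np.concatenate([wind, [scale]])
    P = P + q * np.eye(4)                       # random-walk process noise
    for axis in range(3):                       # model: v_gnss = wind + scale * v_air
        H = np.zeros(4); H[axis] = 1.0; H[3] = v_air_nav[axis]
        y = v_gnss[axis] - H @ x                # innovation
        S = H @ P @ H + R_meas
        K = (P @ H) / S                         # Kalman gain
        x = x + K * y
        P = P - np.outer(K, H @ P)
    return x[:3], x[3], P

P, wind, scale = np.eye(4), np.zeros(3), 1.0
# v_air_nav: airspeed vector rotated into the navigation frame via attitude
wind, scale, P = update(wind, scale, P, v_gnss=np.array([12., 3., 0.]),
                        v_air_nav=np.array([10., 1., 0.]))
```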
Are there basic emotions?
Ortony and Turner's (1990) arguments against those who adopt the view that there are basic emotions are challenged. The evidence on universals in expression and in physiology strongly suggests that there is a biological basis to the emotions that have been studied. Ortony and Turner's reviews of this literature are faulted, and their alternative theoretical explanations do not fit the evidence. The utility of the basic emotions approach is also shown in terms of the research it has generated.
Mir-34: A New Weapon Against Cancer?
The microRNA (miRNA) miR-34a is a key regulator of tumor suppression. It controls the expression of a plethora of target proteins involved in the cell cycle, differentiation, and apoptosis, and antagonizes processes that are necessary for basic cancer cell viability as well as cancer stemness, metastasis, and chemoresistance. In this review, we focus on the molecular mechanisms of miR-34a-mediated tumor suppression, giving emphasis to the main miR-34a targets as well as the principal regulators involved in the modulation of this miRNA. Moreover, we shed light on the role of miR-34a in modulating responsiveness to chemotherapy and on the phytonutrient-mediated regulation of miR-34a expression and activity in cancer cells. Given the broad anti-oncogenic activity of miR-34a, we also discuss the substantial benefits of a new therapeutic concept based on nanotechnology delivery of miRNA mimics. In fact, the replacement of oncosuppressor miRNAs provides an effective strategy against tumor heterogeneity, and selective RNA-based delivery systems seem to be an excellent platform for safe and effective targeting of the tumor.
Big-Data in Climate Change Models — A Novel Approach with Hadoop MapReduce
The goal of this work is to present a software package which is able to process binary climate data by spawning Map-Reduce tasks, while introducing minimum computational overhead and without modifying existing application code. The package is formed by the combination of two tools: Pipistrello, a Java utility that allows users to execute Map-Reduce tasks over any kind of binary file, and Tina, a lightweight Python library that, building on top of Pipistrello, is able to process scientific datasets, including NetCDF files. We benchmarked the combination of these two tools using a test Apache Hadoop cluster (4 nodes) and a relatively small data set (200 GB), obtaining encouraging results. With larger clusters and larger storage, Tina and Pipistrello should be able to scale up and analyse hundreds of terabytes of scientific data faster, more easily, and more efficiently.
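An illustrative map/reduce pattern for gridded climate data, assuming records of (latitude_band, temperature) have already been decoded from the binary files; that decoding is precisely what Pipistrello/Tina handle, so this sketch shows only the aggregation logic that would run as Map-Reduce tasks.

```python
from collections import defaultdict

def mapper(record):
    lat_band, temp = record
    yield lat_band, (temp, 1)

def reducer(key, values):
    s = sum(v[0] for v in values); n = sum(v[1] for v in values)
    return key, s / n                       # mean temperature per band

records = [("tropics", 299.1), ("tropics", 300.4), ("polar", 253.0)]
groups = defaultdict(list)
for rec in records:                         # local stand-in for the shuffle
    for k, v in mapper(rec):
        groups[k].append(v)
print(dict(reducer(k, vs) for k, vs in groups.items()))
```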
To Graft or Not to Graft? Evidence-Based Guide to Decision Making in Oral Bone Graft Surgery
Rehabilitation of the incomplete dentition by means of osseointegrated implants represents a highly predictable and widespread therapy. Advantages of oral implant treatment over conventional non-surgical prosthetic rehabilitation include avoidance of removable dentures and conservation of tooth structure in the remaining dentition. Implant placement necessitates sufficient bone quantity as well as bone quality, which may be compromised following tooth loss or trauma. Sufficient alveolar bone to host implants of 10 mm in length and 3-4 mm in diameter has traditionally been regarded as the minimum requirement to allow bone-driven implant placement. Three-dimensional bone morphology, however, may not permit favourable implant positioning. In the age of prosthetically driven implant treatment, bone grafting procedures may be indicated not exclusively due to lack of bone volume, but to ensure favourable biomechanics and long-term esthetic outcome. A vast variety of treatment modalities have been suggested to increase alveolar bone volume and thus overcome the intrinsic limitations of oral implantology. Although success rates of various bone graft techniques are high, inherent disadvantages of augmentation procedures include prolonged treatment times, raised treatment costs, and increased surgical invasiveness associated with patient morbidity and potential complications. Therefore, treatment tactics that obviate bone graft surgery are naturally preferred by both patients and surgeons. Non-grafting options, such as implants reduced in length and diameter or the use of computer-guided implant surgery, may on the other hand carry the risk of lower predictability and reduced long-term success. To graft or not to graft? That is the question clinicians face day-to-day in oral implant rehabilitation.
CN-DBpedia: A Never-Ending Chinese Knowledge Extraction System
Great efforts have been dedicated to harvesting knowledge bases from online encyclopedias. These knowledge bases play important roles in enabling machines to understand texts. However, most current knowledge bases are in English, and non-English knowledge bases, especially Chinese ones, are still very rare. Many previous systems that extract knowledge from online encyclopedias, although applicable for building a Chinese knowledge base, still suffer from two challenges. The first is that it requires great human effort to construct an ontology and build a supervised knowledge extraction model. The second is that the update frequency of knowledge bases is very slow. To address these challenges, we propose a never-ending Chinese knowledge extraction system, CN-DBpedia, which can automatically generate a knowledge base that is ever-increasing in size and constantly updated. Specifically, we reduce the human costs by reusing the ontology of existing knowledge bases and building an end-to-end fact extraction model. We further propose a smart active update strategy to keep our knowledge base fresh with little human cost. The 164 million API calls of the published services justify the success of our system.
The impact of isoniazid preventive therapy and antiretroviral therapy on tuberculosis in children infected with HIV in a high tuberculosis incidence setting.
BACKGROUND Tuberculosis (TB) is a major cause of morbidity and mortality among children infected with HIV. Strategies to prevent TB in children include isoniazid preventive therapy (IPT) and antiretroviral therapy (ART). IPT and ART have been reported to reduce TB incidence in adults but there are few studies in children. OBJECTIVE To investigate the combined effect of IPT and ART on TB risk in children infected with HIV. METHODS A cohort analysis was done within a prospective, double-blinded, placebo-controlled trial of isoniazid (INH) compared with placebo in children infected with HIV in Cape Town, South Africa, a high TB incidence setting. In May 2004 the placebo arm was terminated and all children were switched to INH. ART was not widely available at the start of the study, but children were started on ART following the establishment of the national ART program in 2004. Data were analysed using Cox proportional hazard regression. RESULTS After adjusting for age, nutritional status and immunodeficiency at enrolment, INH alone, ART alone and INH combined with ART reduced the risk of TB disease by 0.22 (95% CI 0.09 to 0.53), 0.32 (95% CI 0.07 to 1.55) and 0.11 (95% CI 0.04 to 0.32) respectively. INH reduced the risk of TB disease in children on ART by 0.23 (95% CI 0.05 to 1.00). CONCLUSIONS The finding that IPT may offer additional protection in children on ART has significant public health implications because this offers a possible strategy for reducing TB in children infected with HIV. Widespread use of this strategy will however require screening of children for active TB disease. Trial registration Trial registration-Clinical Trials NCT00330304.
Improving Relation Extraction by Pre-trained Language Representations
Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code.
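A minimal sketch of the general recipe the abstract describes: fine-tune a pre-trained Transformer on relation classification from plain text with marked entity mentions, instead of hand-built linguistic features. This is not the authors' TRE code or backbone; the BERT checkpoint, the [E1]/[E2] marker convention, and the relation id are illustrative assumptions (TACRED has 42 relation labels including no_relation).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=42)            # 42 = TACRED label count

text = "[E1] Bill Gates [/E1] founded [E2] Microsoft [/E2] in 1975."
batch = tok(text, return_tensors="pt")           # markers kept as plain text here
labels = torch.tensor([7])                       # hypothetical relation id

loss = model(**batch, labels=labels).loss        # fine-tuning loss
loss.backward()                                  # an optimizer step would follow
```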
Wideband Compact CPW-Fed Circularly Polarized Antenna for Universal UHF RFID Reader
A compact circularly polarized slot antenna is proposed in this communication, which has a much wider operating bandwidth compared to antennas of similar size. This antenna is designed for universal ultrahigh frequency (UHF) radio-frequency identification (RFID) reader applications. The antenna is coplanar waveguide (CPW) fed by an L-shaped feeding line. To achieve good impedance matching and broadband CP operation, two L-shaped strip lines are inserted into the circular slot in the ground plane. The measured 10-dB return loss bandwidth is 380 MHz (618-998 MHz, 47.0% centered at 808 MHz). The measured 3-dB axial ratio bandwidth is about 332 MHz (791-1123 MHz, 34.7% centered at 957 MHz). The overall size of the antenna is 120 × 120 × 0.8 mm³.
Ultra Low Power Capless Low-Dropout Voltage Regulator (Master Thesis Extended Abstract)
Modern power management System-on-a-Chip (SoC) design demands fully integrated solutions in order to decrease costly features such as total chip area and power consumption, while maintaining or improving the fast transient response to signal variations. Low-Dropout (LDO) voltage regulators, as power management devices, must comply with these recent technological and industrial trends. An ultra-low-power capless low-dropout voltage regulator with a resistive feedback network and a new dynamically biased, multiloop compensation strategy is proposed. Its dynamic closed-loop bandwidth gain and dynamic damping enhance the LDO's fast load and line transient responses. These are ensured by the output class-AB stage of the error amplifier and the feedback loop of the non-linear derivative current amplifier of the LDO. The proposed LDO, designed for a maximum output current of 50 mA in TSMC 65 nm, requires a quiescent current of 3.7 μA and presents excellent line and load transients (<10%) and fast transient response.
Metadata Embeddings for User and Item Cold-start Recommendations
I present a hybrid matrix factorisation model representing users and items as linear combinations of their content features’ latent factors. The model outperforms both collaborative and content-based models in cold-start or sparse interaction data scenarios (using both user and item metadata), and performs at least as well as a pure collaborative matrix factorisation model where interaction data is abundant. Additionally, feature embeddings produced by the model encode semantic information in a way reminiscent of word embedding approaches, making them useful for a range of related tasks such as tag recommendations.
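A minimal sketch of the core idea: a user (or item) embedding is the sum of the latent vectors of its metadata features, so a new user or item with known features gets a useful representation without any interaction history. The dimensions, feature ids, and randomly initialized factors below are illustrative assumptions (in practice the factors are learned from interactions).

```python
import numpy as np

n_features, k = 1000, 32
rng = np.random.default_rng(0)
feature_emb = rng.normal(scale=0.1, size=(n_features, k))   # learned in training
feature_bias = np.zeros(n_features)

def represent(feature_ids):
    """Embed an entity as the sum of its features' latent factors."""
    return feature_emb[feature_ids].sum(axis=0), feature_bias[feature_ids].sum()

u_vec, u_b = represent([3, 57, 420])     # cold-start user's metadata tags
i_vec, i_b = represent([11, 99])         # item's metadata tags
score = u_vec @ i_vec + u_b + i_b        # predicted user-item affinity
```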
Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers
Deep learning has recently seen rapid development and received significant attention due to its state-of-the-art performance on previously-thought hard problems. However, because of the internal complexity and nonlinear structure of deep neural networks, the underlying decision making processes for why these models are achieving such performance are challenging and sometimes mystifying to interpret. As deep learning spreads across domains, it is of paramount importance that we equip users of deep learning with tools for understanding when a model works correctly, when it fails, and ultimately how to improve its performance. Standardized toolkits for building neural networks have helped democratize deep learning; visual analytics systems have now been developed to support model explanation, interpretation, debugging, and improvement. We present a survey of the role of visual analytics in deep learning research, which highlights its short yet impactful history and thoroughly summarizes the state-of-the-art using a human-centered interrogative framework, focusing on the Five W's and How (Why, Who, What, How, When, and Where). We conclude by highlighting research directions and open research problems. This survey helps researchers and practitioners in both visual analytics and deep learning to quickly learn key aspects of this young and rapidly growing body of research, whose impact spans a diverse range of domains.
WatchWriter: Tap and Gesture Typing on a Smartwatch Miniature Keyboard with Statistical Decoding
We present WatchWriter, a finger operated keyboard that supports both touch and gesture typing with statistical decoding on a smartwatch. Just like on modern smartphones, users type one letter per tap or one word per gesture stroke on WatchWriter but in a much smaller spatial scale. WatchWriter demonstrates that human motor control adaptability, coupled with modern statistical decoding and error correction technologies developed for smartphones, can enable a surprisingly effective typing performance despite the small watch size. In a user performance experiment entirely run on a smartwatch, 36 participants reached a speed of 22-24 WPM with near zero error rate.
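A minimal sketch of statistical tap decoding in the spirit described above: combine a spatial likelihood of the tap points given each candidate word's key centers with a language-model prior, and pick the best word. The key layout, two-word vocabulary, and noise scale sigma are illustrative assumptions, not WatchWriter's actual decoder.

```python
import math

KEYS = {"c": (2.5, 2), "a": (0.5, 1), "t": (4.5, 0), "r": (3.5, 0)}
VOCAB = {"cat": 0.7, "car": 0.3}             # unigram prior P(word)

def log_likelihood(taps, word, sigma=0.6):
    """Sum of log N(tap | key center, sigma^2 I) over the word's letters."""
    ll = 0.0
    for (x, y), ch in zip(taps, word):
        kx, ky = KEYS[ch]
        ll += -((x - kx) ** 2 + (y - ky) ** 2) / (2 * sigma ** 2)
    return ll

def decode(taps):
    cands = {w: p for w, p in VOCAB.items() if len(w) == len(taps)}
    return max(cands, key=lambda w: log_likelihood(taps, w) + math.log(cands[w]))

print(decode([(2.4, 1.9), (0.6, 1.1), (4.2, 0.2)]))  # -> "cat"
```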
Long-term culture-initiating cells (LTC-IC) produced from CD34+ cord blood cells with limiting dilution method.
OBJECTIVE Even though much progress has been made in defining primitive hematologic cell phenotypes by using flow cytometry and clonogenic methods, a direct method for the study of marrow-repopulating cells still remains elusive. Long-term culture-initiating cells (LTC-IC) are known as the most primitive human hematopoietic cells detectable by in vitro functional assays. METHODS In this study, the LTC-IC limiting dilution assay was used to evaluate the repopulating potential of cord blood stem cells. RESULTS CD34 selections from cord blood were completed successfully with magnetic beads (73.64% ± 9.12%). The average incidence of week-5 LTC-IC was 1:1966 CD34+ cells (range 1261-2906). CONCLUSION We found that the number of LTC-IC obtained from CD34+ cord blood cells was relatively low compared to previously reported bone marrow CD34+ cells. This may be due to the lack of certain transcription and growth factors, along with some cytokines and chemokines released by accessory cells, which are necessary for proliferation of cord blood progenitor/stem cells; this presents an area of interest for further studies.
Utilizing Neural Networks and Linguistic Metadata for Early Detection of Depression Indications in Text Sequences
Depression is ranked as the largest contributor to global disability and is also a major reason for suicide. Still, many individuals suffering from forms of depression are not treated for various reasons. Previous studies have shown that depression also has an effect on language usage and that many depressed individuals use social media platforms or the internet in general to get information or discuss their problems. This paper addresses the early detection of depression using machine learning models based on messages on a social platform. In particular, a convolutional neural network based on different word embeddings is evaluated and compared to a classification based on user-level linguistic metadata. An ensemble of both approaches is shown to achieve state-of-the-art results in a current early detection task. Furthermore, the currently popular ERDE score as metric for early detection systems is examined in detail and its drawbacks in the context of shared tasks are illustrated. A slightly modified metric is proposed and compared to the original score. Finally, a new word embedding was trained on a large corpus of the same domain as the described task and is evaluated as well.
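For reference, here is a sketch of the ERDE (Early Risk Detection Error) score discussed above, following the commonly cited formulation in which late correct alarms are increasingly penalized by a sigmoid latency cost lc_o(k) = 1 - 1/(1 + exp(k - o)). The cost constants below are typical choices stated as assumptions, not the exact values of the shared task.

```python
import math

def erde(decision, truth, k, o, c_fp=0.1, c_fn=1.0, c_tp=1.0):
    """Error for one subject; decision made after reading k writings."""
    if decision and not truth:
        return c_fp                                    # false positive
    if not decision and truth:
        return c_fn                                    # false negative
    if decision and truth:
        return c_tp * (1 - 1 / (1 + math.exp(k - o)))  # delayed true positive
    return 0.0                                         # true negative

# Earlier detection of a true case costs less than a late one:
print(erde(True, True, k=5, o=50), erde(True, True, k=100, o=50))
```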
Faculty Development for Simulation Programs: Five Issues for the Future of Debriefing Training.
STATEMENT Debriefing is widely recognized as a critically important element of simulation-based education. Simulation educators obtain and/or seek debriefing training from various sources, including workshops at conferences, simulation educator courses, formal fellowships in debriefings, or through advanced degrees. Although there are many options available for debriefing training, little is known about how faculty development opportunities should be structured to maintain and enhance the quality of debriefing within simulation programs. In this article, we discuss 5 key issues to help shape the future of debriefing training for simulation educators, specifically the following: (1) Are we teaching the appropriate debriefing methods? (2) Are we using the appropriate methods to teach debriefing skills? (3) How can we best assess debriefing effectiveness? (4) How can peer feedback of debriefing be used to improve debriefing quality within programs? (5) How can we individualize debriefing training opportunities to the learning needs of our educators?
Abortion of acute ST segment elevation myocardial infarction after reperfusion: incidence, patients' characteristics, and prognosis.
OBJECTIVES To study the incidence and patient characteristics of aborted myocardial infarction in both prehospital and in-hospital thrombolysis. DESIGN Retrospective, controlled, observational study. SETTING Two cities in the Netherlands, one with prehospital thrombolysis, one with in-hospital treatment. PATIENTS 475 patients with suspected acute ST elevation myocardial infarction treated before admission to hospital, 269 patients treated in hospital. MAIN OUTCOME MEASURES Aborted myocardial infarction, defined as the combination of subsiding of cumulative ST segment elevation and depression to < 50% of the level at presentation, together with a rise of creatine kinase of less than twice the upper normal concentration. A stepwise regression analysis was used to test independent predictors for aborted myocardial infarction. RESULTS After correction for "unjustified" thrombolysis, 17.1% of the 468 prehospital treated patients and 4.5% of the 264 in-hospital treated patients fulfilled the criteria for aborted myocardial infarction. There was no difference in age, sex, risk factors, haemodynamic status, and infarct location of aborted myocardial infarction compared with established myocardial infarction. Time to treatment was shorter in the patients with aborted myocardial infarction (86 versus 123 minutes, p = 0.05). A shorter time to treatment, lower ST elevation at presentation, and higher incidence of preinfarction angina were independent predictors for aborted myocardial infarction. Aborted myocardial infarction had a 12 month mortality of 2.2%, significantly less than the 11.6% of established myocardial infarction. CONCLUSION Prehospital thrombolysis is associated with a fourfold increase of aborted myocardial infarction compared with in-hospital treatment. A shorter time to treatment, a lower ST elevation, and a higher incidence of preinfarction angina were predictors of aborted myocardial infarction.
A Comparison of the Impact of Plant – Based and Meat – Based Diets On Overall General Well-Being
The intention of this project is to explore the correlation between dietary habits and reports of overall well-being. Specifically, this study will consider the impact of meat-eating versus non meat eating (vegetarian/vegan) diets. Dietary choices are also considered in comparison to general lifestyle choices. Questionnaires were distributed to students on the Boca Raton campus of Palm Beach State College. The results of this survey indicated that vegetarians believe that dietary choices have a greater impact on well-being than they actually do. In addition, the subjective well-being of vegetarians compared to that of meat eaters showed inconsistent results. This may be attributable to the fact that some vegetarians choose this lifestyle for ethical reasons such as guilt over the slaughter of animals, leading to an increased feeling of well-being. On the other hand, a higher percentage of vegetarians report regular marijuana use, which could lead to depression caused by a chemical imbalance in the brain. However, because most participants in the study were meat eaters, fewer vegetarians were included in the sample. Further exploration with a larger sample base is needed to explain the inconsistent results. Introduction “Food consumption is an everyday activity, one that is crucial for survival and sense of well-being. Many of our social engagements revolve around rituals associated with eating” (Marcus, 2008). What we consciously and unconsciously consume has a profound impact on our body chemistry and affects how we function in the world. The purpose of this project is to increase understanding about the impact of plant-based and meat-based diets on overall well-being. In addition, this report considers the role of secondary factors related to diet and their impact on overall well-being. The survey conducted as part of this project was designed to determine whether or not vegetarians have a greater perceived sense of well-being than people who regularly eat meat. Several types of vegetarian diets exist, including vegan (no red meat, fish, poultry, dairy, and eggs), lacto-ovo (consume milk, eggs, or both but no red meat, fish, or poultry), pescatarian (consume fish, milk, and eggs but no red meat and poultry), semi-vegetarian (eat fish, poultry and other meats less than once a week) (Fraser, 2009), fruitarian (raw vegan diets based on fruits) and raw-foodist (plant-based diet characterized by a high consumption of uncooked and unprocessed foods, i.e. fruits, vegetables, nuts and seeds) (Craig & Mangels, 2009). Even within these dietary patterns, considerable variations may exist in the extent to which animal products are excluded. While some researchers suggest that a vegetarian diet can lower the risk for many diseases (Fraser, 2009), others warn of “nutrient deficiencies common amongst vegetarians and particularly vegans” (Sabaté, 2003). Vegetarian diets have been described as being deficient in several nutrients, including protein, iron, zinc, calcium, vitamins B12 and A, n-3 fatty acids, and iodine. Numerous studies have demonstrated that the observed deficiencies are usually due to poor meal planning (Leitzmann, 2005). However, according to the American Dietetic Association (2009), a well-balanced vegetarian diet is suitable for all stages of life, from childhood to the elderly, as well as pregnant women and athletes.
A vegetarian diet that includes regular consumption of fruits and vegetables is associated with reducing the risk of many diseases, including cardiovascular disease, hypertension, type-2 diabetes, cancer, osteoporosis, renal disease, dementia, diverticular disease, gallstones, rheumatoid arthritis, stroke, cataracts, Alzheimer disease, as well as a general decline in functions associated with aging (Liu, 2003; Leitzmann, 2005). What this research demonstrates is that there are numerous factors to consider when examining the risk for disease or deficiencies amongst vegetarians, such as how meals are planned and whether there is an adequate intake of fruits and vegetables. At the same time, research on meat-based diets demonstrates that a meat-based diet can also be deficient in certain nutrients, but such diets are more commonly identified as a risk factor for disease, which can result in having a negative effect on one’s well-being (Cousens, 2010). A meat-based diet is one-dimensional, meaning it provides exclusively one type of protein. “As it is used in standard nutritional and agricultural writings, the term meat is actually a misnomer. Meat’s correct definition is muscles of animals, and is nothing but wet protein tissues” (Smil, 2002). Looking at meat in this manner, and excluding fish (also a source of protein but providing monounsaturated fatty acids which confer health benefits) from the definition of meat, leads to the conclusion that all meat protein is basically the same. This is an idea that some people debate. However, assuming that all meat proteins are the same, one can conclude that consuming a primarily meat-based diet, which is high in saturated fats, can lead to an array of health issues such as cardiovascular disease, diabetes mellitus, and some cancers (Walker, Rhubart, Pamela, Shawn, Kelling & Lawrence, 2005). These issues are particularly prevalent in the US, where people typically consume diets that are high in meat proteins and saturated fat yet low in fruits, vegetables and whole grains (Walker et al., 2005), a pattern of eating that increases the risk of the aforementioned diseases. However, the impact of meat proteins is different in impoverished countries. For example, in many African countries where nutrient deficiencies are common, an increase in meat and dairy is likely to improve people’s nutritional outcomes and overall health (Walker et al., 2005). Well-being does not rely exclusively on diet but ultimately “what is good for a person” (Crisp, 2008). In general, well-being incorporates a holistic approach, focusing on multiple dimensions that affect quality of life, subsequently leading to a more balanced, healthier, and happier person. Dimensions of well-being are often presented graphically in the form of “wellbeing wheels” which are used to demonstrate the relationships between each dimension, with the premise being that for an individual to be considered “well,” he or she must actively strive to improve in each dimension (Washington State University, 2011). These dimensions include emotional, environmental, financial, intellectual, occupational, physical, social and spiritual aspects, all combining to create general health and wellness (Washington State University, 2011). These dimensions also play a role in the etiology of positive and negative emotional states. There is mounting evidence that positive emotions co-occur with negative emotions, especially during intensely stressful periods of life (Sprangers et al., 2010). 
Creating a balance between these dimensions of well-being may direct a person to make choices that affect his or her diet either positively or negatively. A study done in Ireland indicates that there is broad-scale support for the impact of diet and lifestyle on mental health. At the same time, the researchers found that people had a poor understanding of food labeling and nutritional claims. The study showed that residents of Northern Ireland, where there is a high rate of reported vegetarians, are much more likely to report positive mental outlooks than those in the Republic of Ireland, where there appear to be fewer vegetarians (National Food Survey of Ireland, 2005). Examining the correlation between diet and well-being further, it has been shown that foods high in fat have the power to modify motivation and reward systems in the brain. It has been found that certain neuropeptides are activated during activities involving reward and pleasure. Similarly, use of cocaine and nicotine also activates these same reward centers, even with only the expectation of consumption of fatty foods (Choi, Davis, Fitzgerald, & Benoit, 2009). It has also been found that binge eating and overconsumption of fat and sugar lead to an increased number of opioid receptors in the part of the brain that modulates food intake. In other words, eating fatty and sugary foods triggers the same reward mechanisms in the brain as cocaine and nicotine (Bello et al., 2009). As a result, a person may tend to over-eat fatty and sugary foods, which could lead to a variety of health issues. What we consume can have a significant effect on our mood, which is another dimension of well-being. “Your brain is a biochemical thinking machine, and all of the biochemical building blocks of your brain eventually are affected by what you eat. Even the genes you inherited from your parents are influenced by what you put in your mouth” (Challam, 2007). It has been found that loneliness can have a powerful effect on mood, shyness, anxiety, and self-esteem. Moreover, popular concepts such as committing acts of kindness, expressing gratitude or forgiveness, and thoughtful self-reflection can produce an increase in levels of happiness (Sprangers et al., 2010). The food we choose to consume often paves the way for our mood and behavior (Challam, 2007). It has long been known that food alters our mood and that too much meat can lead to health problems. “It takes only 3 ounces of meat a day to maximize all of its nutritional benefits. Consumption of any more and the increased intake of saturated fat, protein, and cholesterol will compromise your health and increase your risk of developing degenerative diseases” (Somer, 1995). By comparison, a “vegetarian diet is not likely associated with poor mood states or depression” (Beezhold, Daigle, & Johnston, 2010). It has been shown that a vegetarian diet can prevent many health problems, which in turn can impact our mood. To illustrate, persons diagnosed with heart disease who implement a vegetarian diet into their lifestyle can reap the positive benefits.
Mobile Phone Sensing Systems: A Survey
Mobile phone sensing is an emerging area of interest for researchers as smartphones are becoming the core communication device in people's everyday lives. Sensor-enabled mobile phones, or smartphones, are poised to be at the center of the next revolution in social networks, green applications, global environmental monitoring, personal and community healthcare, sensor-augmented gaming, virtual reality, and smart transportation systems. More and more organizations and people are discovering how mobile phones can be used for social impact, including how to use mobile technology for environmental protection and sensing, and how to leverage just-in-time information to make our movements and actions more environmentally friendly. In this paper, we comprehensively describe the systems that use smartphones and mobile phone sensors for the public good and for better human-phone interaction.
Doppelgänger Finder: Taking Stylometry to the Underground
Stylometry is a method for identifying the authors of anonymous texts by analyzing their writing style. While stylometric methods have produced impressive results in previous experiments, we wanted to explore their performance on a challenging dataset of particular interest to the security research community. Analysis of underground forums can provide key information about who controls a given bot network or sells a service, and about the size and scope of the cybercrime underworld. Previous analyses have been accomplished primarily through analysis of limited structured metadata and painstaking manual analysis. However, the key challenge is to automate this process, since this labor-intensive manual approach clearly does not scale. We consider two scenarios. The first involves text written by an unknown cybercriminal and a set of potential suspects. This is a standard supervised stylometry problem, made more difficult by multilingual forums that mix l33t-speak conversations with data dumps. In the second scenario, a forum is fed into an analysis engine that outputs possible doppelgängers, i.e., users with multiple accounts. While other researchers have explored this problem, we propose a method that produces good results on actual separate accounts, as opposed to data sets created by artificially splitting authors into multiple identities. For scenario 1, we achieve 77% to 84% accuracy on private messages. For scenario 2, we achieve 94% recall with 90% precision on blogs, and 85.18% precision with 82.14% recall for underground forum users. We demonstrate the utility of our approach with a case study that applies our technique to the Carders forum, using manual analysis to validate the results and enabling the discovery of previously undetected doppelgänger accounts.
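To make the supervised attribution setting of scenario 1 concrete, the sketch below trains a character-n-gram classifier over texts with known authors and ranks suspects for an unknown message. This is a minimal, hypothetical baseline in the spirit of standard stylometry, not the authors' actual feature set or classifier.

```python
# Minimal supervised stylometry baseline (scenario 1). Character n-grams
# plus logistic regression are a common stand-in feature family; the
# paper's real pipeline is richer than this.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_attributor(texts, author_labels):
    """Fit a character-n-gram classifier over texts with known authors."""
    model = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 4), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, author_labels)
    return model

# Usage (hypothetical data): rank suspects for an unknown message.
# model = train_attributor(known_texts, known_authors)
# suspect_probs = model.predict_proba([unknown_message])[0]
```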
Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
Problems at the intersection of vision and language are of significant importance, both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset (Antol et al., ICCV 2015) by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at http://visualqa.org/ as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-the-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which, in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation. Specifically, it identifies an image that is similar to the original image but that the model believes has a different answer to the same question. This can help in building trust for machines among their users.
A visual analytics loop for supporting model development
Threats in cybersecurity come in a variety of forms, and combating such threats involves handling a huge amount of data from different sources. It is absolutely necessary to use algorithmic models to defend against these threats. However, all models are sensitive to deviation from the original contexts in which the models were developed. Hence, it is not really an overstatement to say that 'all models are wrong'. In this paper, we propose a visual analytics loop for supporting the continuous development of models during their deployment. We describe the roles of three types of operators (monitors, analysts and modelers), present the visualization techniques used at different stages of model development, and demonstrate the utility of this approach in conjunction with a prototype software system for corporate insider threat detection. In many ways, our environment facilitates an agile approach to the development and deployment of models in cybersecurity.
Eyeball model-based iris center localization for visible image-based eye-gaze tracking systems
In general, visible image-based eye-gaze tracking systems are heavily dependent on the accuracy of iris center (IC) localization. In this paper, we propose a novel IC localization method based on the fact that the elliptical shape (ES) of the iris varies according to the rotation of the eyeball. We use a spherical model of the human eyeball and estimate the radius of the iris from a frontal, upright view image of the eye. By projecting the eyeball rotated in pitch and yaw onto the 2-D plane, a certain number of ESs of the iris and their corresponding IC locations are generated and registered as a database (DB). Finally, the location of the IC is detected by matching the ES of the iris in the input eye image against the ES candidates in the DB. Moreover, combined with facial-landmark-based image rectification, the proposed IC localization method can successfully operate under natural head movement. Experimental results in terms of IC localization and gaze tracking show that the proposed method achieves superior performance compared with conventional methods.
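A geometric toy version of the DB-construction and matching steps might look as follows, assuming an orthographic projection of a spherical eyeball; the paper's exact projection model, parameterization, and matching cost are not reproduced here.

```python
# Toy sketch of the ES database idea: enumerate (pitch, yaw) rotations,
# store the projected iris ellipse and iris-center location, then match
# an observed ellipse against the database. Orthographic projection of
# a spherical eyeball is an assumed simplification.
import numpy as np

def build_es_database(r_iris, R_eye, step_deg=2.0, max_deg=40.0):
    """Rows: (major axis, minor axis, center x, center y, pitch, yaw)."""
    angles = np.deg2rad(np.arange(-max_deg, max_deg + 1e-9, step_deg))
    db = []
    for pitch in angles:
        for yaw in angles:
            # Projected iris-center location on the image plane.
            cx = R_eye * np.sin(yaw) * np.cos(pitch)
            cy = R_eye * np.sin(pitch)
            # The iris circle foreshortens by the cosine of the total
            # rotation away from the frontal view.
            cos_total = np.cos(pitch) * np.cos(yaw)
            db.append((r_iris, r_iris * cos_total, cx, cy, pitch, yaw))
    return np.array(db)

def match_ic(db, major, minor, ecx, ecy):
    """Return the DB row whose ellipse best matches the observed one."""
    d = ((db[:, 0] - major) ** 2 + (db[:, 1] - minor) ** 2
         + (db[:, 2] - ecx) ** 2 + (db[:, 3] - ecy) ** 2)
    return db[np.argmin(d)]
```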
Solving the 0-1 Knapsack Problem with Genetic Algorithms
This paper describes a research project on using Genetic Algorithms (GAs) to solve the 0-1 Knapsack Problem (KP). The Knapsack Problem is an example of a combinatorial optimization problem, which seeks to maximize the benefit of objects in a knapsack without exceeding its capacity. The paper contains three sections: a brief description of the basic idea and elements of GAs, a definition of the Knapsack Problem, and an implementation of the 0-1 Knapsack Problem using GAs. The main focus of the paper is on the implementation of the algorithm for solving the problem. In the program, we implemented two selection functions, roulette-wheel and group selection. The results from both differed depending on whether we used elitism. Elitism significantly improved the performance of the roulette-wheel function. Moreover, we tested the program with different crossover ratios and with single and double crossover points, but the results did not differ substantially.
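As a concrete illustration of the setup described above (roulette-wheel selection, single-point crossover, bit-flip mutation, optional elitism), here is a compact sketch; the parameter values are illustrative, not the paper's.

```python
# GA for the 0-1 knapsack: individuals are bit vectors, infeasible
# solutions score zero, and the best individual optionally survives
# unchanged (elitism).
import random

def ga_knapsack(values, weights, capacity, pop_size=100, gens=200,
                cx_rate=0.8, mut_rate=0.01, elitism=True):
    n = len(values)

    def fitness(ind):
        w = sum(wi for wi, bit in zip(weights, ind) if bit)
        v = sum(vi for vi, bit in zip(values, ind) if bit)
        return v if w <= capacity else 0        # overweight -> worthless

    def roulette(pop, fits):
        total = sum(fits) or 1                  # guard against all-zero
        pick, acc = random.uniform(0, total), 0.0
        for ind, f in zip(pop, fits):
            acc += f
            if acc >= pick:
                return ind
        return pop[-1]

    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        fits = [fitness(ind) for ind in pop]
        nxt = [max(pop, key=fitness)[:]] if elitism else []
        while len(nxt) < pop_size:
            p1, p2 = roulette(pop, fits), roulette(pop, fits)
            if random.random() < cx_rate:       # single-point crossover
                cut = random.randrange(1, n)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Bit-flip mutation via XOR with a Bernoulli draw per gene.
            child = [b ^ (random.random() < mut_rate) for b in child]
            nxt.append(child)
        pop = nxt
    best = max(pop, key=fitness)
    return best, fitness(best)
```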
Natural Language Semantics Using Probabilistic Logic
With better natural language semantic representations, computers can support more applications more efficiently as a result of a better understanding of natural text. However, no single semantic representation at this time fulfills all requirements needed for a satisfactory representation. Logic-based representations like first-order logic capture many linguistic phenomena using logical constructs, and they come with standardized inference mechanisms, but standard first-order logic fails to capture the "graded" aspect of meaning in languages. Distributional models use contextual similarity to predict the "graded" semantic similarity of words and phrases, but they do not adequately capture logical structure. In addition, there are a few recent attempts to combine both representations, either on the logic side (still not a graded representation) or on the distributional side (not full logic). We propose using probabilistic logic to represent natural language semantics, combining the expressivity and automated inference of logic with the gradedness of distributional representations. We evaluate this semantic representation on two tasks, Recognizing Textual Entailment (RTE) and Semantic Textual Similarity (STS). Doing RTE and STS better is an indication of better semantic understanding. Our system has three main components: 1. Parsing and Task Representation, 2. Knowledge Base Construction, and 3. Inference. The input natural sentences of the RTE/STS task are mapped to logical form using Boxer, a rule-based system built on top of a CCG parser, and are then used to formulate the RTE/STS problem in probabilistic logic. A knowledge base is then represented as weighted inference rules collected from different sources, such as WordNet and on-the-fly lexical rules from distributional semantics. An advantage of using probabilistic logic is that more rules from more resources can be added easily by mapping them to logical rules and weighting them appropriately. The last component is inference, where we solve the probabilistic logic inference problem using an appropriate probabilistic logic tool such as Markov Logic Networks (MLNs) or Probabilistic Soft Logic (PSL). We show how to solve the inference problems in MLNs efficiently for RTE using a modified closed-world assumption and a new inference algorithm, and how to adapt MLNs and PSL for STS by relaxing conjunctions. Experiments show that our semantic representation can handle RTE and STS reasonably well. For future work, our short-term goals are: 1. better RTE task representation and finite domain handling, 2. adding more inference rules, precompiled and on-the-fly, 3. generalizing the modified closed-world assumption, 4. enhancing our inference algorithm for MLNs, and 5. adding a weight-learning step to better adapt the weights. In the longer term, we would like to apply our semantic representation to the question answering task, support generalized quantifiers, contextualize the WordNet rules we use, apply our semantic representation to languages other than English, and implement a probabilistic logic Inference Inspector that can visualize the proof structure.
An oscillatory hierarchy controlling neuronal excitability and stimulus processing in the auditory cortex.
EEG oscillations are hypothesized to reflect cyclical variations in neuronal excitability, with particular frequency bands reflecting differing spatial scales of brain operation. However, despite decades of clinical and scientific investigation, there is no unifying theory of EEG organization, and the role of ongoing activity in sensory processing remains controversial. This study analyzed laminar profiles of synaptic activity (current source density, CSD) and multiunit activity (MUA), both spontaneous and stimulus-driven, in the primary auditory cortex of awake macaque monkeys. Our results reveal that the EEG is hierarchically organized: delta (1-4 Hz) phase modulates theta (4-10 Hz) amplitude, and theta phase modulates gamma (30-50 Hz) amplitude. This oscillatory hierarchy controls baseline excitability and thus stimulus-related responses in a neuronal ensemble. We propose that the hierarchical organization of ambient oscillatory activity allows auditory cortex to structure its temporal activity pattern so as to optimize the processing of rhythmic inputs.
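One standard way to quantify such phase-amplitude relationships (though not necessarily the authors' exact analysis) is a mean-vector-length coupling estimate, sketched below for delta-phase to gamma-amplitude coupling.

```python
# Mean-vector-length estimate of phase-amplitude coupling: how strongly
# the fast band's amplitude envelope is modulated by the slow band's
# phase. The bands and filter order are illustrative choices.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def modulation_index(x, fs, phase_band=(1, 4), amp_band=(30, 50)):
    def bandpass(sig, lo, hi):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, sig)

    phase = np.angle(hilbert(bandpass(x, *phase_band)))  # slow-band phase
    amp = np.abs(hilbert(bandpass(x, *amp_band)))        # fast-band envelope
    # Length of the amplitude-weighted mean phase vector, normalized by
    # mean amplitude; near zero when amplitude ignores phase.
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# e.g. modulation_index(lfp, fs=1000) quantifies delta->gamma coupling;
# swapping the bands probes the theta->gamma level of the hierarchy.
```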
A Deep-Reinforcement Learning Approach for Software-Defined Networking Routing Optimization
In this paper we design and evaluate a Deep-Reinforcement Learning agent that optimizes routing. Our agent adapts automatically to current traffic conditions and proposes tailored configurations that attempt to minimize the network delay. Experiments show very promising performance. Moreover, this approach provides important operational advantages with respect to traditional optimization algorithms.
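The paper's agent is a deep network trained on traffic conditions; the toy sketch below keeps only the reinforcement-learning loop, using a tabular epsilon-greedy learner that picks among a few candidate routes per traffic state to minimize a measured delay. The names and the one-step bandit formulation are illustrative simplifications, not the paper's architecture.

```python
# Toy RL routing loop: per observed traffic state, choose one of
# n_paths candidate routes; the reward is the negative measured delay.
import random
from collections import defaultdict

def q_routing(delay_fn, states, n_paths, episodes=5000,
              alpha=0.1, eps=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        s = random.choice(states)              # observed traffic state
        if random.random() < eps:              # epsilon-greedy exploration
            a = random.randrange(n_paths)
        else:
            a = max(range(n_paths), key=lambda p: Q[(s, p)])
        r = -delay_fn(s, a)                    # minimizing delay
        Q[(s, a)] += alpha * (r - Q[(s, a)])   # incremental value update
    return Q

# delay_fn would wrap a network simulator or measurement probe; the deep
# variant replaces the Q table with a neural network over traffic matrices.
```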
Sign language recognition: Generalising to more complex corpora
The aim of this thesis is to find new approaches to Sign Language Recognition (SLR) which are suited to working with the limited corpora currently available. Data available for SLR is of limited quality; low resolution and frame rates make the task of recognition even more complex. The content is rarely natural, concentrating on isolated signs and filmed under laboratory conditions. In addition, the amount of accurately labelled data is minimal. To this end, several contributions are made: tracking the hands is eschewed in favour of detection-based techniques that are more robust to noise; classifiers for both whole signs and for linguistically motivated sign sub-units are investigated, to make the best use of limited data sets. Finally, an algorithm is proposed to learn signs from the inset signers on TV, with the aid of the accompanying subtitles, thus increasing the corpus of data available. Tracking fast-moving hands under laboratory conditions is a complex task; with real-world data the challenge is even greater. When using tracked data as a base for SLR, the errors in the tracking are compounded at the classification stage. Proposed instead is a novel sign detection method, which views space-time as a 3D volume and the sign within it as an object to be located. Features are combined into strong classifiers using a novel boosting implementation designed to create optimal classifiers over sparse datasets. Using boosted volumetric features on a robust frame-differenced input, average classification rates reach 71% on seen signers and 66% on a mixture of seen and unseen signers, with individual sign classification rates reaching 95%. Using a classifier-per-sign approach to SLR means that data sets need to contain numerous examples of the signs to be learnt. Instead, this thesis proposes learnt classifiers to detect the common sub-units of sign. The responses of these classifiers can then be combined for recognition at the sign level. This approach requires fewer examples per sign to be learnt, since the sub-unit detectors are trained on data from multiple signs. It is also faster at detection time, since there are fewer classifiers to consult, the number of these being limited by the linguistics of sign and not by the number of signs being detected. For this method, appearance-based boosted classifiers are introduced to distinguish the sub-units of sign. Results show that when combined with temporal models, these novel sub-unit classifiers can outperform similar classifiers learnt on tracked results. As an added side effect, since the sub-units are linguistically derived they can be used independently to help linguistic annotators. Since sign language data sets are costly to collect and annotate, there are not many publicly available. Those which are tend to be constrained in content and are often taken under laboratory conditions. However, in the UK, the British Broadcasting Corporation (BBC) regularly produces programmes with an inset signer and corresponding subtitles. This provides a natural signer, covering a wide range of topics, in real-world conditions. While it has no ground truth, it is proposed that the translated subtitles can provide weak labels for learning signs. The final contributions of this thesis lead to an innovative approach to learn signs from these co-occurring streams of data. Using a unique, temporally constrained version of the Apriori mining algorithm, similar sections of video are identified as possible sign locations.
These estimates are improved upon by introducing the concept of contextual negatives, removing contextually similar "noise". Combined with an iterative honing process to enhance the localisation of the target sign, 23 word/sign combinations are learnt from a 30-minute news broadcast, providing a novel method for automatic data set creation.
Digital image processing using MATLAB
All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, without permission in writing from the publisher. The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher shall not be liable in any event for incidental or consequential damages with, or arising out of, the furnishing, performance, or use of these programs. 1 Introduction. Preview: Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. In this chapter we outline how a theoretical base and state-of-the-art software can be integrated into a prototyping environment whose objective is to provide a set of well-supported tools for the solution of a broad class of problems in digital image processing. Background: An important characteristic underlying the design of image processing systems is the significant level of testing and experimentation that normally is required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches and quickly prototype candidate solutions generally plays a major role in reducing the cost and time required to arrive at a viable system implementation. Little has been written in the way of instructional material to bridge the gap between theory and application in a well-supported software environment. The main objective of this book is to integrate under one cover a broad base of theoretical concepts with the knowledge required to implement those concepts using state-of-the-art image processing software tools. The theoretical underpinnings of the material in the following chapters are mainly from the leading textbook in the field: Digital Image Processing, by Gonzalez and Woods, published by Prentice Hall. The software code and supporting tools are based on the leading software package in the field: The MATLAB Image Processing Toolbox, from The MathWorks, Inc. (see Section 1.3). (In the following discussion and in subsequent chapters we sometimes refer to Digital Image Processing by Gonzalez and Woods as "the Gonzalez-Woods book," and to the Image Processing Toolbox as "IPT" or simply as "the toolbox.") The material in the present book shares the same design, notation, and style of presentation …
ZooBP: Belief Propagation for Heterogeneous Networks
Given a heterogeneous network, with nodes of different types – e.g., products, users and sellers from an online recommendation site like Amazon – and labels for a few nodes ('honest', 'suspicious', etc.), can we find a closed formula for Belief Propagation (BP), exact or approximate? Can we say whether it will converge? BP, traditionally an inference algorithm for graphical models, exploits so-called "network effects" to perform graph classification tasks when labels for a subset of nodes are provided, and it has been successful in numerous settings like fraudulent entity detection in online retailers and classification in social networks. However, it does not have a closed form, nor does it provide convergence guarantees in general. We propose ZooBP, a method to perform fast BP on undirected heterogeneous graphs with provable convergence guarantees. ZooBP has the following advantages: (1) Generality: it works on heterogeneous graphs with multiple types of nodes and edges; (2) Closed-form solution: ZooBP gives a closed-form solution as well as convergence guarantees; (3) Scalability: ZooBP is linear in the graph size and is up to 600× faster than BP, running on graphs with 3.3 million edges in a few seconds; (4) Effectiveness: applied on real data (a Flipkart e-commerce network with users, products and sellers), ZooBP identifies fraudulent users with a near-perfect precision of 92.3% over the top 300 results.
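The flavor of a closed-form, linearized BP can be conveyed by the following sketch for a single node and edge type: residual beliefs are iterated to the fixed point of a linear system, with a small epsilon keeping the iteration convergent. ZooBP's actual formulation handles multiple node and edge types and derives explicit convergence conditions; the matrix shapes and epsilon here are assumptions for illustration.

```python
# Linearized residual belief propagation, single node/edge type.
# A: n-x-n adjacency (dense ndarray or scipy.sparse matrix),
# E: n-x-k prior residuals (zero rows for unlabeled nodes),
# H: k-x-k residual compatibility matrix.
import numpy as np

def linearized_bp(A, E, H, eps=0.01, iters=50):
    """Iterate B <- E + eps * A @ B @ H to a fixed point. A small eps
    keeps the update's spectral radius below 1, so it converges."""
    B = E.copy()
    for _ in range(iters):
        B_new = E + eps * (A @ B) @ H
        if np.abs(B_new - B).max() < 1e-9:
            break
        B = B_new
    return B   # final residual beliefs; argmax per row gives the label
```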
F-Band Bidirectional Amplifier Using 75-nm InP HEMTs
We have developed an F-band (90 to 140 GHz) bidirectional amplifier MMIC using a 75-nm InP HEMT technology for short-range millimeter-wave multi-gigabit communication systems. Inherently symmetric common-gate transistors and parallel circuits consisting of an inductor and a switch realize bidirectional operation with a wide bandwidth of over 50 GHz. Small-signal gains of 12-15 dB and 9-12 dB were achieved in the forward and reverse directions, respectively. Fractional bandwidths of the developed bidirectional amplifier were 39% for the forward direction and 32% for the reverse direction, almost twice as large as those of conventional bidirectional amplifiers. The power consumption of the bidirectional amplifier was 15 mW under a 2.4-V supply. The chip measures 0.70 × 0.65 mm. The simulated NF is lower than 5 dB, and Psat is larger than 5 dBm. The use of this bidirectional amplifier enables miniaturization of multi-gigabit communication systems and eliminates signal switching loss.
Process and product certification arguments: getting the balance right
Many current safety certification standards are process-based, i.e. they prescribe a set of development techniques and methods. This is perhaps best exemplified by the use of Safety Integrity Levels (SILs), e.g. as defined by IEC 61508 and UK Defence Standard 00-55. SILs are defined according to the level of the risk posed by a system, and hence prescribe the tools, techniques and methods that should be adopted by the development and assessment lifecycle. Product-based certification relies on the generation and assurance of product-specific evidence that meets safety requirements derived from hazard analysis. This evidence can be used as the argument basis in a safety case. However, uncertainty about the provenance of evidence in such a safety case can undermine confidence. To address this problem, we argue that process arguments remain an essential element of any safety case. However, unlike the sweeping process-based integrity arguments of the past, we suggest instead that highly directed process arguments should be linked to the items of evidence used in the product case. Such arguments can address issues of tool integrity, competency of personnel, and configuration management. Much as deductive software safety arguments are desirable, there will always be inductive elements. Process-based arguments of the type we suggest partly address this problem by tackling the otherwise implicit assumptions underlying certification evidence.
Investigation and analysis of 102 mushroom poisoning cases in Southern China from 1994 to 2012
Mushroom poisoning is the main cause of mortality in food poisoning incidents in China. Although some responsible mushroom species have been identified, some have been identified inaccurately. This study investigated and analyzed 102 mushroom poisoning cases in southern China from 1994 to 2012, which involved 852 patients and 183 deaths, with an overall mortality of 21.48 %. The results showed that 85.3 % of poisoning cases occurred from June to September, and involved 16 species of poisonous mushroom: Amanita species (A. fuliginea, A. exitialis, A. subjunquillea var. alba, A. cf. pseudoporphyria, A. kotohiraensis, A. neoovoidea, A. gymnopus), Galerina sulciceps, Psilocybe samuiensis, Russula subnigricans, R. senecis, R. japonica, Chlorophyllum molybdites, Paxillus involutus, Leucocoprinus cepaestipes and Pulveroboletus ravenelii. Six species (A. subjunquillea var. alba, A. cf. pseudoporphyria, A. gymnopus, R. japonica, Psilocybe samuiensis and Paxillus involutus) are reported for the first time in poisoning reports from China. Psilocybe samuiensis is a newly recorded species in China. The genus Amanita was responsible for 70.49 % of fatalities; the main lethal species were A. fuliginea and A. exitialis. Russula subnigricans caused 24.59 % of fatalities, and five species showed mortality >20 % (A. fuliginea, A. exitialis, A. subjunquillea var. alba, R. subnigricans and Paxillus involutus). Mushroom poisoning symptoms were classified from among the reported clinical symptoms. Seven types of mushroom poisoning symptoms were identified for clinical diagnosis and treatment in China, including gastroenteritis, acute liver failure, acute renal failure, psychoneurological disorder, hemolysis, rhabdomyolysis and photosensitive dermatitis.
Stravinsky and the Russian traditions : a biography of the works through Mavra
During his spectacular career, Stravinsky underplayed his Russian past in favour of a European cosmopolitanism. Richard Taruskin has refused to take the composer at his word. In this landmark study, he defines Stravinsky's relationship to the musical and artistic traditions of his native land and gives us a dramatically new picture of one of the major figures in the history of music. Taruskin draws directly on newly accessible archives and on a wealth of Russian documents. In Volume One, he sets the historical scene: the St Petersburg musical press, the arts journals, and the writings of anthropologists, folklorists, philosophers, and poets. Volume Two addresses the masterpieces of Stravinsky's early maturity - Petrushka, The Rite of Spring, Les Noces. Taruskin investigates the composer's collaborations with Diaghilev to illuminate the relationship between folklore and modernity. He elucidates the Silver Age ideal of 'neonationalism' - the professional appropriation of motifs and style characteristics from folk art - and how Stravinsky realized this ideal in his music. Taruskin demonstrates how Stravinsky achieved his modernist technique by combining what was most characteristically Russian in his musical training with stylistic elements abstracted from Russian folklore. The stylistic synthesis thus achieved formed Stravinsky as a composer for life, whatever the aesthetic allegiances he later professed. Written with Taruskin's characteristic in-depth research and stylistic verve, this book will be mandatory reading for all those seriously interested in the life and work of Stravinsky.
MicroRNA-221 Induces Cell Survival and Cisplatin Resistance through PI3K/Akt Pathway in Human Osteosarcoma
BACKGROUND MicroRNAs are short regulatory RNAs that negatively modulate protein expression at a post-transcriptional and/or translational level and are deeply involved in the pathogenesis of several types of cancers. Specifically, microRNA-221 (miR-221) is overexpressed in many human cancers, wherein accumulating evidence indicates that it functions as an oncogene. However, the function of miR-221 in human osteosarcoma has not been totally elucidated. In the present study, the effects of miR-221 on osteosarcoma and the possible mechanism by which miR-221 affected the survival, apoptosis, and cisplatin resistance of osteosarcoma were investigated. METHODOLOGY/PRINCIPAL FINDINGS Real-time quantitative PCR analysis revealed that miR-221 was significantly upregulated in osteosarcoma cell lines compared with osteoblasts. Both human osteosarcoma cell lines SOSP-9607 and MG63 were transfected with miR-221 mimic or inhibitor to regulate miR-221 expression. The effects of miR-221 were then assessed by cell viability, cell cycle analysis, apoptosis assay, and cisplatin resistance assay. In both cells, upregulation of miR-221 induced cell survival and cisplatin resistance and reduced cell apoptosis. In addition, knockdown of miR-221 inhibited cell growth and cisplatin resistance and induced cell apoptosis. Potential target genes of miR-221 were predicted using bioinformatics. Moreover, luciferase reporter assay and western blot confirmed that PTEN was a direct target of miR-221. Furthermore, introduction of PTEN cDNA lacking 3'-UTR or PI3K inhibitor LY294002 abrogated miR-221-induced cisplatin resistance. Finally, both miR-221 and PTEN expression levels in osteosarcoma samples were examined by using real-time quantitative PCR and immunohistochemistry. High miR-221 expression level and inverse correlation between miR-221 and PTEN levels were revealed in osteosarcoma tissues. CONCLUSIONS/SIGNIFICANCE These results for the first time demonstrate that upregulation of miR-221 induces the malignant phenotype of human osteosarcoma whereas knockdown of miR-221 reverses this phenotype, suggesting that miR-221 could be a potential target for osteosarcoma treatment.
Youth appraisals of inter-parental conflict and genetic and environmental contributions to attention-deficit hyperactivity disorder: examination of GxE effects in a twin sample.
Identification of gene x environment interactions (GxE) for attention-deficit hyperactivity disorder (ADHD) is a crucial component to understanding the mechanisms underpinning the disorder, as prior work indicates large genetic influences and numerous environmental risk factors. Building on prior research, children's appraisals of self-blame were examined as a psychosocial moderator of latent etiological influences on ADHD via biometric twin models, which provide an omnibus test of GxE while managing the potential confound of gene-environment correlation. Participants were 246 twin pairs (total n = 492) ages 6-16 years. ADHD behaviors were assessed via mother report on the Child Behavior Checklist. To assess level of self-blame, each twin completed the Children's Perception of Inter-parental Conflict scale. Two biometric GxE models were fit to the data. The first model revealed a significant decrease in genetic effects and a significant increase in unique environmental influences on ADHD with increasing levels of self-blame. These results generally persisted even after controlling for confounding effects due to gene-environment correlation in the second model. Results suggest that appraisals of self-blame in relation to inter-parental conflict may act as a key moderator of etiological contributions to ADHD.
Picking Up My Tab: Understanding and Mitigating Synchronized Token Lifting and Spending in Mobile Payment
Mobile off-line payment enables purchase over the counter even in the absence of reliable network connections. Popular solutions proposed by leading payment service providers (e.g., Google, Amazon, Samsung, Apple) rely on direct communication between the payer's device and the POS system, through Near-Field Communication (NFC), Magnetic Secure Transaction (MST), audio, and QR codes. Although precautions have been taken to protect the payment transactions through these channels, their security implications are less understood, particularly in the presence of unique threats to this new e-commerce service. In this paper, we report a new type of over-the-counter payment fraud on mobile off-line payment, which exploits the designs of existing schemes that apparently fail to consider an adversary capable of actively affecting the payment process. Our attack, called Synchronized Token Lifting and Spending (STLS), demonstrates that an active attacker can sniff the payment token, halt the ongoing transaction through various means, and transmit the token quickly to a colluder to spend it in a different transaction while the token is still valid. Our research shows that such STLS attacks pose a realistic threat to popular off-line payment schemes, particularly those meant to be backwards compatible, like Samsung Pay and AliPay. To mitigate the newly discovered threats, we propose a new solution called POSAUTH. One fundamental cause of the STLS risk is the nature of the communication channels used by the vulnerable mobile off-line payment schemes, which are easy to sniff and jam and, more importantly, unable to support a secure mutual challenge-response protocol, since information can only be transmitted one way. POSAUTH addresses this issue by incorporating one unique ID of the current POS terminal into the generation of payment tokens, by requiring a quick scanning of a QR code printed on the POS terminal. When combined with a short validity period, POSAUTH can ensure that tokens generated for one transaction can only be used in that transaction.
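The core idea of binding a token to a terminal and a short validity window can be sketched with a keyed MAC, as below; this is a hypothetical construction illustrating the mechanism, not the actual POSAUTH protocol.

```python
# Sketch: bind a one-time payment token to the POS terminal ID (scanned
# from its printed QR code) and a short time window, so a lifted token
# cannot be replayed at another terminal or after the window closes.
import hmac
import hashlib
import time

def make_token(user_key: bytes, pos_id: str, window_s: int = 30) -> str:
    window = int(time.time()) // window_s
    msg = f"{pos_id}|{window}".encode()
    return hmac.new(user_key, msg, hashlib.sha256).hexdigest()

def verify_token(user_key: bytes, pos_id: str, token: str,
                 window_s: int = 30) -> bool:
    """Accept the current or immediately previous window to tolerate
    clock skew at the window boundary."""
    now = int(time.time()) // window_s
    for w in (now, now - 1):
        msg = f"{pos_id}|{w}".encode()
        if hmac.compare_digest(
                hmac.new(user_key, msg, hashlib.sha256).hexdigest(), token):
            return True
    return False
```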
Fire detection for early fire alarm based on optical flow video processing
We propose a method for early fire detection based on the Lucas-Kanade optical flow algorithm that is able to detect fire in real time in a video stream from a monocular camera. The method works both indoors and in open areas. It detects fire at the beginning of the burning process, enabling an earlier response than would be possible with a conventional fire detector. The method performs background subtraction to identify moving pixels in the scene, then filters for colors consistent with fire. Growth rate analysis is applied to the contours identified in the previous step. We further propose the use of a Lucas-Kanade optical flow pyramid in the regions identified in the growth analysis step, and analysis of the variance of the optical flow feature points' motion. In an experiment comparing the growth rate method alone with the combination of growth rate and optical flow analysis, we find that the combined algorithm improves the accuracy of fire detection for early fire alarming and reduces false alarms caused by fire-colored objects.
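One step of such a pipeline could be sketched as follows with OpenCV; the HSV color thresholds and parameter values are illustrative guesses, not the paper's tuned values.

```python
# One frame of a motion -> fire-color -> contour -> optical-flow pipeline.
# Flickering fire yields high variance in the tracked points' motion,
# unlike rigid fire-colored movers (e.g., a red car).
import cv2
import numpy as np

def fire_candidates(frame, backsub, prev_gray, prev_pts):
    moving = backsub.apply(frame)                     # background subtraction
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    color = cv2.inRange(hsv, (0, 120, 150), (35, 255, 255))  # rough flame hues
    mask = cv2.bitwise_and(moving, color)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion_var = 0.0
    if prev_pts is not None and len(prev_pts):
        # Pyramidal Lucas-Kanade flow on the previously tracked points.
        nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
        flow = (nxt - prev_pts)[st.flatten() == 1]
        if len(flow):
            motion_var = float(np.var(flow))          # flicker -> high variance
    pts = cv2.goodFeaturesToTrack(gray, 200, 0.01, 5, mask=mask)
    return contours, motion_var, gray, pts
```

A caller would create the subtractor once (e.g., with cv2.createBackgroundSubtractorMOG2()), thread gray and pts between frames, apply growth-rate analysis to the returned contours, and alarm when both growth and motion variance exceed thresholds.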
Real-Time Classification of Twitter Trends
The community of users participating in social media tends to share about common interests at the same time, giving rise to what are known as social trends. A social trend reflects the voice of a large number of users which, for some reason, becomes popular in a specific moment. Through social trends, users therefore suggest that some occurrence of wide interest is taking place and subsequently triggering the trend. In this work, we explore the types of triggers that spark trends on the microblogging site Twitter, and introduce a typology that includes the following four types: news, ongoing events, memes, and commemoratives. While previous research has analyzed the characteristics of trending topics over the long term, we look instead at the earliest tweets that produce the trend, with the aim of categorizing trends early on and providing a filtered subset of trends to end users. We propose, analyze, and experiment with a set of straightforward language-independent features that rely on the social spread of the trends to discriminate among those types of trending topics. Our method provides an efficient way to immediately and accurately categorize trending topics without the need for external data, enabling news organizations to track and discover breaking news in real time, or to quickly identify viral memes that might enrich marketing decisions, among other uses. The analysis of social features as observed in social trends also reveals social patterns associated with each type of trend, such as tweets related to ongoing events being shorter, as many of these tweets were likely sent from mobile devices, or memes having more retweets originating from fewer users than other kinds of trends.
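Illustrative examples of language-independent, social-spread features of the kind described above are sketched below, computed from the earliest tweets of a trend; the tweet schema and the particular features are assumptions, not the paper's exact feature set.

```python
# Toy social-spread features over the earliest tweets of a trend.
# Each tweet is assumed to be a dict with 'text', 'user' and
# 'is_retweet' keys; the input list is assumed non-empty.
def social_spread_features(tweets):
    n = len(tweets)
    users = {t["user"] for t in tweets}
    retweets = sum(t["is_retweet"] for t in tweets)
    return {
        "avg_length": sum(len(t["text"]) for t in tweets) / n,
        # Ongoing events tend to have shorter tweets (mobile posting);
        # memes tend to have many retweets from fewer distinct users.
        "retweet_ratio": retweets / n,
        "user_diversity": len(users) / n,
    }

# Feeding such vectors to any standard classifier yields an early,
# language-independent trend categorizer in the spirit of the paper.
```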
Phase I clinical evaluation of [SP-4-3(R)]-[1,1-cyclobutanedicarboxylato(2-)](2-methyl-1,4-butanediamine-N,N′)platinum in patients with metastatic solid tumors
The development of clinically useful drugs is a priority of clinical cancer research. CI-973, [SP-4-3(R)]-[1,1-cyclobutanedicarboxylato(2-)](2-methyl-1,4-butanediamine-N,N′)platinum, has been shown in preclinical murine and human tumor models to have activity equivalent or superior to that of cisplatin and carboplatin and to exert activity against cisplatin-resistant cell lines. In addition, preclinical testing suggests a reduced toxicity profile for CI-973 as compared with currently available drugs, especially decreased nephrotoxicity, ototoxicity, and gastrointestinal toxicity. A total of 29 (28 evaluable) patients with solid tumors were treated with intravenous CI-973 given over 30 min every 4 weeks. No routine pre- or posttreatment hydration or antiemetic program was used. The CI-973 doses given were 75, 150, 170, 188, 230, and 290 mg/m2. The dose-limiting toxicity was granulocytopenia. Nausea and vomiting occurred in the majority of patients but was mild to moderate in severity. No renal or auditory toxicity was seen. The maximum tolerated dose (MTD) for patients who had a good performance status, had not received prior radiation therapy to bone marrow, and had not previously been exposed to platinum or stem-cell toxin was 290 mg/m2. For those who had received prior radiation therapy, had a performance status of 2 or worse, or had previously been exposed to platinum or stem-cell toxin, the MTD was 230 mg/m2. The recommended phase II starting doses for these groups of patients are 230 and 190 mg/m2, respectively. No clinical tumor response was seen in this phase I study.
Inheritance of coronary artery disease in men: an analysis of the role of the Y chromosome
BACKGROUND A sexual dimorphism exists in the incidence and prevalence of coronary artery disease--men are more commonly affected than are age-matched women. We explored the role of the Y chromosome in coronary artery disease in the context of this sexual inequity. METHODS We genotyped 11 markers of the male-specific region of the Y chromosome in 3233 biologically unrelated British men from three cohorts: the British Heart Foundation Family Heart Study (BHF-FHS), West of Scotland Coronary Prevention Study (WOSCOPS), and Cardiogenics Study. On the basis of this information, each Y chromosome was tracked back into one of 13 ancient lineages defined as haplogroups. We then examined associations between common Y chromosome haplogroups and the risk of coronary artery disease in cross-sectional BHF-FHS and prospective WOSCOPS. Finally, we undertook functional analysis of Y chromosome effects on monocyte and macrophage transcriptome in British men from the Cardiogenics Study. FINDINGS Of nine haplogroups identified, two (R1b1b2 and I) accounted for roughly 90% of the Y chromosome variants among British men. Carriers of haplogroup I had about a 50% higher age-adjusted risk of coronary artery disease than did men with other Y chromosome lineages in BHF-FHS (odds ratio 1·75, 95% CI 1·20-2·54, p=0·004), WOSCOPS (1·45, 1·08-1·95, p=0·012), and joint analysis of both populations (1·56, 1·24-1·97, p=0·0002). The association between haplogroup I and increased risk of coronary artery disease was independent of traditional cardiovascular and socioeconomic risk factors. Analysis of macrophage transcriptome in the Cardiogenics Study revealed that 19 molecular pathways showing strong differential expression between men with haplogroup I and other lineages of the Y chromosome were interconnected by common genes related to inflammation and immunity, and that some of them have a strong relevance to atherosclerosis. INTERPRETATION The human Y chromosome is associated with risk of coronary artery disease in men of European ancestry, possibly through interactions of immunity and inflammation. FUNDING British Heart Foundation; UK National Institute for Health Research; LEW Carty Charitable Fund; National Health and Medical Research Council of Australia; European Union 6th Framework Programme; Wellcome Trust.
A coherent projection approach for direct volume rendering
Direct volume rendering offers the opportunity to visualize all of a three-dimensional sample volume in one image. However, processing such images can be very expensive, and good-quality high-resolution images are far from interactive to produce. Projection approaches to direct volume rendering process the volume region by region, as opposed to ray-casting methods that process it ray by ray. Projection approaches have generated interest because they use coherence to provide greater speed than ray casting and generate the image in a layered, informative fashion. This paper discusses two topics: first, it introduces a projection approach for directly rendering rectilinear, parallel-projected sample volumes that takes advantage of coherence across cells and the identical shape of their projections. Second, it considers the repercussions of various methods of integration in depth and interpolation across the scan plane. Some of these methods take advantage of Gouraud-shading hardware, with advantages in speed but potential disadvantages in image quality.
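The depth-integration step can be illustrated by standard front-to-back compositing along a ray, sketched here for scalar (grayscale) samples; the paper's specific integration and interpolation variants are not reproduced.

```python
# Front-to-back compositing of (color, opacity) samples along a ray,
# e.g. produced per cell by a projection renderer; samples must be
# ordered front to back.
def composite_front_to_back(samples):
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # standard "over" operator
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                 # early ray termination
            break
    return color, alpha

# e.g. composite_front_to_back([(1.0, 0.3), (0.5, 0.6), (0.2, 0.9)])
# accumulates mostly the front samples, as the ray quickly saturates.
```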