Player Typology in Theory and Practice
Player satisfaction modeling depends in part upon quantitative or qualitative typologies of playing preferences, although such approaches require scrutiny. Examination of psychometric typologies reveals that type theories have—except in rare cases—proven inadequate and have made way for alternative trait theories. This suggests that any future player typology, if it is to be sufficiently robust, will need foundations in the form of a trait theory of playing preferences. This paper tracks the development of a sequence of player typologies developing from psychometric type theory roots towards an independently validated trait theory of play, albeit one yet to be fully developed. Statistical analysis of the results of one survey in this lineage is presented, along with a discussion of theoretical and practical ways in which the surveys and their implied typological instruments have evolved.
SNPs in DNA repair or oxidative stress genes and late subcutaneous fibrosis in patients following single shot partial breast irradiation
BACKGROUND The aim of this study was to evaluate the potential association between late radiotherapy toxicity and single nucleotide polymorphisms in genes related to the response to radiation injury, such as genes involved in DNA repair or enzymes involved in anti-oxidative activity. The paper aims to identify marker genes able to predict an increased risk of late toxicity by studying our group of patients who underwent single shot 3D-CRT partial breast irradiation (SSPBI) after breast conserving surgery (BCS). METHODS A total of 57 breast cancer patients who underwent SSPBI were genotyped for single nucleotide polymorphisms (SNPs) in XRCC1, XRCC3, GST and RAD51 by Pyrosequencing technology. Univariate analysis (ORs and 95% CIs) was performed to correlate SNPs with the risk of developing ≥ G2 fibrosis or fat necrosis. RESULTS A significantly higher risk of developing ≥ G2 fibrosis or fat necrosis was found in patients with the polymorphic variant GSTP1 (Ile105Val) (OR = 2.9; 95% CI, 0.88-10.14; p = 0.047). CONCLUSIONS The presence of some SNPs involved in DNA repair or the response to oxidative stress seems able to predict late toxicity. TRIAL REGISTRATION ClinicalTrials.gov: NCT01316328.
Efficacy and safety of oral magnesium supplementation in the treatment of depression in the elderly with type 2 diabetes: a randomized, equivalence trial.
To evaluate the efficacy and safety of oral magnesium supplementation, with magnesium chloride (MgCl2), in the treatment of newly diagnosed depression in the elderly with type 2 diabetes and hypomagnesemia. Twenty-three elderly patients with type 2 diabetes and hypomagnesemia were enrolled and randomly allocated to receive either 50 mL of MgCl2 5% solution, equivalent to 450 mg of elemental magnesium, or imipramine 50 mg daily for 12 weeks. Widowhood or divorce in the last six months, alcoholism, degenerative illnesses of the central nervous system, recent diagnosis of diabetes, previous or current treatment with antidepressants, chronic diarrhea, use of diuretics, and reduced renal function were exclusion criteria. Hypomagnesemia was defined by serum magnesium levels < 1.8 mg/dL and depression by a Yesavage and Brink score ≥ 11 points. The primary trial end point was the improvement of depression symptoms. At baseline, there were no differences by age (69 ± 5.9 and 66.4 ± 6.1 years, p = 0.39), duration of diabetes (11.8 ± 7.9 and 8.6 ± 5.7 years, p = 0.33), serum magnesium levels (1.3 ± 0.04 and 1.4 ± 0.04 mg/dL, p = 0.09), and Yesavage and Brink score (17.9 ± 3.9 and 16.1 ± 4.5 points, p = 0.34) in the groups with MgCl2 and imipramine, respectively. At the end of follow-up, there were no significant differences in the Yesavage and Brink score (11.4 ± 3.8 and 10.9 ± 4.3, p = 0.27) between the study groups, whereas serum magnesium levels were significantly higher in the group with MgCl2 (2.1 ± 0.08 mg/dL) than in the subjects with imipramine (1.5 ± 0.07 mg/dL), p < 0.0005. In conclusion, MgCl2 is as effective in the treatment of depressed elderly type 2 diabetics with hypomagnesemia as imipramine 50 mg daily.
Control of cable actuated devices using smooth backlash inverse
Cable conduit actuation provides a simple yet dexterous mode of power transmission for remote actuation. However, it is often not preferred because of the nonlinearities arising from friction and cable compliance, which lead to backlash-type behavior. Unlike most current research in backlash control, which generally assumes no knowledge of one of the intermediate states, the controller design in this case can be significantly simplified if output feedback of the system is available. This paper uses a simple feedforward control law for backlash compensation. A novel smooth backlash inverse is proposed which, unlike other designs, takes the physical limitations of the actuator into consideration and is thus more intuitive to use. Implementation of this inverse on physical systems can also improve system performance over the theoretical exact inverse, as well as over other existing smooth inverse designs. The improvement in performance is shown through experiments on a robot arm of the Laprotek surgical system as well as on an experimental setup using polymeric cables for actuation.
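As a rough illustration of the idea, the sketch below implements a generic smooth backlash inverse that blends the two branches of the classical (discontinuous) inverse with a tanh switching function and saturates the command to actuator limits. The slope, crossing, smoothing and limit parameters are hypothetical, and this is not the paper's exact design.

```python
import numpy as np

def smooth_backlash_inverse(u_d, u_d_rate, m=1.0, c_r=0.5, c_l=-0.5,
                            eps=0.1, u_min=-2.0, u_max=2.0):
    """Generic smooth backlash inverse (illustrative sketch).
    The classical inverse switches between the branches u_d/m + c_r
    (output increasing) and u_d/m + c_l (output decreasing); here the
    switch is smoothed with tanh and the command is clipped so the
    actuator's physical limits are respected."""
    s = 0.5 * (1.0 + np.tanh(u_d_rate / eps))   # ~1 moving forward, ~0 reversing
    u = s * (u_d / m + c_r) + (1.0 - s) * (u_d / m + c_l)
    return np.clip(u, u_min, u_max)             # saturate to actuator limits
```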
Methods of the Water-Energy-Food Nexus
This paper focuses on a collection of methods that can be used to analyze the water-energy-food (WEF) nexus. We classify these methods as qualitative or quantitative for interdisciplinary and transdisciplinary research approaches. The methods for interdisciplinary research approaches can be used to unify a collection of related variables, visualize the research problem, evaluate the issue, and simulate the system of interest. Qualitative methods are generally used to describe the nexus in the region of interest, and include primary research methods such as Questionnaire Surveys, as well as secondary research methods such as Ontology Engineering and Integrated Maps. Quantitative methods for examining the nexus include Physical Models and Benefit-Cost Analysis (BCA), among others.
Design and simulation of a 1 to 14 GHz broadband electromagnetic compatibility DRGH antenna
This work presents the design and simulation of a 1-14 GHz double ridged guide horn antenna (DRGH) with a coaxial input feed section. Because of the large frequency bands required by standards, this antenna is suitable for use in electromagnetic compatibility (EMC) testing. The horn-antenna model analyzed dates back to the early 1970s, when J. L. Kerr suggested the use of a feed horn launcher whose dimensions were found experimentally. Although this type of horn has become the preferred test antenna for EMC testing in the 1-18 GHz range and has been widely used for over four decades, no explanation of the effect of the launcher dimensions and the shape of the ridges in the flared section on the antenna parameters was found in the open literature. To investigate this in detail, the entire horn has been modeled, including the coaxial feed, using a time domain method. The simulations indicate that deficiencies in the radiation pattern start to appear at frequencies above 6 GHz, and the pattern starts to split at frequencies above 12 GHz. In spite of those problems, the designed antenna can work well up to 14 GHz.
Natural products in drug discovery.
Natural products have been the single most productive source of leads for the development of drugs. Over 100 new products are in clinical development, particularly as anti-cancer agents and anti-infectives. Application of molecular biological techniques is increasing the availability of novel compounds that can be conveniently produced in bacteria or yeasts, and combinatorial chemistry approaches are being based on natural product scaffolds to create screening libraries that closely resemble drug-like compounds. Various screening approaches are being developed to improve the ease with which natural products can be used in drug discovery campaigns, and data mining and virtual screening techniques are also being applied to databases of natural products. It is hoped that the more efficient and effective application of natural products will improve the drug discovery process.
Severe Fusobacteria infections (Lemierre syndrome) in two boys
Abscess formation is a rare cause of febrile illness in childhood but always has to be considered in such clinical presentations. Belonging to the resident flora of the oropharyngeal region, Fusobacteria are known to cause local infections; from here they may extend to other sites via the bloodstream or are aspirated into the lung (Lemierre disease). We report on two boys with Lemierre disease due to infection by Fusobacteria in monoculture causing two different clinical phenotypes. Case 1 presented with a large subphrenic abscess and pneumonic infiltration of the right middle lobe. Primary focus of infection was periodontal disease. Case 2 presented with a life-threatening septicaemia due to a retropharyngeal abscess and perforated otitis media followed by osteomyelitis of the atlas and thrombosis of the left sigmoid sinus and internal jugular vein. Conclusion: Fusobacteria should be considered in any abscess formation in children. A thorough examination of the oropharyngeal region as a possible site of primary manifestation is mandatory.
Deep Visual Domain Adaptation: A Survey
Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data. Compared to conventional methods, which learn shared feature subspaces or reuse important source instances with shallow representations, deep domain adaptation methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning. There have been comprehensive surveys of shallow domain adaptation, but few timely reviews of the emerging deep learning-based methods. In this paper, we provide a comprehensive survey of deep domain adaptation methods for computer vision applications with four major contributions. First, we present a taxonomy of different deep domain adaptation scenarios according to the properties of data that define how two domains are diverged. Second, we summarize deep domain adaptation approaches into several categories based on training loss, and briefly analyze and compare the state-of-the-art methods under these categories. Third, we overview the computer vision applications that go beyond image classification, such as face recognition, semantic segmentation and object detection. Fourth, some potential deficiencies of current methods and several future directions are highlighted.
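For concreteness, below is a minimal sketch of one discrepancy-based training loss of the kind such surveys categorize: a Gaussian-kernel maximum mean discrepancy (MMD) penalty between source and target features. The kernel bandwidth and loss weighting are hypothetical.

```python
import torch

def gaussian_mmd(source_feats, target_feats, sigma=1.0):
    """Toy maximum mean discrepancy (MMD) between two feature batches with a
    Gaussian kernel, one example of a discrepancy-based adaptation loss."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return (k(source_feats, source_feats).mean()
            + k(target_feats, target_feats).mean()
            - 2 * k(source_feats, target_feats).mean())

# Typical usage during training (the 0.1 weight is a hypothetical hyperparameter):
# loss = task_loss_on_labeled_source + 0.1 * gaussian_mmd(f_src, f_tgt)
```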
Scheduling and Priority Mapping for Static Real-Time Middleware
This paper presents a middleware real-time scheduling technique for static, distributed, real-time applications. The technique uses global deadline monotonic priority assignment to clients and the Distributed Priority Ceiling protocol to provide concurrency control and priorities for server execution. The paper presents a new algorithm for mapping the potentially large number of unique global priorities required by this scheduling technique to the restricted set of priorities provided by commercial real-time operating systems. This algorithm is called Lowest Overlap First Priority Mapping; we prove that it is optimal among direct priority mapping algorithms. This paper also presents the implementation of these real-time middleware scheduling techniques in a Scheduling Service that meets the interface proposed for such a service in the Real-Time CORBA 1.0 standard. Our prototype Scheduling Service is integrated with the commercial PERTS tool that provides schedulability analysis and automated generation of global and local priorities for clients and servers.
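To show the flavor of the priority-mapping problem, here is a naive, order-preserving proportional mapping from many unique global priorities to a limited RTOS priority range. This is explicitly not the paper's Lowest Overlap First algorithm, only an illustration of how several global priorities must collapse onto one native priority level.

```python
def map_global_to_os_priorities(global_priorities, os_min, os_max):
    """Naive illustration of compressing many unique global priorities into
    the limited priority range of an RTOS (NOT Lowest Overlap First)."""
    ordered = sorted(set(global_priorities))        # unique priorities, ascending
    levels = os_max - os_min + 1
    mapping = {}
    for rank, gp in enumerate(ordered):
        # several global priorities may share one OS priority level
        mapping[gp] = os_min + (rank * levels) // len(ordered)
    return mapping

# Example: 200 unique global priorities squeezed into OS priorities 1..32.
table = map_global_to_os_priorities(range(200), os_min=1, os_max=32)
```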
Technology for quality management in the training of future engineer-pedagogues in the master's programme of a technical university
Changes in the development of Ukrainian society, education, and science have led to a significant growth of new information and have accelerated integration and communication processes. These tendencies are especially characteristic of education, in which fundamental changes in methodology, content, methodological support, and so on have been taking place over recent decades. Despite their significance, these efforts are still concentrated on the problems of quality management in the training
MUSCLE: authenticated external data retrieval from multiple sources for smart contracts
Smart contracts are applications that are deployed and executed on a blockchain's decentralised infrastructure. Many smart contract applications rely on data that resides outside the blockchain. However, while traditional web applications can communicate with trustworthy data sources directly through the Internet, this is not possible for smart contracts because their execution must be deterministic. Bringing external data into the blockchain has been a topic of research since the first introduction of Ethereum. A system that can provide this data to smart contracts is called an oracle. The primary requirement in designing oracles is that the authenticity of the data must be publicly verifiable, which can be achieved through signatures. However, transmitting data to the blockchain and performing the verification is costly, especially if applications require data from multiple sources. In that case, current approaches would need to retrieve the data from each source separately. In this paper, we present the concept of MUlti-Source oraCLE (MUSCLE) for retrieving data from multiple sources, which we believe to be the first to focus on the multi-source scenario. We implement five variants of MUSCLE, each using a different signature or aggregate signature scheme and compare their performance with two oracles that are based on TLS-N, which represents the current state of the art. Our results show that the ECDSA-based MUSCLE features the lowest total gas expenditure, while the BGLS-based oracle provides lower transaction and storage costs.
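As a toy, off-chain illustration of the per-source verification burden that motivates aggregation (not MUSCLE's on-chain implementation), the sketch below has several hypothetical data sources sign their readings with ECDSA, after which a consumer verifies each signature individually using the `ecdsa` Python package; the source names and payloads are placeholders.

```python
from ecdsa import SigningKey, NIST256p, BadSignatureError

# Hypothetical data sources, each with its own ECDSA key pair.
sources = {name: SigningKey.generate(curve=NIST256p)
           for name in ("feedA", "feedB", "feedC")}

# Each source signs its own data record.
signed = {}
for name, sk in sources.items():
    record = f"{name}:price=42".encode()
    signed[name] = (record, sk.sign(record))

# A consumer (standing in for the oracle contract) verifies every source
# separately -- the per-source cost that aggregate signatures aim to reduce.
for name, (record, signature) in signed.items():
    vk = sources[name].get_verifying_key()
    try:
        vk.verify(signature, record)
        print(name, "verified")
    except BadSignatureError:
        print(name, "rejected")
```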
Reliability and clinical significance of mobility and balance assessments in multiple sclerosis.
The aim of the study was to establish the test-retest reliability, clinical significance and precision of four mobility and balance measures - the Timed 25-Foot Walk, Six-minute Walk, Timed Up and Go and the Berg Balance Scale - in individuals moderately affected by multiple sclerosis. Twenty-four participants with multiple sclerosis (Expanded Disability Status Scale score 5-6.5) were assessed on four measures of mobility and balance. The Timed 25-Foot Walk, Six-minute Walk and Timed Up and Go mobility outcome measures and the Berg Balance Scale were assessed by one assessor one week apart. Intraclass correlation coefficient (ICC) analysis was carried out to determine reliability. Minimal detectable change values were calculated to determine clinical significance; the standard error of each measurement was calculated to assess precision. All four outcome measures were found to be reliable: Timed 25-Foot Walk ICC=0.94, Six-minute Walk Test ICC=0.96, Timed Up and Go ICC=0.97 and Berg Balance Scale ICC=0.96. Minimal detectable change values were as follows: Timed 25-Foot Walk=12.6 s, Six-minute Walk Test=76.2 m, Timed Up and Go=10.6 s and Berg Balance Scale=7 points. Standard errors of measurement were as follows: Timed 25-Foot Walk=4.56 s, Six-minute Walk Test=27.48 m, Timed Up and Go=3.81 s and Berg Balance Scale=3 points. The test-retest reliability of these four outcome measures was found to be good. The calculated clinical significance and precision of these measures highlight the problems of assessing a heterogeneous clinical population.
Application identification via network traffic classification
Recent developments in Internet technology have led to an increased importance of network traffic classification. In this study, we used machine-learning methods to identify applications through network traffic classification. Contrary to existing studies, which classify applications into categories like FTP, Instant Messaging, etc., we tried to identify popular end-user applications such as Facebook, Twitter, Skype and many more individually. We are motivated by the fact that individual identification of applications is of high importance for network security, QoS enforcement, and trend analysis. For our tests, we used the UNB ISCX Network Traffic dataset and our internal dataset, consisting of 14 and 13 well-known applications respectively. In our experiments, we evaluated four classification algorithms, namely J48, Random Forest, k-NN, and Bayes Net. With the complete set of 111 features, k-NN gave the best result for the ISCX dataset with an accuracy of 93.94% using k = 1, and Random Forest gave the best result for the internal dataset with an accuracy of 90.87%. During the course of this study, the initial set of features was successfully reduced to two sets of 12 features specific to each dataset without compromising accuracy. Moreover, we observed a 2% increase in the success rate for the internal dataset. We believe that individual application identification by applying machine-learning methods is a viable solution, and we are currently investigating a two-tier approach to make it more resilient to in-category confusion.
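A minimal sketch of this kind of flow-feature classification with scikit-learn is given below; the feature matrix, labels, split and hyperparameters are placeholders rather than the study's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: rows are flows, columns are flow-level features
# (packet sizes, inter-arrival times, etc.); labels are application names.
X = np.random.rand(1000, 111)
y = np.random.choice(["Facebook", "Twitter", "Skype"], size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

for name, clf in [("k-NN (k=1)", KNeighborsClassifier(n_neighbors=1)),
                  ("Random Forest", RandomForestClassifier(n_estimators=100,
                                                           random_state=0))]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```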
Smart Homecare System for Health Tele-monitoring
An increasingly aged population worldwide puts our medical capabilities to the test. Research and commercial groups are investigating novel ways to care for the aged and chronically ill, both in their own homes and in care facilities. This paper describes a prototype we have developed for remote healthcare monitoring. This personalized smart homecare system uses smart phones, wireless sensors, Web servers and IP Webcams. To illustrate the functionality of the prototype we describe a series of typical tele-health monitoring scenarios.
Skull stripping of MRI brain images using mathematical morphology
Skull stripping is a major phase in MRI brain imaging applications and refers to the removal of non-cerebral tissues. The main problem in skull stripping is the segmentation of the non-cerebral and intracranial tissues, owing to their homogeneous intensities. As morphology requires prior binarization of the image, this paper proposes mathematical morphology segmentation using double and Otsu's thresholding. The purpose is to identify robust threshold values to remove the non-cerebral tissue from MRI brain images. Ninety collected samples of T1-weighted, T2-weighted and FLAIR MRI brain images are used in the experiments. The results showed promising use of double thresholding as a robust threshold value in handling intensity inhomogeneities compared to Otsu's thresholding.
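The sketch below shows a generic Otsu-threshold-plus-morphology skull-stripping pipeline with scikit-image. The structuring-element radius and the single-threshold choice are assumptions for illustration; the pipeline does not reproduce the paper's double-thresholding scheme.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_erosion, binary_dilation, disk
from skimage.measure import label

def strip_skull(slice_2d, selem_radius=5):
    """Rough morphological skull stripping for one MRI slice (illustrative)."""
    binary = slice_2d > threshold_otsu(slice_2d)          # binarize the slice
    eroded = binary_erosion(binary, disk(selem_radius))   # break brain/skull bridges
    labels = label(eroded)
    if labels.max() == 0:
        return np.zeros_like(slice_2d)
    largest = labels == (np.argmax(np.bincount(labels.ravel())[1:]) + 1)  # biggest blob
    mask = binary_dilation(largest, disk(selem_radius))   # restore brain extent
    return slice_2d * mask                                # skull-stripped slice
```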
Beneficial cardiometabolic actions of telmisartan plus amlodipine therapy in elderly patients with poorly controlled hypertension.
BACKGROUND There is a growing body of evidence that blood pressure (BP) level is one of the major determinants of cardiovascular morbidity and mortality in individuals, including elderly people. However, to achieve a target BP level in the elderly is more difficult compared with patients aged <65 years. Current guidelines recommend combination drug therapy with different modes of action for the treatment of elderly patients with moderate hypertension (HT). However, the optimal combination regimen is not well established in elderly HT. HYPOTHESIS We hypothesized that combination therapy of telmisartan plus amlodipine would exert favorable cardiometabolic actions in elderly HT. METHODS Seventeen elderly patients with essential HT who failed to achieve a target home BP level with treatment of 5 mg amlodipine plus 80 mg valsartan or 8 mg candesartan for at least 2 months were enrolled. Then the patients were assigned to replace their valsartan or candesartan with 40 mg telmisartan. The subjects were instructed to measure their own BP at home every day during the study periods. RESULTS Replacement of valsartan or candesartan by telmisartan in amlodipine-treated elderly hypertensive patients showed a significant reduction in morning home systolic BP and evening home systolic and diastolic BP at 12 weeks. Switching to telmisartan significantly increased serum adiponectin level. CONCLUSIONS Our present study suggests that combination therapy with telmisartan plus amlodipine may exert more beneficial cardiometabolic effects in elderly patients with HT compared with valsartan or candesartan plus amlodipine treatment.
Fundamental Tradeoff in Knowledge Representation and Reasoning (Revised Version)
A fundamental computational limit on automated reasoning and its effect on Knowledge Representation is examined. Basically, the problem is that it can be more difficult to reason correctly with one representational language than with another and, moreover, that this difficulty increases dramatically as the expressive power of the language increases. This leads to a tradeoff between the expressiveness of a representational language and its computational tractability. Here we show that this tradeoff can be seen to underlie the differences among a number of existing representational formalisms, in addition to motivating many of the current research issues in Knowledge Representation.
MobiFace: A Lightweight Deep Learning Face Recognition on Mobile Devices
Deep neural networks have been widely used in numerous computer vision applications, particularly in face recognition. However, deploying deep neural network face recognition on mobile devices is still limited, since most high-accuracy deep models are both time- and GPU-consuming at the inference stage. Therefore, developing a lightweight deep neural network is one of the most promising ways to deploy face recognition on mobile devices. Such a lightweight deep neural network requires a memory-efficient representation with a small number of weights and low-cost operators. In this paper, a novel deep neural network named MobiFace, which is simple but effective, is proposed for productively deploying face recognition on mobile devices. The experimental results show that our lightweight MobiFace is able to achieve high performance, with 99.7% on the LFW database and 91.3% on the large-scale, challenging MegaFace database. It is also competitive with large-scale deep-network face recognition models while significantly reducing computational time and memory consumption.
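As background, lightweight face networks of this kind typically lean on depthwise-separable convolutions. The PyTorch block below is a generic example of such a building block, not MobiFace's published architecture, and the channel sizes are arbitrary.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableBlock(nn.Module):
    """Generic lightweight block: 3x3 depthwise convolution followed by a
    1x1 pointwise convolution (illustrative, not MobiFace's exact design)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                                   groups=in_ch, bias=False)      # per-channel spatial filter
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)  # channel mixing
        self.bn1, self.bn2 = nn.BatchNorm2d(in_ch), nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))

block = DepthwiseSeparableBlock(64, 128, stride=2)
features = block(torch.randn(1, 64, 56, 56))   # -> shape (1, 128, 28, 28)
```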
The G277S transferrin mutation does not affect iron absorption in iron deficient women.
BACKGROUND Iron deficiency anaemia is one of the most important nutritional diseases, with high prevalence worldwide. The G277S transferrin mutation has been implicated as a risk factor for iron deficiency in menstruating women. However, the subject is controversial and there are no data concerning the possible influence of this polymorphism on iron absorption. AIM OF THE STUDY To undertake a pilot study to investigate the effect of carrying the G277S transferrin mutation on non-haem iron absorption from a meal in young menstruating women compared to wild-type controls. METHODS Menstruating women with low iron stores (serum ferritin < 30 microg/l) or who had suffered from iron deficiency anaemia or had a family history of anaemia were recruited (n = 162). Haematological parameters were analysed, including haemoglobin, ferritin, total-iron binding capacity and transferrin saturation. Non-haem iron absorption from a meal was measured in 25 non-anaemic women either with the G277S/G277G (n = 10) or the wild type G277G/G277G (n = 15) genotype. The incorporation of stable isotopes of iron into erythrocytes was used to measure absorption. RESULTS AND CONCLUSIONS There were no significant differences in iron status indices or non-haem iron absorption between genotypes. However, G277S carriers did not show the usual inverse association between iron stores and non-haem iron absorption. Further studies should focus on the effects of a combination of polymorphisms in iron metabolism genes on iron absorption.
Advanced PROPHET Routing in Delay Tolerant Network
To solve the routing jitter problem of PROPHET in delay tolerant networks, an advanced PROPHET routing protocol is proposed in this paper. Average delivery predictabilities are used in advanced PROPHET to avoid routing jitter. Furthermore, we evaluate it through simulations against the PROPHET routing protocol. The experimental results show that advanced PROPHET achieves higher average delivery rates and shorter average delays. Thus, it is fair to say that advanced PROPHET gives better performance than PROPHET.
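For reference, the sketch below implements the standard PROPHET delivery-predictability rules (direct update, aging, and transitivity, per Lindgren et al.) plus a simple moving average as one plausible way to smooth jitter; the averaging window is an assumption and not necessarily the paper's exact scheme.

```python
P_INIT, BETA, GAMMA = 0.75, 0.25, 0.98   # standard PROPHET constants

def update_on_encounter(p_ab):
    """Direct update when nodes a and b meet."""
    return p_ab + (1.0 - p_ab) * P_INIT

def age(p_ab, elapsed_units):
    """Aging: predictability decays while nodes do not meet."""
    return p_ab * (GAMMA ** elapsed_units)

def transitive(p_ac, p_ab, p_bc):
    """Transitivity: node a can reach c through b."""
    return max(p_ac, p_ab * p_bc * BETA)

def averaged(history, window=5):
    """Moving average of recent predictabilities -- one simple way to damp
    routing jitter (illustrative; the paper's averaging may differ)."""
    recent = history[-window:]
    return sum(recent) / len(recent)
```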
Assessing the Effectiveness of Neurofeedback Training in the Context of Clinical and Social Neuroscience
Social neuroscience benefits from the experimental manipulation of neuronal activity. One possible manipulation, neurofeedback, is an operant conditioning-based technique in which individuals sense, interact with, and manage their own physiological and mental states. Neurofeedback has been applied to a wide variety of psychiatric illnesses, as well as to treat sub-clinical symptoms, and even to enhance performance in healthy populations. Despite growing interest, there persists a level of distrust and/or bias in the medical and research communities in the USA toward neurofeedback and other functional interventions. As a result, neurofeedback has been largely ignored, or disregarded within social neuroscience. We propose a systematic, empirically-based approach for assessing the effectiveness, and utility of neurofeedback. To that end, we use the term perturbative physiologic plasticity to suggest that biological systems function as an integrated whole that can be perturbed and guided, either directly or indirectly, into different physiological states. When the intention is to normalize the system, e.g., via neurofeedback, we describe it as self-directed neuroplasticity, whose outcome is persistent functional, structural, and behavioral changes. We argue that changes in physiological, neuropsychological, behavioral, interpersonal, and societal functioning following neurofeedback can serve as objective indices and as the metrics necessary for assessing levels of efficacy. In this chapter, we examine the effects of neurofeedback on functional connectivity in a few clinical disorders as case studies for this approach. We believe this broader perspective will open new avenues of investigation, especially within social neuroscience, to further elucidate the mechanisms and effectiveness of these types of interventions, and their relevance to basic research.
A randomized phase III trial of adjuvant chemotherapy with irinotecan, leucovorin and fluorouracil versus leucovorin and fluorouracil for stage II and III colon cancer: A Hellenic Cooperative Oncology Group study
BACKGROUND Colon cancer is a public health problem worldwide. Adjuvant chemotherapy after surgical resection for stage III colon cancer has been shown to improve both progression-free and overall survival, and is currently recommended as standard therapy. However, its value for patients with stage II disease remains controversial. When this study was designed 5-fluorouracil (5FU) plus leucovorin (LV) was standard adjuvant treatment for colon cancer. Irinotecan (CPT-11) is a topoisomerase I inhibitor with activity in metastatic disease. In this multicenter adjuvant phase III trial, we evaluated the addition of irinotecan to weekly 5FU plus LV in patients with stage II or III colon cancer. METHODS The study included 873 eligible patients. The treatment consisted of weekly administration of irinotecan 80 mg/m2 intravenously (i.v.), LV 200 mg/m2 and 5FU 450 mg/m2 bolus (Arm A) versus LV 200 mg/m2 and 5FU 500 mg/m2 i.v. bolus (Arm B). In Arm A, treatments were administered weekly for four consecutive weeks, followed by a two-week rest, for a total of six cycles, while in Arm B treatments were administered weekly for six consecutive weeks, followed by a two-week rest, for a total of four cycles. The primary end-point was disease-free survival (DFS) at three years. RESULTS The probability of overall survival (OS) at three years was 0.88 for patients in Arm A and 0.86 for those in Arm B, while the five-year OS probability was 0.78 and 0.76 for patients in Arm A and Arm B, respectively (P = 0.436). Furthermore, the probability of DFS at three years was 0.78 and 0.76 for patients in Arm A and Arm B, respectively (P = 0.334). With the exception of leucopenia and neutropenia, which were higher in patients in Arm A, there were no significant differences in Grades 3 and 4 toxicities between the two regimens. The most frequently recorded Grade 3/4 toxicity was diarrhea in both treatment arms. CONCLUSIONS Irinotecan added to weekly bolus 5FU plus LV did not result in improvement in disease-free or overall survival in stage II or III colon cancer, but did increase toxicity. TRIAL REGISTRATION Australian New Zealand Clinical Trials Registry: ACTRN12610000148077.
Decision support in intermodal transport: A new research agenda
This paper proposes new research themes concerning decision support in intermodal transport. Decision support models have been constructed for private stakeholders (e.g. network operators, drayage operators, terminal operators or intermodal operators) as well as for public actors such as policy makers and port authorities. Intermodal research topics include policy support, terminal network design, intermodal service network design, intermodal routing, drayage operations and ICT innovations. For each research topic, the current state of the art and gaps in existing models are identified. Current trends in intermodal decision support models include the introduction of environmental concerns, the development of dynamic models and the growth in innovative applications of Operations Research techniques. Limited data availability and problem size (network scale) and related computational considerations are issues which increase the complexity of decision support in intermodal transport.
Consumer Demand for Cynical and Negative News Frames
Commentators regularly lament the proliferation of both negative and/or strategic (“horserace”) coverage in political news content. The most frequent account for this trend focuses on news norms and/or the priorities of news journalists. Here, we build on recent work arguing for the importance of demand-side, rather than supply-side, explanations of news content. In short, news may be negative and/or strategy-focused because that is the kind of news that people are interested in. We use a lab experiment to capture participants’ news selection biases, alongside a survey capturing their stated news preferences. Politically-interested participants are more likely to select negative stories. Interest is associated with a greater preference for strategic frames as well. And results suggest that behavioral results do not conform to attitudinal ones. That is, regardless of what participants say, they exhibit a preference for negative news content. Literature on political communication often finds itself concerned with two related themes in media content: (1) negative news frames that generally cast politicians and politics in an unfavourable light, and (2) cynical strategy coverage that focuses on the “horserace” and conflictual aspects of politics. The two themes may be related, insofar as strategic coverage implies that politicians are motivated only by power, not the common good (e.g. Capella and Jamieson 1997). Regardless of their relation, the body of work on these frames makes two assumptions: first, that they are bad for society; and second, that their root cause lies in the actions of journalists. We rely here on Capella and Jamieson’s (1997) definition of strategy coverage: “(1) winning and losing as the central concern; (2) the language of wars, games, and competition; (3) a story with performers, critics and audience (voters); (4) centrality of performance, style, and perception of the candidate; (5) heavy weighting of polls and the candidates” (31). In this way it includes both the “game” schema and “horserace” coverage, which often become muddled in the literature. We seek here to question the second assumption through a simple supposition: that the content of any given media environment, both on the personal and systemic level, is determined by some interplay between what media sources supply and what consumers demand. Instead of looking at particular processes and norms inherent in the news-making process which may generate these themes (Sabato 1991; Patterson 1994; Lichter and Noyes 1995; Farnsworth and Lichter 2007), we instead focus on the additional role that demand plays in their provision. Put simply, we argue that the proliferation of negative and/or strategic content is at least in part a function of individuals’ (quite possibly subconscious) preferences. This is to our knowledge the first exploration of news selection biases outside the US context, and/or outside the context of an election campaign. It is in part an extension of existing work focused on consumer interest in horserace stories (e.g., Iyengar et al. 2004), or in negative content (e.g., Meffert et al. 2006), although it is the first to simultaneously consider both. It does so using a new lab-experimental approach that we believe has some advantages where both internal and external validity are concerned.
It also provides a rare opportunity to compare actual news selection behavior with answers to survey questions about participants’ preferences in media content. We find, in sum, that individuals tend to select negative and strategic news frames, even when other options are available, and, moreover, even when their own stated preferences are for news that is less negative and/or strategic. Results thus support past work suggesting that participants are more likely to select negative stories rather than positive ones, though we find that this is particularly true for strategic stories. We also find evidence, in line with past work, that participants expressing high levels of political interest show a greater attraction to strategic stories. (This is true for citizens versus non-citizens as well.) Our own interpretation of these results draws on work in psychology, biology, economics, and political science on the “negativity bias.” But even a thin reading of our findings emphasizes a too-often overlooked aspect of news content: it is the way it is not just because of the nature of the supply of news, but also the demand. The Cynical Media and their Audience That the media are negative and cynical about politics and politicians is widely agreed upon in the literature. (For a recent review see Soroka 2012.) Most scholars see this trend as a product, or perhaps a mutation, of the media’s role as the watchdog “Fourth Estate.” Patterson (1994: 79) argues that journalists’ understanding of what this role entails has evolved in a way that has caused them to shift from “silent skeptics” to “vocal cynics.” Indeed, the great deal of literature surrounding [...] For a useful distinction of demand- versus supply-side accounts of media content, see Andrew (2007).
Language Shapes Thought: Rethinking on Linguistic Relativity
We reviewed the research on linguistic relativity in the color, space and time domains, and rethink the relationship between language and perception to support the idea that language interacts with other cognitive processes and shapes human perception and thought. Language, perception and action are not separate systems, but are closely interconnected and, to some extent, highly interactive. Language also plays a constructive role in object affordances, which not only require low-level processes of action and motor control, but also depend on language.
Glioblastoma Cancer Stem Cells Evade Innate Immune Suppression of Self-Renewal through Reduced TLR4 Expression.
Tumors contain hostile inflammatory signals generated by aberrant proliferation, necrosis, and hypoxia. These signals are sensed and acted upon acutely by the Toll-like receptors (TLRs) to halt proliferation and activate an immune response. Despite the presence of TLR ligands within the microenvironment, tumors progress, and the mechanisms that permit this growth remain largely unknown. We report that self-renewing cancer stem cells (CSCs) in glioblastoma have low TLR4 expression that allows them to survive by disregarding inflammatory signals. Non-CSCs express high levels of TLR4 and respond to ligands. TLR4 signaling suppresses CSC properties by reducing retinoblastoma binding protein 5 (RBBP5), which is elevated in CSCs. RBBP5 activates core stem cell transcription factors, is necessary and sufficient for self-renewal, and is suppressed by TLR4 overexpression in CSCs. Our findings provide a mechanism through which CSCs persist in hostile environments because of an inability to respond to inflammatory signals.
THE FARCICAL ELEMENTS IN SELECTED COMEDIES OF MOLIERE
Farce belongs in a general sense to the larger field of comedy and should not be confused with satire (as a genre) or high comedy, which often involves true character study. During the presentation of a farce the audience does not identify itself with a particular person and feel pity for him. A farce is intended to be funny and may often be "stuffed" with patois or artificial jargons and base
A Generative Model of People in Clothing
We present the first image-based generative model of people in clothing for the full body. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generating process into two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest an entirely data-driven approach to people generation is possible.
Personal-professional connections in palliative care occupational therapy.
This qualitative study examined the experiences of occupational therapists working in palliative care. Multiple qualitative interviews were conducted with each of eight occupational therapists working with people who are terminally ill. The interviews were transcribed verbatim and analyzed for recurring and unique themes. Five themes emerged: satisfaction, hardship, coping, spirituality, and growth. Common themes, while resonating through all participants' stories, were experienced in a uniquely personal way by each participant. The result was the discovery of an individualized "personal-professional connection" for each participant. The exploration of personal-professional connections can contribute to the understanding of occupational therapy practice in palliative care. Furthermore, these individual stories may resonate for other occupational therapists and inspire personal and professional reflection; validation of feelings and issues can arise from parallel comparisons. Therapists may in turn gain insight into the relationship between their own personal and professional experiences.
Does thyroid surgery for Graves’ disease improve health-related quality of life?
Graves’ disease can induce alterations of the psychosocial well-being that negatively influence the overall well-being of patients. Among the current treatments, surgery has limited indications, and its impact on the health-related quality of life has not been well clarified. The aim of this study was to assess the impact of surgery on the quality of life. Fifty-seven patients who underwent total thyroidectomy for Graves’ disease in our surgical unit between April 2002 and December 2009 were administered a questionnaire concerning four issues: organic alterations and clinical manifestations, neurovegetative system disturbances, impairment of daily activities, psychosocial problems. Patients were retrospectively questioned after thyroidectomy about the presence of these symptoms in both the pre and postoperative periods. There was a significant improvement after surgery in all four areas. Organic manifestations and psychosocial problems had higher average improvements, as did some aspects of the neurovegetative system and difficulties in undertaking daily activities. There were no reports of a worsening of symptoms. Surgery resolved the hyperthyroidism in 100 % of cases, and was associated with a quality of life improvement of about 70 % in the patients. Surgery can therefore provide an immediate and effective resolution of Graves’ disease, with benefits in health-related quality of life.
MINING FOR FORECASTING SALES
The growing popularity of online product review forums invites people to express opinions and sentiments toward products. These reviews convey knowledge about a product as well as people's sentiment toward it, and they are very important for forecasting the sales performance of a product. In this paper, we discuss online review mining techniques in the movie domain. Sentiment PLSA is used to find hidden sentiment factors in the reviews, and the ARSA model is used to predict sales performance. An Autoregressive Sentiment and Quality Aware model (ARSQA) is also considered, in order to incorporate quality into the prediction of sales performance. We propose a clustering- and classification-based algorithm for sentiment analysis.
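To make the autoregressive idea concrete, here is a toy regression in the spirit of ARSA (illustrative only, not the paper's exact model): sales at time t are predicted from the previous p sales values and q sentiment scores via least squares. The lag orders and data are placeholders.

```python
import numpy as np

def fit_arsa(sales, sentiment, p=2, q=2):
    """Toy ARSA-style fit: regress sales[t] on the p previous sales values
    and the q previous sentiment scores using ordinary least squares."""
    sales, sentiment = np.asarray(sales, float), np.asarray(sentiment, float)
    rows, targets = [], []
    for t in range(max(p, q), len(sales)):
        rows.append(np.concatenate([sales[t - p:t], sentiment[t - q:t]]))
        targets.append(sales[t])
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])  # add intercept
    coeffs, *_ = np.linalg.lstsq(X, np.array(targets), rcond=None)
    return coeffs   # intercept, autoregressive weights, sentiment weights
```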
Exposure to political conflict and violence and posttraumatic stress in Middle East youth: protective factors.
We examine the role of family- and individual-level protective factors in the relation between exposure to ethnic-political conflict and violence and posttraumatic stress among Israeli and Palestinian youth. Specifically, we examine whether parental mental health (lack of depression), positive parenting, children's self-esteem, and academic achievement moderate the relation between exposure to ethnic-political conflict/violence and subsequent posttraumatic stress (PTS) symptoms. We collected three waves of data from 901 Israeli and 600 Palestinian youths (three age cohorts: 8, 11, and 14 years old; approximately half of each gender) and their parents at 1-year intervals. Greater cumulative exposure to ethnic-political conflict/violence across the first 2 waves of the study predicted higher subsequent PTS symptoms even when we controlled for the child's initial level of PTS symptoms. This relation was significantly moderated by a youth's self-esteem and by the positive parenting received by the youth. In particular, the longitudinal relation between exposure to violence and subsequent PTS symptoms was significant for low self-esteem youth and for youth receiving little positive parenting but was non-significant for children with high levels of these protective resources. Our findings show that youth most vulnerable to PTS symptoms as a result of exposure to ethnic-political violence are those with lower levels of self-esteem and who experience low levels of positive parenting. Interventions for war-exposed youth should test whether boosting self-esteem and positive parenting might reduce subsequent levels of PTS symptoms.
Different Stimuli, Different Spatial Codes: A Visual Map and an Auditory Rate Code for Oculomotor Space in the Primate Superior Colliculus
Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior.
Large-Scale Mobile Traffic Analysis: A Survey
This article surveys the literature on analyses of mobile traffic collected by operators within their network infrastructure. This is a recently emerged research field and, apart from a few outliers, relevant works cover the period from 2005 to date, with a noticeable densification over the last three years. We provide a thorough review of the multidisciplinary activities that rely on mobile traffic datasets, identifying major categories and sub-categories in the literature so as to outline a hierarchical classification of research lines. When detailing the works pertaining to each class, we balance a comprehensive view of state-of-the-art results with specific attention to methodological aspects. Our approach provides a complete introductory guide to the research based on mobile traffic analysis. It allows summarizing the main findings of the current state of the art, as well as pinpointing important open research directions.
Diagnosis of Diabetic Retinopathy Using Machine Learning Techniques
Diabetic retinopathy (DR) is an eye disease caused by the complications of diabetes, and it should be detected early for effective treatment. As diabetes progresses, the vision of a patient may start to deteriorate and lead to diabetic retinopathy. Two groups are identified, namely non-proliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR). In this paper, to diagnose diabetic retinopathy, three models, a Probabilistic Neural Network (PNN), a Bayesian classifier and a Support Vector Machine (SVM), are described and their performances are compared. The amount of the disease spread in the retina can be identified by extracting the features of the retina. Features such as blood vessels and haemorrhages of NPDR images and exudates of PDR images are extracted from the raw images using image processing techniques and fed to the classifier for classification. A total of 350 fundus images were used, out of which 100 were used for training and 250 for testing. Experimental results show that PNN has an accuracy of 89.6%, the Bayes classifier has an accuracy of 94.4% and SVM has an accuracy of 97.6%. This indicates that the SVM model outperforms all other models. Our system was also run on 130 images available from "DIARETDB0: Evaluation Database and Methodology for Diabetic Retinopathy", and the results show that PNN has an accuracy of 87.69%, the Bayes classifier has an accuracy of 90.76% and SVM has an accuracy of 95.38%.
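A minimal scikit-learn sketch of the final classification stage is shown below; the feature vectors stand in for the extracted blood-vessel, haemorrhage and exudate measurements, and the labels, split and kernel settings are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# Placeholder features per image (e.g., vessel area, haemorrhage count,
# exudate area) and labels: 0 = normal, 1 = NPDR, 2 = PDR.
X_train, y_train = np.random.rand(100, 6), np.random.randint(0, 3, 100)
X_test, y_test = np.random.rand(250, 6), np.random.randint(0, 3, 250)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```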
Image matching using local symmetry features
We present a new technique for extracting local features from images of architectural scenes, based on detecting and representing local symmetries. These new features are motivated by the fact that local symmetries, at different scales, are a fundamental characteristic of many urban images, and are potentially more invariant to large appearance changes than lower-level features such as SIFT. Hence, we apply these features to the problem of matching challenging pairs of photos of urban scenes. Our features are based on simple measures of local bilateral and rotational symmetries computed using local image operations. These measures are used both for feature detection and for computing descriptors. We demonstrate our method on a challenging new dataset containing image pairs exhibiting a range of dramatic variations in lighting, age, and rendering style, and show that our features can improve matching performance for this difficult task.
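As a toy illustration of measuring local bilateral symmetry directly from pixel values (not the paper's actual detector or descriptor), the sketch below scores a grayscale patch by normalized correlation with its horizontal mirror.

```python
import numpy as np

def bilateral_symmetry_score(patch):
    """Toy local bilateral-symmetry measure: normalized correlation between a
    patch and its horizontal mirror image (1.0 = perfectly mirror-symmetric)."""
    mirrored = patch[:, ::-1]
    a = patch - patch.mean()
    b = mirrored - mirrored.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

score = bilateral_symmetry_score(np.random.rand(32, 32))  # placeholder patch
```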
Can Collective Sentiment Expressed on Twitter Predict Political Elections?
Research examining the predictive power of social media (especially Twitter) displays conflicting results, particularly in the domain of political elections. This paper applies methods used in studies that have shown a direct correlation between volume/sentiment of Twitter chatter and future electoral results to a new dataset about political elections. We show that these methods display a series of shortcomings that make them inadequate for determining whether social media messages can predict the outcome of elections.
SSH: Single Stage Headless Face Detector
We introduce the Single Stage Headless (SSH) face detector. Unlike two stage proposal-classification detectors, SSH detects faces in a single stage directly from the early convolutional layers in a classification network. SSH is headless. That is, it is able to achieve state-of-the-art results while removing the “head” of its underlying classification network – i.e. all fully connected layers in the VGG-16, which contain a large number of parameters. Additionally, instead of relying on an image pyramid to detect faces with various scales, SSH is scale-invariant by design. We simultaneously detect faces with different scales in a single forward pass of the network, but from different layers. These properties make SSH fast and light-weight. Surprisingly, with a headless VGG-16, SSH beats the ResNet-101-based state-of-the-art on the WIDER dataset, even though, unlike the current state-of-the-art, it does not use an image pyramid, and it is 5X faster. Moreover, if an image pyramid is deployed, our light-weight network achieves state-of-the-art on all subsets of the WIDER dataset, improving the AP by 2.5%. SSH also reaches state-of-the-art results on the FDDB and Pascal-Faces datasets while using a small input size, leading to a speed of 50 frames/second on a GPU.
Semi-supervised Learning with Induced Word Senses for State of the Art Word Sense Disambiguation
Word Sense Disambiguation (WSD) aims to determine the meaning of a word in context, and successful approaches are known to benefit many applications in Natural Language Processing. Although supervised learning has been shown to provide superior WSD performance, current sense-annotated corpora do not contain a sufficient number of instances per word type to train supervised systems for all words. While unsupervised techniques have been proposed to overcome this data sparsity problem, such techniques have not outperformed supervised methods. In this paper, we propose a new approach to building semi-supervised WSD systems that combines a small amount of sense-annotated data with information from Word Sense Induction, a fully-unsupervised technique that automatically learns the different senses of a word based on how it is used. In three experiments, we show how sense induction models may be effectively combined to ultimately produce high-performance semi-supervised WSD systems that exceed the performance of state-of-the-art supervised WSD techniques trained on the same sense-annotated data. We anticipate that our results and released software will also benefit evaluation practices for sense induction systems and those working in low-resource languages by demonstrating how to quickly produce accurate WSD systems with minimal annotation effort.
Grape leaf disease detection from color imagery using hybrid intelligent system
Vegetables and fruits are the most important export agricultural products of Thailand. In order to obtain more value-added products, product quality control is essential. Many studies show that the quality of agricultural products may be reduced by many causes. One of the most important factors affecting quality is plant disease. Consequently, minimizing plant diseases allows the quality of the products to be substantially improved. This work presents automatic plant disease diagnosis using multiple artificial intelligence techniques. The system can diagnose plant leaf disease without requiring any expertise once the system is trained. This work focuses mainly on grape leaf diseases. The proposed system consists of three main parts: (i) grape leaf color segmentation, (ii) grape leaf disease segmentation, and (iii) analysis and classification of diseases. The grape leaf color segmentation is a pre-processing module which segments out any irrelevant background information. A self-organizing feature map together with a back-propagation neural network is deployed to recognize the colors of the grape leaf. This information is used to segment the grape leaf pixels within the image. Then the grape leaf disease segmentation is performed using a modified self-organizing feature map, with genetic algorithms for optimization and support vector machines for classification. Finally, the resulting segmented image is filtered by a Gabor wavelet, which allows the system to analyze leaf disease color features more efficiently. The support vector machines are then applied again to classify the types of grape leaf disease. The system can categorize the image of a grape leaf into three classes: scab disease, rust disease and no disease. The proposed system shows desirable results which can be further developed for any agricultural product analysis/inspection system.
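As a small illustration of the Gabor-filtering step that feeds the final classifier, the sketch below extracts simple Gabor texture statistics with scikit-image; the filter frequencies, orientations and any downstream SVM settings are assumptions rather than the paper's tuned values.

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(gray_patch, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    """Illustrative texture features: mean and variance of the Gabor response
    magnitude over a few frequencies and orientations (placeholder values)."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(gray_patch, frequency=f,
                               theta=k * np.pi / n_orientations)
            magnitude = np.hypot(real, imag)
            feats.extend([magnitude.mean(), magnitude.var()])
    return np.array(feats)   # feature vector for an SVM disease classifier

features = gabor_features(np.random.rand(64, 64))  # placeholder leaf patch
```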
Energy-Balance Control of PV Cascaded Multilevel Grid-Connected Inverters Under Level-Shifted and Phase-Shifted PWMs
This paper presents an energy-balance control strategy for a cascaded single-phase grid-connected H-bridge multilevel inverter linking n independent photovoltaic (PV) arrays to the grid. The control scheme is based on an energy-sampled data model of the PV system and enables the design of a voltage loop linear discrete controller for each array, ensuring the stability of the system for the whole range of PV array operating conditions. The control design is adapted to phase-shifted and level-shifted carrier pulsewidth modulations to share the control action among the cascade-connected bridges in order to concurrently synthesize a multilevel waveform and to keep each of the PV arrays at its maximum power operating point. Experimental results carried out on a seven-level inverter are included to validate the proposed approach.
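To illustrate the phase-shifted carrier modulation used to synthesize the multilevel waveform, here is a generic numerical sketch with arbitrary reference and carrier frequencies and an illustrative carrier phase shift; it is not the paper's controller or energy-balance scheme.

```python
import numpy as np

def triangular(t, f_carrier, phase):
    """Unit-amplitude triangular carrier."""
    x = (t * f_carrier + phase / (2 * np.pi)) % 1.0
    return 4 * np.abs(x - 0.5) - 1.0

n_bridges, f_ref, f_carrier = 3, 50.0, 1500.0   # placeholder frequencies
t = np.linspace(0, 0.02, 20000)                  # one 50 Hz fundamental cycle
reference = 0.9 * np.sin(2 * np.pi * f_ref * t)

bridge_outputs = []
for k in range(n_bridges):
    carrier = triangular(t, f_carrier, phase=k * np.pi / n_bridges)  # shifted carrier
    leg_a = np.where(reference > carrier, 1.0, 0.0)
    leg_b = np.where(-reference > carrier, 1.0, 0.0)
    bridge_outputs.append(leg_a - leg_b)          # each bridge contributes -1, 0, or +1

multilevel = np.sum(bridge_outputs, axis=0)       # 2n+1 = 7 levels for n = 3 bridges
```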
Early exposure to children in family and day care as related to adult asthma and hay fever: results from the European Community Respiratory Health Survey.
BACKGROUND The literature indicates that early exposure to children in the family and to day care permanently influences the development of allergic disease. A study was undertaken to examine the associations of family size and day care with adult asthma and hay fever and to determine whether these associations are mediated through specific IgE production and whether they vary with allergic predisposition. METHODS 18,530 subjects aged 20-44 years from 36 areas predominantly in the market economies participated in the European Community Respiratory Health Survey and provided information through interviewer-led questionnaires. 13,932 subjects gave blood samples for measurement of specific IgE. RESULTS Hay fever was less common in subjects with many siblings (OR=0.92; 95% CI 0.90 to 0.95 per sibling). There was a U-shaped relationship between asthma and number of siblings (quadratic effect of siblings, p(wheeze)=0.014, p(FEV1)=0.016). In subjects without siblings but exposed to children in day care, hay fever was less common (OR=0.76; 95% CI 0.60 to 0.98) and asthma symptoms were more common (OR for wheeze=1.48; 95% CI 1.12 to 1.95). Adjustment for specific IgEs did not alter these associations. The inverse association of hay fever with siblings was found in sensitised subjects (OR=0.89; 95% CI 0.84 to 0.94) and in those with parental allergy (OR=0.91; 95% CI 0.85 to 0.97), but not in subjects without such a predisposition (OR=1.02; 95% CI 0.97 to 1.09). CONCLUSION Subjects exposed to many children at home or in day care experienced less hay fever and more asthma in adulthood. Microbial challenge through children may contribute to a non-allergic immunological development giving less hay fever but more airways infections predisposing to asthma. These effects were not mediated through production of specific IgE. The protective effect of siblings on hay fever was particularly strong in those with an allergic predisposition.
The Social Explanatory Styles Questionnaire: Assessing Moderators of Basic Social-Cognitive Phenomena Including Spontaneous Trait Inference, the Fundamental Attribution Error, and Moral Blame
Why is he poor? Why is she failing academically? Why is he so generous? Why is she so conscientious? Answers to such everyday questions--social explanations--have powerful effects on relationships at the interpersonal and societal levels. How do people select an explanation in particular cases? We suggest that, often, explanations are selected based on the individual's pre-existing general theories of social causality. More specifically, we suggest that over time individuals develop general beliefs regarding the causes of social events. We refer to these beliefs as social explanatory styles. Our goal in the present article is to offer and validate a measure of individual differences in social explanatory styles. Accordingly, we offer the Social Explanatory Styles Questionnaire (SESQ), which measures three independent dimensions of social explanatory style: Dispositionism, historicism, and controllability. Studies 1-3 examine basic psychometric properties of the SESQ and provide positive evidence regarding internal consistency, factor structure, and both convergent and divergent validity. Studies 4-6 examine predictive validity for each subscale: Does each explanatory dimension moderate an important phenomenon of social cognition? Results suggest that they do. In Study 4, we show that SESQ dispositionism moderates the tendency to make spontaneous trait inferences. In Study 5, we show that SESQ historicism moderates the tendency to commit the Fundamental Attribution Error. Finally, in Study 6 we show that SESQ controllability predicts polarization of moral blame judgments: Heightened blaming toward controllable stigmas (assimilation), and attenuated blaming toward uncontrollable stigmas (contrast). Decades of research suggest that explanatory style regarding the self is a powerful predictor of self-functioning. We think it is likely that social explanatory styles--perhaps comprising interactive combinations of the basic dimensions tapped by the SESQ--will be similarly potent predictors of social functioning. We hope the SESQ will be a useful tool for exploring that possibility.
Deep Patient Similarity Learning for Personalized Healthcare
Predicting patients' risk of developing certain diseases is an important research topic in healthcare. Accurately identifying and ranking the similarity among patients based on their historical records is a key step in personalized healthcare. Electronic health records (EHRs), which are irregularly sampled and have varied patient visit lengths, cannot be directly used to measure patient similarity due to the lack of an appropriate representation. Moreover, an effective approach is needed to measure patient similarity from EHRs. In this paper, we propose two novel deep similarity learning frameworks which simultaneously learn patient representations and measure pairwise similarity. We use a convolutional neural network (CNN) to capture locally important information in EHRs and then feed the learned representation into a triplet loss or a softmax cross-entropy loss. After training, we can obtain pairwise distances and similarity scores. Utilizing the similarity information, we then perform disease prediction and patient clustering. Experimental results show that the CNN can better represent the longitudinal EHR sequences, and that our proposed frameworks outperform state-of-the-art distance metric learning methods.
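A minimal sketch of the CNN-plus-triplet-loss variant described here, assuming EHR visits arrive as fixed-length sequences of code embeddings; the architecture, sizes and training snippet are illustrative, not the paper's configuration.

```python
# Sketch of CNN-based patient representation learning with a triplet loss.
# Anchor and positive share a disease label; the negative does not.
import torch
import torch.nn as nn

class PatientCNN(nn.Module):
    def __init__(self, emb_dim=64, out_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),   # pool over visits -> one vector per patient
        )
        self.fc = nn.Linear(128, out_dim)

    def forward(self, x):              # x: (batch, seq_len, emb_dim)
        h = self.conv(x.transpose(1, 2)).squeeze(-1)
        return self.fc(h)

model = PatientCNN()
triplet = nn.TripletMarginLoss(margin=1.0, p=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

anchor, positive, negative = (torch.randn(8, 50, 64) for _ in range(3))
loss = triplet(model(anchor), model(positive), model(negative))
opt.zero_grad(); loss.backward(); opt.step()
```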
Controlled manipulation of mode splitting in an optical microcavity by two Rayleigh scatterers.
We report controlled manipulation of mode splitting in an optical microresonator coupled to two nanoprobes. It is demonstrated that, by controlling the positions of the nanoprobes, the split modes can be tuned simultaneously or individually and can experience crossing or anti-crossing in frequency and linewidth. A tunable transition between a standing-wave mode and a travelling-wave mode is also observed. The underlying physics is discussed by developing a two-scatterer model which can be extended to multiple scatterers. The observed rich dynamics and tunability of split modes in a single microresonator will find immediate applications in optical sensing, opto-mechanics and filters, and will provide a platform to study strong light-matter interactions in two-mode cavities.
Deep Generative Adversarial Networks for Compressed Sensing Automates MRI
Magnetic resonance image (MRI) reconstruction is a severely ill-posed linear inverse task demanding time- and resource-intensive computations that can substantially trade off accuracy for speed in real-time imaging. In addition, state-of-the-art compressed sensing (CS) analytics are not cognizant of the image diagnostic quality. To cope with these challenges we put forth a novel CS framework that permeates benefits from generative adversarial networks (GAN) to train a (low-dimensional) manifold of diagnostic-quality MR images from historical patients. Leveraging a mixture of least-squares (LS) GANs and a pixel-wise ℓ1 cost, a deep residual network with skip connections is trained as the generator that learns to remove the aliasing artifacts by projecting onto the manifold. The LSGAN learns the texture details, while the ℓ1 cost controls the high-frequency noise. A multilayer convolutional neural network is then jointly trained based on diagnostic quality images to discriminate the projection quality. The test phase performs feed-forward propagation over the generator network, which demands a very low computational overhead. Extensive evaluations are performed on a large contrast-enhanced MR dataset of pediatric patients. In particular, image ratings by expert radiologists corroborate that GANCS retrieves high-contrast images with detailed texture relative to conventional CS and pixel-wise schemes. In addition, it offers reconstruction in a few milliseconds, two orders of magnitude faster than state-of-the-art CS-MRI schemes.
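The mixed LSGAN and pixel-wise ℓ1 generator objective can be sketched as below; the tiny networks are stand-ins for the paper's residual generator and CNN discriminator, the loss weights are assumptions, and the discriminator update is omitted.

```python
# Sketch of the mixed LSGAN + pixel-wise L1 generator objective.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1))            # de-aliasing generator (stand-in)
D = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Flatten(), nn.LazyLinear(1))             # quality discriminator (stand-in)
mse, l1 = nn.MSELoss(), nn.L1Loss()
lam_adv, lam_pix = 0.1, 1.0                                   # illustrative weights

def generator_loss(zero_filled, fully_sampled):
    recon = G(zero_filled)
    d_out = D(recon)
    adv = mse(d_out, torch.ones_like(d_out))                  # LS-GAN: push D(recon) toward 1
    pix = l1(recon, fully_sampled)                            # L1 keeps high-frequency noise down
    return lam_adv * adv + lam_pix * pix

x_zf, x_gt = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
loss = generator_loss(x_zf, x_gt)
```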
Meta-Learning via Feature-Label Memory Network
Deep learning typically requires training a very capable architecture using a large dataset. However, many important learning problems demand an ability to draw valid inferences from a small dataset, and such problems pose a particular challenge for deep learning. In this regard, research on "meta-learning" is being actively conducted. Recent work has suggested a Memory Augmented Neural Network (MANN) for meta-learning. MANN is an implementation of a Neural Turing Machine (NTM) with the ability to rapidly assimilate new data in its memory, and to use this data to make accurate predictions. In models such as the MANN, the input data samples and their appropriate labels from the previous step are bound together in the same memory locations. This often leads to memory interference when performing a task, as these models have to retrieve the feature of an input from a certain memory location and read only the label information bound to that location. In this paper, we address this issue by presenting a more robust MANN. We revisit the idea of meta-learning and propose a new memory augmented neural network that explicitly splits the external memory into feature and label memories. The feature memory is used to store the features of input data samples and the label memory stores their labels. Hence, when predicting the label of a given input, the memory augmented network with separate feature and label memory units uses the feature memory unit as a reference to extract the stored feature of the input, and based on that feature, it retrieves the label information of the input from the label memory unit. In order for the network to function in this framework, we design a new memory-writing module that encodes label information into the label memory in accordance with the meta-learning task structure. We demonstrate that the proposed memory-augmented network outperforms MANN by a large margin in supervised one-shot classification tasks using the Omniglot and MNIST datasets.
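A toy illustration of the split feature/label memory read, assuming a cosine-softmax addressing scheme; it omits the NTM-style controller and the paper's writing schedule, so it is only a sketch of the memory layout, not the full model.

```python
# Features are matched in a feature memory; the resulting attention weights
# then read the label memory. Sizes and the cosine-softmax read are assumptions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def read_label(query_feat, feat_mem, label_mem, temperature=10.0):
    """feat_mem: (slots, d); label_mem: (slots, n_classes) one-hot rows."""
    sims = feat_mem @ query_feat / (
        np.linalg.norm(feat_mem, axis=1) * np.linalg.norm(query_feat) + 1e-8)
    w = softmax(temperature * sims)      # attention over memory slots
    return w @ label_mem                 # predicted class distribution

def write(feat_mem, label_mem, slot, feat, label_onehot):
    """Store a sample's feature and its label at the same slot index
    of the two separate memories."""
    feat_mem[slot] = feat
    label_mem[slot] = label_onehot
    return feat_mem, label_mem
```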
Action-driven 3D indoor scene evolution
We introduce a framework for action-driven evolution of 3D indoor scenes, where the goal is to simulate how scenes are altered by human actions, and specifically, by object placements necessitated by the actions. To this end, we develop an action model with each type of action combining information about one or more human poses, one or more object categories, and spatial configurations of objects belonging to these categories which summarize the object-object and object-human relations for the action. Importantly, all these pieces of information are learned from annotated photos. Correlations between the learned actions are analyzed to guide the construction of an action graph. Starting with an initial 3D scene, we probabilistically sample a sequence of actions from the action graph to drive progressive scene evolution. Each action triggers appropriate object placements, based on object co-occurrences and spatial configurations learned for the action model. We show results of our scene evolution that lead to realistic and messy 3D scenes, as well as quantitative evaluations by user studies which compare our method to manual scene creation and state-of-the-art, data-driven methods, in terms of scene plausibility and naturalness.
Assessment of oxidative stress in saliva of children with dental erosion
OBJECTIVE To evaluate oxidative stress in the saliva of children with dental erosion as compared to children with no erosion. METHODS A single examiner, trained to diagnose dental erosion according to the Basic Erosive Wear Examination index, selected 40 children aged 4 to 6 years who attended a pediatric dentistry prevention clinic. Two groups were formed - one comprising children with dental erosion (n=22), and another with no dental erosion (n=18). The quantity of dental biofilm was verified using the Simplified Oral Hygiene Index, and unstimulated saliva was collected for biochemical analyses. The following were assessed in saliva: flow rate, buffering capacity, pH, and total protein concentration. Malondialdehyde levels were also measured to determine oxidative stress, together with total antioxidant status. RESULTS The quantity of biofilm (mean±standard deviation) was smaller in children with dental erosion (0.76±0.25) than in those with no dental erosion (1.18±0.28). There was no statistical difference in salivary oxidative stress parameters in children with dental erosion. CONCLUSION The activity of oxidative stress in saliva did not influence the dental erosion process in its early stages.
A survey: Several technologies of non-orthogonal transmission for 5G
One key advantage of the 4G OFDM system is its relatively simple receiver implementation due to the orthogonal resource allocation. However, from the sum-capacity and spectral-efficiency points of view, orthogonal systems are not the capacity-achieving schemes. With the rapid development of mobile communication systems, the concept of non-orthogonal transmission for 5G mobile communications has attracted researchers all around the world. Following this trend, many new multiple access schemes and waveform modulation technologies have been proposed. In this paper, some promising ones are discussed, including Non-orthogonal Multiple Access (NOMA), Sparse Code Multiple Access (SCMA), Multi-user Shared Access (MUSA) and Pattern Division Multiple Access (PDMA), as well as several new waveforms, including Filter-bank based Multicarrier (FBMC), Universal Filtered Multi-Carrier (UFMC) and Generalized Frequency Division Multiplexing (GFDM). By analyzing and comparing the features of these technologies, research directions to guide future 5G multiple access and waveform design are given.
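As a concrete illustration of the power-domain NOMA idea surveyed here, the sketch below superposes two BPSK users and recovers the near user with successive interference cancellation; the power split, channel and noise level are made up for illustration.

```python
# Two-user power-domain NOMA toy example with BPSK and SIC.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
b_far, b_near = rng.integers(0, 2, n), rng.integers(0, 2, n)
s_far, s_near = 2 * b_far - 1.0, 2 * b_near - 1.0      # BPSK symbols

p_far, p_near = 0.8, 0.2                               # more power to the far (weak) user
x = np.sqrt(p_far) * s_far + np.sqrt(p_near) * s_near  # superposition coding

y = x + 0.1 * rng.standard_normal(n)                   # near user's received signal

# SIC at the near user: decode the far user's symbol first, subtract it,
# then decode its own symbol from the residual.
s_far_hat = np.sign(y)
residual = y - np.sqrt(p_far) * s_far_hat
b_near_hat = (np.sign(residual) + 1) / 2

print("near-user BER:", np.mean(b_near_hat != b_near))
```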
Synopsis of Theorizing Crime and Deviance: A New Perspective
Synopsis of Theorizing Crime and Deviance: A New Perspective (Sage Publications, 2012). Theorizing Crime and Deviance is, as the subtitle suggests, an attempt to outline a new perspective in criminological theory. This new perspective makes no claims to be a 'general theory' or an 'integrated theory' of crime causation, if integration is defined as a broad synthesis of existing theories as the first move towards a unified theory. The aim is to lay some initial foundations on which an alternative theoretical framework with some explanatory power and developmental potential can be constructed. Whether it can be integrated with the mainstream canon remains to be seen. The book begins with the claim that post-war criminological theory on the left side of the political divide has been avoiding what should be criminology's primary question: 'why individuals or corporate bodies are willing to risk the infliction of harm on others in order to further their own instrumental or expressive interests' (p.1). Criminologists on the conservative/classical liberal side of the political divide do not shirk this aetiological duty, but their general discourse suffers from four fatal flaws: firstly, they do not explore the ontological field of harm to challenge orthodox legal definitions of crime; secondly, they tend to ignore crimes of the powerful and instead focus disproportionately on crimes of the powerless; thirdly, they tend to use rather crude positivist methods and categories that cannot capture the complexity of social life; fourthly, they regard human nature as prone to 'evil', which of course means that, rather conveniently, they don't really need any aetiological theories of subjectivity over and above those concerned with the maintenance of discipline and socialisation. Left-wing (or 'liberal', in US parlance) criminology's flight from aetiology has left it rather vulnerable to criticism. Its critics find it too easy to say, with some justification, that it cannot recognise or explain its own principal object, expressed as the primary question above. If it cannot answer this question, how can the population have any faith in its ability to lead society on a path away from the harms caused by crime and other corrupt practices? This inability also creates a huge vacuum into which right-wing intellectual and political contenders flow with glee, offering populist explanations and punitive solutions. Politically, left-liberal criminology shoots itself in the foot with its own constant vacillation around its primary object. When crime rates increased in the 1960s and 1970s despite increases in freedom and affluence and the (albeit temporary) truncation of social inequality, which refuted the liberal left's principal explanations of relative deprivation and repression, criminology entered what Jock Young called its 'aetiological crisis'. There appeared to be a pressing need to return to the questions of causality and motivation. However, the liberal left, which still retains the annoying habit of sneering at the very idea of causality as an affront to its deconstructive sophistication, was not up to the job. This palpable failure allowed US and British right realists to lever themselves back into criminology's intellectual driving seat in the 1980s to join right-wing economists in the corridors of power. 
Right realism supplied neoliberal governments with intellectual support for the massive incarceration and private/public securitisation programme that remains with us today, and which must be considered as one of the main reasons behind the recent statistical 'crime decline'. The right won the day, and part of the reason why was that the left had quit, at least on the aetiological battlefield. Thus the right can continue to advance their favourite primary causes of personal choice, evil individuals and irresponsible parenting because the liberal left, running scared of aetiology and fixated on problems of linguistic definition and the attenuation of social reaction, have few if any roadworthy alternatives. …
Towards a classification of Euler-Kirchhoff filaments
Euler-Kirchhoff filaments are solutions of the static Kirchhoff equations for elastic rods with circular cross-sections. These equations are known to be formally equivalent to the Euler equations for spinning tops. This equivalence is used to provide a classification of the different shapes a filament can assume. Explicit formulas for the different possible configurations and specific results for interesting particular cases are given. In particular, conditions for which the filament has points of self-intersection, self-tangency, vanishing curvature or when it is closed or localized in space are provided. The average properties of generic filaments are also studied. They are shown to be equivalent to helical filaments on long length scales.
Feature Generation Using General Constructor Functions
Most classification algorithms receive as input a set of attributes of the classified objects. In many cases, however, the supplied set of attributes is not sufficient for creating an accurate, succinct and comprehensible representation of the target concept. To overcome this problem, researchers have proposed algorithms for automatic construction of features. The majority of these algorithms use a limited predefined set of operators for building new features. In this paper we propose a generalized and flexible framework that is capable of generating features from any given set of constructor functions. These can be domain-independent functions such as arithmetic and logic operators, or domain-dependent operators that rely on partial knowledge on the part of the user. The paper describes an algorithm which receives as input a set of classified objects, a set of attributes, and a specification for a set of constructor functions that contains their domains, ranges and properties. The algorithm produces as output a set of generated features that can be used by standard concept learners to create improved classifiers. The algorithm maintains a set of its best generated features and improves this set iteratively. During each iteration, the algorithm performs a beam search over its defined feature space and constructs new features by applying constructor functions to the members of its current feature set. The search is guided by general heuristic measures that are not confined to a specific feature representation. The algorithm was applied to a variety of classification problems and was able to generate features that were strongly related to the underlying target concepts. These features also significantly improved the accuracy achieved by standard concept learners, for a variety of classification problems.
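A simplified version of the constructor-function loop might look like the following, with absolute correlation to the label standing in for the paper's general heuristic measures and a plain beam search over pairs of features; the constructor set and beam parameters are illustrative.

```python
# Apply user-supplied constructor functions to the current beam of features
# and keep the best-scoring candidates.
import numpy as np
from itertools import combinations

def score(feature, y):
    if np.std(feature) == 0:
        return 0.0
    return abs(np.corrcoef(feature, y)[0, 1])

def generate_features(X, y, constructors, beam_width=5, iterations=3):
    """X: (n_samples, n_attrs); constructors: dict name -> binary function."""
    beam = [X[:, j] for j in range(X.shape[1])]
    for _ in range(iterations):
        candidates = list(beam)
        for a, b in combinations(beam, 2):
            for _name, fn in constructors.items():
                candidates.append(fn(a, b))
        candidates.sort(key=lambda f: score(f, y), reverse=True)
        beam = candidates[:beam_width]
    return beam

constructors = {"add": np.add, "sub": np.subtract, "mul": np.multiply}
X, y = np.random.rand(200, 4), np.random.randint(0, 2, 200)
new_feats = generate_features(X, y, constructors)
```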
Random Forest Classifiers: A Survey and Future Research Directions
Random Forest is an ensemble supervised machine learning technique. Machine learning techniques have applications in the area of Data mining. Random Forest has tremendous potential of becoming a popular technique for future classifiers because its performance has been found to be comparable with ensemble techniques bagging and boosting. Hence, an in-depth study of existing work related to Random Forest will help to accelerate research in the field of Machine Learning. This paper presents a systematic survey of work done in Random Forest area. In this process, we derived Taxonomy of Random Forest Classifier which is presented in this paper. We also prepared a Comparison chart of existing Random Forest classifiers on the basis of relevant parameters. The survey results show that there is scope for improvement in accuracy by using different split measures and combining functions; and in performance by dynamically pruning a forest and estimating optimal subset of the forest. There is also scope for evolving other novel ideas for stream data and imbalanced data classification, and for semi-supervised learning. Based on this survey, we finally presented a few future research directions related to Random Forest classifier.
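For readers new to the technique, a minimal scikit-learn usage example is given below; the split criterion and forest size are exactly the kind of knobs ("split measures", forest subset size) the survey discusses, and the dataset is just a standard toy set.

```python
# Minimal Random Forest example with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

forest = RandomForestClassifier(n_estimators=200, criterion="gini",
                                max_features="sqrt", random_state=42)
forest.fit(X_tr, y_tr)
print("test accuracy:", forest.score(X_te, y_te))
```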
Neural Networks applied to wireless communications
This paper presents a time-delayed neural network (TDNN) model that has the capability of learning and predicting the dynamic behavior of the nonlinear elements that compose a wireless communication system. This model could help speed up system deployment by reducing modeling time. The paper presents results of an effective application of the TDNN model to an amplifier that is part of a wireless transmitter.
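A sketch of the tapped-delay-line idea behind a TDNN: the current sample plus a few delayed samples feed a small regressor. The toy static nonlinearity standing in for the amplifier, the delay depth and the layer sizes are all illustrative assumptions.

```python
# Time-delayed input: rows are [x[n], x[n-1], ..., x[n-D]] feeding an MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

def delay_embed(x, depth):
    """Rows are [x[n], x[n-1], ..., x[n-depth]] for n >= depth."""
    rows = [x[depth - d: len(x) - d] for d in range(depth + 1)]
    return np.stack(rows, axis=1)

delay = 4
x_in = np.random.randn(5000)                # "amplifier" input samples
y_out = np.tanh(1.5 * x_in)                 # toy nonlinearity standing in for the device

X = delay_embed(x_in, delay)
y = y_out[delay:]

tdnn = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=500, random_state=0)
tdnn.fit(X, y)
print("training R^2:", tdnn.score(X, y))
```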
Why Knowledge Management Systems Fail? Enablers and Constraints of Knowledge Management in Human Enterprises
Drawing upon lessons learned from the biggest failure of knowledge management in recent world history and the debacle of the 'new economy' enterprises, this chapter explains why knowledge management systems (KMS) fail and how risk of such failures may be minimized. The key thesis is that enablers of KMS designed for the 'knowledge factory' engineering paradigm often unravel and become constraints in adapting and evolving such systems for business environments characterized by high uncertainty and radical discontinuous change. Design of KMS should ensure that adaptation and innovation of business performance outcomes occurs in alignment with changing dynamics of the business environment. Simultaneously, conceiving multiple future trajectories of the information technology and human inputs embedded in the KMS can diminish the risk of rapid obsolescence of such systems. Envisioning business models not only in terms of knowledge harvesting processes for seeking optimization and efficiencies, but in combination with ongoing knowledge creation processes would ensure that organizations not only succeed in doing the thing right in the short term but also in doing the right thing in the long term. Embedding both these aspects in enterprise business models as simultaneous and parallel sets of knowledge processes instead of treating them in isolation would facilitate ongoing innovation of business value propositions and customer value propositions.
Story extraction from the Web: A case study in security informatics
Open source intelligence is becoming more and more important in security-related applications. Effective extraction of valuable information from these intelligence sources, so as to understand the content of the intelligence, is one of the central issues in the security domain. To address this key problem, this paper studies how to extract domain events and generate story representations of the events. We implement a story extraction platform (SEP) which applies pattern matching to extract events from Web news and organizes events by theme using narrative structure. We also design extraction rules, use domain-specific features and employ an ontology to facilitate story extraction in SEP. The experimental results show the effectiveness of our system in security informatics.
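A toy pattern-matching extractor in the spirit of SEP might look as follows; the regular expressions, slot names and example sentence are invented for illustration and are not the platform's actual rule set.

```python
# Hand-written patterns pull (actor, action, target, location) tuples out of
# news sentences, which can then be grouped by theme into a story.
import re

EVENT_PATTERNS = [
    re.compile(r"(?P<actor>[A-Z][\w ]+?) (?P<action>attacked|bombed|seized) "
               r"(?P<target>[\w ]+?) in (?P<location>[A-Z]\w+)"),
]

def extract_events(sentences):
    events = []
    for s in sentences:
        for pat in EVENT_PATTERNS:
            m = pat.search(s)
            if m:
                events.append(m.groupdict())
    return events

news = ["Group X attacked a convoy in Cityville on Monday."]
print(extract_events(news))
```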
A typology for the classification, description and valuation of ecosystem functions, goods and services
An increasing amount of information is being collected on the ecological and socio-economic value of goods and services provided by natural and semi-natural ecosystems. However, much of this information appears scattered throughout a disciplinary academic literature, unpublished government agency reports, and across the World Wide Web. In addition, data on ecosystem goods and services often appears at incompatible scales of analysis and is classified differently by different authors. In order to make comparative ecological economic analysis possible, a standardized framework for the comprehensive assessment of ecosystem functions, goods and services is needed. In response to this challenge, this paper presents a conceptual framework and typology for describing, classifying and valuing ecosystem functions, goods and services in a clear and consistent manner. In the following analysis, a classification is given for the fullest possible range of 23 ecosystem functions that provide a much larger number of goods and services. In the second part of the paper, a checklist and matrix is provided, linking these ecosystem functions to the main ecological, socio-cultural and economic valuation methods.
Corpora – the best possible solution for tracking rare phenomena in under-resourced languages: clitics in Bosnian, Croatian and Serbian
Complex linguistic phenomena, such as Clitic Climbing in Bosnian, Croatian and Serbian, are often described intuitively, only from the perspective of the main tendency. In this paper, we argue that web corpora currently offer the best source of empirical material for studying Clitic Climbing in BCS. They thus allow the most accurate description of this phenomenon, as less frequent constructions can be tracked only in big, well-annotated data sources. We compare the properties of web corpora for BCS with traditional sources and give examples of studies on CC based on web corpora. Furthermore, we discuss problems related to web corpora and suggest some improvements for
Ethical Hacking: The Security Justification
The state of security on the Internet is poor and progress toward increased protection is slow. This has given rise to a class of action referred to as "Ethical Hacking". Companies are releasing software with little or no testing and no formal verification, and expecting consumers to debug their products for them. For dot.com companies time-to-market is vital, security is not perceived as a marketing advantage, and implementing a secure design process is seen as an expensive sunk cost, so there is no economic incentive to produce bug-free software. There are even legislative initiatives to release software manufacturers from legal responsibility for their defective software.
Significant mitral regurgitation left untreated at the time of aortic valve replacement: a comprehensive review of a frequent entity in the transcatheter aortic valve replacement era.
Significant mitral regurgitation (MR) is frequent in patients with severe aortic stenosis (AS). In these cases, concomitant mitral valve repair or replacement is usually performed at the time of surgical aortic valve replacement (SAVR). Transcatheter aortic valve replacement (TAVR) has recently been considered as an alternative for patients at high or prohibitive surgical risk. However, concomitant significant MR in this setting is typically left untreated. Moderate to severe MR after aortic valve replacement is therefore a relevant entity in the TAVR era. The purpose of this review is to present the current knowledge on the clinical impact and post-procedural evolution of concomitant significant MR in patients with severe AS who have undergone aortic valve replacement (SAVR and TAVR). This information could contribute to improving both the clinical decision-making process in and management of this challenging group of patients.
Social media addiction: What is the role of content in YouTube?
Background YouTube, the online video creation and sharing site, supports both video content viewing and content creation activities. For a minority of people, the time spent engaging with YouTube can be excessive and potentially problematic. Method This study analyzed the relationship between content viewing, content creation, and YouTube addiction in a survey of 410 Indian-student YouTube users. It also examined the influence of content, social, technology, and process gratifications on user inclination toward YouTube content viewing and content creation. Results The results demonstrated that content creation in YouTube had a closer relationship with YouTube addiction than content viewing. Furthermore, social gratification was found to have a significant influence on both types of YouTube activities, whereas technology gratification did not significantly influence them. Among all perceived gratifications, content gratification had the highest relationship coefficient value with YouTube content creation inclination. The model fit and variance extracted by the endogenous constructs were good, which further validated the results of the analysis. Conclusion The study facilitates new ways to explore user gratification in using YouTube and how the channel responds to it.
Effectiveness of Japanese SHARE model in improving Taiwanese healthcare personnel's preference for cancer truth telling.
BACKGROUND Communication skills training (CST) based on the Japanese SHARE model of family-centered truth telling in Asian countries has been adopted in Taiwan. However, its effectiveness in Taiwan has only been preliminarily verified. This study aimed to test the effect of SHARE model-centered CST on Taiwanese healthcare providers' truth-telling preference, to determine the effect size, and to compare the effect of 1-day and 2-day CST programs on participants' truth-telling preference. METHOD For this one-group, pretest-posttest study, 10 CST programs were conducted from August 2010 to November 2011 under certified facilitators and with standard patients. Participants (257 healthcare personnel from northern, central, southern, and eastern Taiwan) chose the 1-day (n = 94) or 2-day (n = 163) CST program as convenient. Participants' self-reported truth-telling preference was measured before and immediately after CST programs, with CST program assessment afterward. RESULTS The CST programs significantly improved healthcare personnel's truth-telling preference (mean pretest and posttest scores ± standard deviation (SD): 263.8 ± 27.0 vs. 281.8 ± 22.9, p < 0.001). The CST programs effected a significant, large (d = 0.91) improvement in overall truth-telling preference and significantly improved method of disclosure, emotional support, and additional information (p < 0.001). Participation in 1-day or 2-day CST programs did not significantly affect participants' truth-telling preference (p > 0.05) except for the setting subscale. Most participants were satisfied with the CST programs (93.8%) and were willing to recommend them to colleagues (98.5%). CONCLUSIONS The SHARE model-centered CST programs significantly improved Taiwanese healthcare personnel's truth-telling preference. Future studies should objectively assess participants' truth-telling preference, for example, by cancer patients, their families, and other medical team personnel and at longer times after CST programs.
Reducing MOSFET 1/f noise and power consumption by switched biasing
Switched biasing is proposed as a technique for reducing the 1/f noise in MOSFET's. Conventional techniques, such as chopping or correlated double sampling, reduce the effect of 1/f noise in electronic circuits, whereas the switched biasing technique reduces the 1/f noise itself. Whereas noise reduction techniques generally lead to more power consumption, switched biasing can reduce the power consumption. It exploits an intriguing physical effect: cycling a MOS transistor from strong inversion to accumulation reduces its intrinsic 1/f noise. As the 1/f noise is reduced at its physical roots, high frequency circuits, in which 1/f noise is being upconverted, can also benefit. This is demonstrated by applying switched biasing in a 0.8 /spl mu/m CMOS sawtooth oscillator. By periodically switching off the bias currents, during time intervals that they are not contributing to the circuit operation, a reduction of the 1/f noise induced phase noise by more than 8 dB is achieved, while the power consumption is also reduced by 30%.
Intel® Software Guard Extensions (Intel® SGX) Software Support for Dynamic Memory Allocation inside an Enclave
SGX2 extends the Intel® Software Guard Extensions (Intel® SGX) instruction set and enables software developers to dynamically manage memory within the SGX environment. This paper reviews the current SGX Software RunTime Environment and proposes additions to the framework which will allow developers to benefit from features enabled by SGX2 such as dynamic heap management, stack expansion, and thread context creation.
A 0.18μm CMOS fully integrated RFDAC and VGA for WCDMA transmitters
A digital IF-to-RF converter architecture with an RF VGA targeted for WCDMA transmitters is presented. The RFDAC consists of a 1.5-bit current-steering DAC with an embedded semi-digital FIR filter and a mixer with an LC load. The VGA controls the output power in 10 dB steps. The prototype is fabricated in 0.18 μm CMOS technology and achieves 0 dBm output power at 2 GHz and 33 dB image rejection. The measured rms EVM and ACPR show the system fulfills the standard specifications for WCDMA. The chip draws 123.9 mA from a 1.8 V supply. The die area is 0.8 × 1.6 mm².
Detection of acute kidney injury in premature asphyxiated neonates by serum neutrophil gelatinase-associated lipocalin (sNGAL) – sensitivity and specificity of a potential new biomarker
INTRODUCTION Acute kidney injury (AKI) is common in neonatal intensive care units (NICU). In recent years, every effort is made for early detection of AKI. Our hypothesis was that serum neutrophil gelatinase-associated lipocalin (sNGAL) may be a reliable screening test for early diagnosis of AKI in premature neonates after perinatal asphyxia. Therefore, our aim was to assess the diagnostic accuracy of sNGAL for AKI in premature asphyxiated neonates. MATERIALS AND METHODS AKI was defined in the third day of life (DOL 3) as a serum creatinine (sCr) increase ≥26.5 μmol/L from baseline (the lowest previous sCr). According to the increase of sCr, AKI patients were divided in AKIN1 (sCr increase up to 1.9 baseline) and AKIN2 (sCr increase from 2.0 to 2.9 baseline). sNGAL levels were measured on DOL 1, 3 and 7. RESULTS AKI was diagnosed in 73 (0.676) of 108 enrolled premature asphyxiated neonates. Sixty one patients (0.836) were classified in AKIN1 and 12 patients (0.164) in AKIN2. sNGAL reached the maximal concentrations on DOL 1 within 4 hours after admission to NICU, being higher in AKI compared with no-AKI group (160.8±113.1 vs. 87.1±81.6; P<0.001) as well as in AKIN2 compared with AKIN1 group (222.8±112.9 vs. 147.8±109.9; P<0.001). The best areas under the receiver operating characteristic curves (AUC) for prediction of AKI were 0.72 [95% (0.62-0.80) P<0.001] on DOL1 at 2h and 0.72 [95% (0.63-0.80) P<0.001] at 4th hour after admission respectively. The corresponding sNGAL cutoff concentrations were 84.87 ng/mL (sensitivity 69.0% and specificity 71.9%) and 89.43 ng/mL (sensitivity 65.7% and specificity 74.3%). CONCLUSIONS In premature asphyxiated neonates sNGAL measured within the first 4 hours of DOL 1 is predictive of the occurrence and severity of AKI. Therefore, plasma levels of NGAL may be used for early diagnosis of AKI in these patients.
Parametric and Non-parametric User-aware Sentiment Topic Models
The popularity of Web 2.0 has resulted in a large number of publicly available online consumer reviews created by a demographically diverse user base. Information about the authors of these reviews, such as age, gender and location, provided by many on-line consumer review platforms may allow companies to better understand the preferences of different market segments and improve their product design, manufacturing processes and marketing campaigns accordingly. However, previous work in sentiment analysis has largely ignored these additional user meta-data. To address this deficiency, in this paper, we propose parametric and non-parametric User-aware Sentiment Topic Models (USTM) that incorporate demographic information of review authors into topic modeling process in order to discover associations between market segments, topical aspects and sentiments. Qualitative examination of the topics discovered using USTM framework in the two datasets collected from popular online consumer review platforms as well as quantitative evaluation of the methods utilizing those topics for the tasks of review sentiment classification and user attribute prediction both indicate the utility of accounting for demographic information of review authors in opinion mining.
User Interface Tools for Navigation in Conditional Probability Tables and Elicitation of Probabilities in Bayesian Networks
Elicitation of probabilities is one of the most laborious tasks in building decision-theoretic models, and one that has so far received only moderate attention in decision-theoretic systems. We propose a set of user interface tools for graphical probabilistic models, focusing on two aspects of probability elicitation: (1) navigation through conditional probability tables and (2) interactive graphical assessment of discrete probability distributions. We propose two new graphical views that aid navigation in very large conditional probability tables: the CPTREE (Conditional Probability Tree) and the sCPT (shrinkable Conditional Probability Table). Based on what is known about graphical presentation of quantitative data to humans, we offer several useful enhancements to the probability wheel and bar graph, including different chart styles and options that can be adapted to user preferences and needs. We present the results of a simple usability study that proves the value of the proposed tools.
How to Perform a Transrectal Ultrasound Examination of the Lumbosacral and Sacroiliac Joints
There is increasing interest in pathology of the lumbosacral and sacroiliac joints giving rise to stiffness and/or lameness and decreased performance in equine sports medicine. Pain arising from these regions can be problematic alone or in conjunction with lameness arising from other sites (thoracolumbar spine, hind limbs, or forelimbs). Localization of pain to this region is critically important through clinical assessment, diagnostic anesthesia, and imaging. In general, diagnostic imaging of the axial skeleton and pelvis is difficult to perform and to interpret. Radiography is infrequently performed. To obtain good-quality diagnostic radiographs, general anesthesia, a high-output radiographic generator, and special techniques must be performed. Variation in the size and shape of the sacroiliac joints and sacral wings and caudal sacral osteophytes are common; special techniques for taking radiographs have allowed for identification of these structures and the inter-transverse joints. These authors urge caution in the interpretation of lesions identified on radiography in the absence of other diagnostic imaging and clinical examination. Nuclear scintigraphy is an important component of work-up for sacroiliac region pain, but limitations exist. Several reports exist detailing the anatomy and technique findings in normal horses and findings in lame horses. Patient motion, camera positioning, and muscle asymmetry can cause errors in interpretation. In normal horses, the appearance of the sacroiliac region varies with age but is generally symmetric. In horses with sacroiliac problems, it is more difficult to distinguish the tubera sacrale from the sacroiliac joint than in normal horses, and, in horses with lameness, there is more asymmetry detected. Techniques for percutaneous and transrectal ultrasound examination have been described, and
Switching to anastrozole after tamoxifen improves survival in postmenopausal women with breast cancer
DESIGN AND INTERVENTION This prospective, randomized, open-label trial recruited postmenopausal women (age ≤70 years) with histologically verified grade 1–3 invasive breast cancer who had undergone primary surgery (with or without radiotherapy) and had received 2 years of continuous tamoxifen therapy (20 or 30 mg/day) between December 1996 and August 2002. Patients were randomly allocated to either treatment with anastrozole (1 mg/day) or continuation of tamoxifen therapy (20 or 30 mg/day), for 3 years. Patients were monitored for disease recurrence and adverse events every 6 months for 3 years and every 12 months thereafter.
Stunting Is Associated with Food Diversity while Wasting with Food Insecurity among Underfive Children in East and West Gojjam Zones of Amhara Region, Ethiopia
BACKGROUND Food insecurity has detrimental effects on child nutritional status. This study sought to determine the level of child undernutrition and its association with food insecurity. METHODS A community based comparative cross-sectional study involving a multistage sampling technique was implemented from 24th of May to 20th of July 2013. Using the two-population-proportion formula, a total of 4110 randomly selected households were included in the study. Availability of the productive safety net programme was used for grouping the study areas. A multiple linear regression model was used to assess the association between food insecurity and child malnutrition. Clustering effects of localities were controlled for during analysis. RESULTS Stunting (37.5%), underweight (22.0%) and wasting (17.1%) were observed in East Gojjam zone, and 38.3% stunting, 22.5% underweight, and 18.6% wasting in West Gojjam zone. Food insecurity was significantly associated with wasting (β = -0.108, P < 0.05). Food diversity and the number of meals the child ate per day were significantly associated with stunting (β = 0.039, P < 0.01) and underweight (β = 0.035, P < 0.05), respectively. Residential area was a significant predictor of all indices. CONCLUSION The magnitude of child undernutrition was found to be very high in the study areas. Food insecurity was a significant determinant of wasting. Food diversity and the number of meals the child ate per day were significant determinants of stunting and underweight, respectively. Child nutrition intervention strategies should take into account food security and dietary diversity, and should be carefully specified with regard to residential location. Addressing food insecurity is of paramount importance.
Vulvar squamous cell carcinoma treated using the Mohs technique: case and assessment of treatment strategies.
Although it is the most common malignancy of the vulva, squamous cell carcinoma (SCC) of the vulva is an uncommon tumor. Unlike most cutaneous SCC, vulvar SCC is not associated with chronic sun exposure. Invasive SCC of the vulva may arise de novo; may occur in association with chronic inflammatory lesions such as lichen sclerosis, hidradenitis suppurativa, Darier-White disease, and lichen planus; or may result from malignant transformation of human papillomavirus (HPV) infection. Vulvar SCC arising in association with a chronic inflammatory condition or de novo tends to have more aggressive clinical behavior than those that result from HPV infection. We present this case to heighten awareness of the most current recommended treatment algorithms for this uncommon gynecologic tumor and to underscore the role that Mohs micrographic surgery (MMS) can play in such therapy.
Central coherence, organizational strategy, and visuospatial memory in children and adolescents with anorexia nervosa.
The vast majority of studies in anorexia nervosa that have investigated the domains of central coherence, organizational strategy, and visuospatial memory have focused on adult samples. In addition, studies investigating visuospatial memory have focused on free recall. No study to date has reported the association between recognition memory and central coherence or organizational strategy in younger people with this disorder, yet the capacity to recognize previously seen visual stimuli may contribute to overall visuospatial ability. Therefore, we investigate these domains in children and adolescents with anorexia nervosa compared to age- and gender-matched healthy controls. There were no significant group differences in immediate, delayed, or recognition memory, central coherence, or organization strategy. When compared with controls, patients with anorexia nervosa scored significantly higher on accuracy and took significantly longer when copying the Rey Complex Figure Task. Caution must be taken when interpreting these findings due to lower-than-expected scores in memory performance in the control group and because of a potential lack of sensitivity in the measures used when assessing this younger population. For neuropsychological functions where no normative data exist, we need a deeper, more thorough knowledge of the developmental trajectory and its assessment in young people in the general population before drawing conclusions in anorexia nervosa.
Automatic Discovery of Subgoals in Reinforcement Learning using Diverse Density
This paper presents a method by which a reinforcement learning agent can automatically discover certain types of subgoals online. By creating useful new subgoals while learning, the agent is able to accelerate learning on the current task and to transfer its expertise to other, related tasks through the reuse of its ability to attain subgoals. The agent discovers subgoals based on commonalities across multiple paths to a solution. We cast the task of finding these commonalities as a multiple-instance learning problem and use the concept of diverse density to find solutions. We illustrate this approach using several gridworld tasks.
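The noisy-or diverse-density score at the core of the method can be written compactly; the Gaussian instance likelihood, the scaling and the gridworld coordinates below are conventional illustrative choices rather than the paper's exact setup.

```python
# Diverse density for candidate subgoal states: positive bags are successful
# trajectories (each contains at least one "true" subgoal visit), negative
# bags are unsuccessful ones.
import numpy as np

def instance_prob(candidate, instance, scale=1.0):
    d2 = np.sum((np.asarray(candidate) - np.asarray(instance)) ** 2)
    return np.exp(-d2 / scale)

def diverse_density(candidate, positive_bags, negative_bags, scale=1.0):
    dd = 1.0
    for bag in positive_bags:   # noisy-or: at least one instance should match
        dd *= 1.0 - np.prod([1.0 - instance_prob(candidate, x, scale) for x in bag])
    for bag in negative_bags:   # no instance should match
        dd *= np.prod([1.0 - instance_prob(candidate, x, scale) for x in bag])
    return dd

# States visited on two successful episodes and one unsuccessful episode.
pos = [[(1, 1), (2, 3), (4, 4)], [(0, 2), (2, 3), (5, 1)]]
neg = [[(0, 0), (5, 5)]]
candidates = [(x, y) for x in range(6) for y in range(6)]
best = max(candidates, key=lambda c: diverse_density(c, pos, neg))
print("candidate subgoal:", best)   # states common to successful paths score high
```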
FEEDING THE ILLUSION OF GROWTH AND HAPPINESS: A REPLY TO HAGERTY AND VEENHOVEN
If your letter had praised everything of mine, I would not have been as pleased as I am by your attempt to disprove and reject certain points. I regard this as a mark of friendship and the other as one of adulation. But in return I ask you to listen with an open mind to my rebuttal. For what you say, if it were allowed to pass without any reply from me, would be too one-sided. It is always a source of satisfaction to come across an article where one is cited often, especially by two scholars who have contributed so much to advance the study of subjective well-being. Of course, my happiness would have been greater had the references been favorable, rather than unfavorable. (Hereafter I use happiness and satisfaction interchangeably.) I take it that the Hagerty-Veenhoven (hereafter H-V) article (2003) is a rebuttal of my 1995 paper (Easterlin 1995), because there is only one reference to time series results of studies by other scholars done in the almost 10-year period since publication of my article. Indeed, I believe I detect an echo of a similar critique by one of the authors of my 1974 article (cf. Easterlin 1974 and Veenhoven 1991; for comments on the latter, see Easterlin 2004 forthcoming). Apparently the editor and referee(s) of this Journal also viewed the H-V paper as a comment on my 1995 article; otherwise it would be hard to explain the absence of the customary literature review and reconciliation of new and disparate results with those of prior work. It seems appropriate, therefore, to offer a few comments in response, especially since the conclusions of the H-V article will no doubt be cited often as substantially different from my own when, in fact, they are not. I will focus on the time series analysis in the section "Descriptive Statistics of Happiness and Income" (pp. 11-18) which I take to be the heart of their article. Until one is sure about the data, methodology, and results of the time series analysis, hypothesis testing is superfluous. THE UNITED STATES I was quite surprised to find the one country whose data I thought I knew fairly well to be among the seven for whom a significant positive correlation is reported between happiness and income. I had found no significant relationship between happiness and time over a period in which GDP …
Cofabrication of Vacuum Field Emission Transistor (VFET) and MOSFET
Co-fabrication of a nanoscale vacuum field emission transistor (VFET) and a metal-oxide-semiconductor field effect transistor (MOSFET) is demonstrated on a silicon-on-insulator wafer. The insulated-gate VFET with a gap distance of 100 nm is achieved by using a conventional 0.18-μm process technology and a subsequent photoresist ashing process. The VFET shows a turn-on voltage of 2 V at a cell current of 2 nA and a cell current of 3 μA at the operation voltage of 10 V with an ON/OFF current ratio of 10⁴. The gap distance between the cathode and anode in the VFET is defined to be less than the mean free path of electrons in air, and consequently, the operation voltage is reduced to be less than the ionization potential of air molecules. This allows the relaxation of the vacuum requirement. The present integration scheme can be useful as it combines the advantages of both structures on the same chip.
Reduced-intensity conditioned allogeneic haematopoietic stem cell transplantation results in durable disease-free and overall survival in patients with poor prognosis myeloid and lymphoid malignancies: eighty-month follow-up
The long-term outcome of patients with haematological malignancies treated with reduced-intensity conditioned allogeneic peripheral blood stem cell transplantation is not known. We report the outcome of 79 patients with poor-risk myeloid and lymphoid malignancies transplanted with reduced-intensity conditioning (RIC) regimens. The diagnoses include AML/myelodysplastic syndrome (n=43), non Hodgkin's lymphoma (n=30), Hodgkin's lymphoma (n=3), ALL (n=2) and CML (n=1). For the entire cohort, the disease-free survival (DFS) and OS were 61.2 and 35.7%, respectively. Twenty patients relapsed, 18 within the first three years, and 14 patients succumbed to progressive disease. Overall, 31 patients died from transplant-related complications within the first three years. Day 100 non-relapse mortality correlated with a higher total nucleated cell dose in the graft (odds ratio: 3.9). For those in CR at 3 years, the DFS and OS were 84.2 and 81.1%, respectively. Furthermore, of 43 patients with active disease at the time of transplantation, 16 remained in CR after 3 years. The majority of the long-term survivors were functioning independently. One patient died from a second malignancy. No post-transplant lymphoproliferative disorder was seen. In conclusion, durable disease control was achieved after RIC allogeneic stem cell transplantation for patients with advanced myeloid and lymphoid malignancies.
Postural changes in blood pressure and the prevalence of orthostatic hypotension among home-dwelling elderly aged 75 years or older
This cross-sectional analysis of a population-based cohort investigates the postural changes in blood pressure (BP) and heart rate and assesses the prevalence of orthostatic hypotension (OH) and its associations with the medicines used by an elderly population. The study population (n=1000) was a random sample of persons aged 75 years or older in the City of Kuopio, Finland. In 2004, altogether, 781 persons participated in the study. After the exclusion of persons living in institutional care (n=82) and those without orthostatic test (n=46), the final study population comprised 653 home-dwelling elderly persons. OH was defined as a ⩾20 mm Hg drop of systolic BP or a ⩾10 mm Hg drop of diastolic BP or both 1 or 3 min after standing up from supine position. Systolic BP dropped for more than half of the home-dwelling elderly when they stood up from a supine to a standing position. The total prevalence of OH was 34% (n=220). No significant gender or age differences were seen. The prevalence of OH was related to the total number of medicines in regular use (P<0.05). OH and postural changes in BP are more common among the home-dwelling elderly than reported in previous studies. The prevalence of OH is related to the number of medicines in regular use. There is an obvious need to measure orthostatic BP of elderly persons, as low BP and OH are important risk factors especially among the frail elderly persons.
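The OH definition used in this study reduces to a small rule, sketched below for clarity; the example readings are invented.

```python
# Orthostatic hypotension (OH) as defined above: a >=20 mmHg drop in systolic
# or a >=10 mmHg drop in diastolic pressure at 1 or 3 minutes after standing,
# relative to the supine reading.
def has_orthostatic_hypotension(supine_sbp, supine_dbp, standing_readings):
    """standing_readings: iterable of (sbp, dbp) pairs taken at 1 and 3 min."""
    for sbp, dbp in standing_readings:
        if supine_sbp - sbp >= 20 or supine_dbp - dbp >= 10:
            return True
    return False

print(has_orthostatic_hypotension(150, 85, [(135, 80), (128, 78)]))  # True (22 mmHg systolic drop)
```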
TIPS: context-aware implicit user identification using touch screen in uncontrolled environments
Due to the dramatic increase in the popularity of mobile devices in the last decade, more sensitive user information is stored and accessed on these devices every day. However, most existing technologies for user authentication only cover the login stage, or only work in restricted controlled environments or GUIs in the post-login stage. In this work, we present TIPS, a Touch based Identity Protection Service that implicitly and unobtrusively authenticates users in the background by continuously analyzing touch screen gestures in the context of a running application. To the best of our knowledge, this is the first work to incorporate contextual app information to improve user authentication. We evaluate TIPS over data collected from 23 phone owners and deploy it to 13 of them with 100 guest users. TIPS can achieve over 90% accuracy in real-life naturalistic conditions, with a small amount of computational overhead and 6% of battery usage.
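A rough sketch of the classification step, assuming simple per-swipe features plus the foreground app id and a generic classifier; the feature set and model choice are illustrative stand-ins, not the features or classifier TIPS actually uses.

```python
# Per-swipe features plus app context feed a binary owner-vs-guest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def swipe_features(points, app_id):
    """points: array of (t, x, y, pressure) rows for one swipe."""
    pts = np.asarray(points, dtype=float)
    duration = pts[-1, 0] - pts[0, 0]
    path_len = np.sum(np.linalg.norm(np.diff(pts[:, 1:3], axis=0), axis=1))
    disp = np.linalg.norm(pts[-1, 1:3] - pts[0, 1:3])
    avg_pressure = pts[:, 3].mean()
    return [duration, path_len, disp, avg_pressure, float(app_id)]

# X: one feature row per swipe, y: 1 for the phone owner, 0 for guests.
X = np.random.rand(500, 5)
y = np.random.randint(0, 2, 500)
auth = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("owner probability:", auth.predict_proba(X[:1])[0, 1])
```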
Understanding Human Motion and Gestures for Underwater Human-Robot Collaboration
In this paper, we present a number of robust methodologies for an underwater robot to visually detect, follow, and interact with a diver for collaborative task execution. We design and develop two autonomous diver-following algorithms, the first of which utilizes both spatial- and frequency-domain features pertaining to human swimming patterns in order to visually track a diver. The second algorithm uses a convolutional neural network-based model for robust tracking-by-detection. In addition, we propose a hand gesture-based human-robot communication framework that is syntactically simpler and computationally more efficient than the existing grammar-based frameworks. In the proposed interaction framework, deep visual detectors are used to provide accurate hand gesture recognition; subsequently, a finite-state machine performs robust and efficient gesture-to-instruction mapping. The distinguishing feature of this framework is that it can be easily adopted by divers for communicating with underwater robots without using artificial markers or requiring memorization of complex language rules. Furthermore, we validate the performance and effectiveness of the proposed methodologies through extensive field experiments in closed- and open-water environments. Finally, we perform a user interaction study to demonstrate the usability benefits of our proposed interaction framework compared to existing methods.
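The gesture-to-instruction mapping stage can be illustrated with a small finite-state machine; the gesture tokens, states and instruction names below are invented for illustration and are not the vocabulary used in the paper.

```python
# Toy FSM mapping recognized hand-gesture tokens to robot instructions.
TRANSITIONS = {
    ("idle", "left_palm"): "await_command",
    ("await_command", "two_fingers"): "emit_follow",
    ("await_command", "fist"): "emit_stop",
}
INSTRUCTIONS = {"emit_follow": "FOLLOW_DIVER", "emit_stop": "STOP"}

def run_fsm(gesture_sequence):
    state, issued = "idle", []
    for g in gesture_sequence:
        state = TRANSITIONS.get((state, g), "idle")   # unknown pair -> reset
        if state in INSTRUCTIONS:
            issued.append(INSTRUCTIONS[state])
            state = "idle"
    return issued

print(run_fsm(["left_palm", "two_fingers", "left_palm", "fist"]))
# ['FOLLOW_DIVER', 'STOP']
```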
Strategy, advanced manufacturing technology and performance: empirical evidence from U.S. manufacturing firms
This study investigates the complex relationships among strategy, advanced manufacturing technology (AMT) and performance using survey responses from 160 U.S. manufacturing firms. In contrast to previous studies that emphasize only the flexibility dimension of AMT, this study adopts a multidimensional view of AMT by stressing the information processing capability inherent in AMTs. The study found support for four dimensions of AMT: information exchange and planning technology (IEPT), product design technology (PDT), low-volume flexible automation technology (LVFAT), and high-volume automation technology (HVAT). The results also indicate empirical support for the study's major premise that a fit between certain strategy–AMT dimensions will be associated with superior performance. Using these findings, the study discusses their implications and suggests several avenues for future research.
Planetary gear set and automatic transmission simulation for machine design courses
Due to their unique ability to provide a variety of gear ratios in a very compact space, planetary gear systems are seen in many applications from small powered screwdrivers to automobile automatic transmissions. The versatile planetary gear device is often studied as part of an undergraduate mechanical engineering program. Textbook presentations typically illustrate how the different planetary gear components are connected. Understanding of the operation of the planetary gear set can be enhanced using actual hardware or simulations that show how the components move relative to each other. The Department of Engineering Mechanics at the United States Air Force Academy has developed a computer simulation of the planetary gear set and the Chrysler 42LE automatic transmission. Called "PG-Sim," the dynamic simulations complement a static textbook presentation. PG-Sim is used in several of our courses and assessment data clearly indicates students' appreciation of its visual and interactive features. In this paper, we present an overview of PG-Sim and then describe how the simulation courseware facilitates understanding of the planetary gear system.
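For a simple sun-planet-ring set, the kinematics a simulation like PG-Sim animates reduce to the standard Willis relation; a small helper and example follow, with illustrative tooth counts that are not taken from the 42LE transmission.

```python
# Willis relation for a simple planetary set:
#   w_sun*Z_sun + w_ring*Z_ring = (Z_sun + Z_ring) * w_carrier
def carrier_speed(w_sun, w_ring, z_sun, z_ring):
    """Angular speeds in any consistent unit (e.g. rpm)."""
    return (w_sun * z_sun + w_ring * z_ring) / (z_sun + z_ring)

# Ring gear held fixed, sun driven at 1000 rpm, 30/90 teeth:
print(carrier_speed(w_sun=1000.0, w_ring=0.0, z_sun=30, z_ring=90))  # 250 rpm (4:1 reduction)
```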
Neuroprotective Effects of PEP-1-Cu,Zn-SOD against Ischemic Neuronal Damage in the Rabbit Spinal Cord
A rabbit model of spinal cord ischemia has been introduced as a good model to investigate the pathophysiology of ischemia–reperfusion (I–R)-induced paraplegia. In the present study, we observed the effects of Cu,Zn-superoxide dismutase (SOD1) against ischemic damage in the ventral horn of L5–6 levels in the rabbit spinal cord. For this study, the expression vector PEP-1 was constructed, and this vector was fused with SOD1 to create a PEP-1-SOD1 fusion protein that easily penetrated the blood–brain barrier. Spinal cord ischemia was induced by transient occlusion of the abdominal aorta for 15 min. PEP-1-SOD1 (0.5 mg/kg) was intraperitoneally administered to rabbits 30 min before ischemic surgery. The administration of PEP-1-SOD1 significantly improved neurological scores compared to those in the PEP-1 (vehicle)-treated ischemia group. Also, in this group, the number of cresyl violet-positive cells at 72 h after I–R was much higher than that in the vehicle-treated ischemia group. Malondialdehyde levels were significantly decreased in the ischemic spinal cord of the PEP-1-SOD1-treated ischemia group compared to those in the vehicle-treated ischemia group. In contrast, the administration of PEP-1-SOD1 significantly ameliorated the ischemia–induced reduction of SOD and catalase levels in the ischemic spinal cord. These results suggest that PEP-1-SOD1 protects neurons from spinal ischemic damage by decreasing lipid peroxidation and maintaining SOD and catalase levels in the ischemic rabbit spinal cord.
Methodological challenges in cross-language qualitative research: a research review.
OBJECTIVES Cross-language qualitative research occurs when a language barrier is present between researchers and participants. The language barrier is frequently mediated through the use of a translator or interpreter. The purpose of this analysis of cross-language qualitative research was threefold: (1) review the methods literature addressing cross-language research; (2) synthesize the methodological recommendations from the literature into a list of criteria that could evaluate how researchers methodologically managed translators and interpreters in their qualitative studies; (3) test these criteria on published cross-language qualitative studies. DATA SOURCES A group of 40 purposively selected cross-language qualitative studies found in nursing and health sciences journals. REVIEW METHODS The synthesis of the cross-language methods literature produced 14 criteria to evaluate how qualitative researchers managed the language barrier between themselves and their study participants. To test the criteria, the researcher conducted a summative content analysis framed by discourse analysis techniques of the 40 cross-language studies. RESULTS The evaluation showed that only 6 out of 40 studies met all the criteria recommended by the cross-language methods literature for the production of trustworthy results in cross-language qualitative studies. Multiple inconsistencies, reflecting disadvantageous methodological choices by cross-language researchers, appeared in the remaining 33 studies. To name a few, these included rendering the translator or interpreter as an invisible part of the research process, failure to pilot test interview questions in the participant's language, no description of translator or interpreter credentials, failure to acknowledge translation as a limitation of the study, and inappropriate methodological frameworks for cross-language research. CONCLUSIONS The finding about researchers making the role of the translator or interpreter invisible during the research process supports studies completed by other authors examining this issue. The analysis demonstrated that the criteria produced by this study may provide useful guidelines for evaluating cross-language research and for novice cross-language researchers designing their first studies. Finally, the study also indicates that researchers attempting cross-language studies need to address the methodological issues surrounding language barriers between researchers and participants more systematically.
Underwater Wireless Optical Communication Channel Modeling and Performance Evaluation using Vector Radiative Transfer Theory
This paper presents the modeling of an underwater wireless optical communication channel using vector radiative transfer theory. The vector radiative transfer equation captures the multiple-scattering nature of natural water and also includes the polarization behavior of light. Light propagating in an underwater environment undergoes scattering, which creates dispersion and introduces inter-symbol interference into the data communication; attenuation further reduces the signal-to-noise ratio. Both scattering and absorption therefore have adverse effects on underwater data communication. Using a channel model based on radiative transfer theory, we quantify the scattering effect as a function of distance and bit rate by numerical Monte Carlo simulations. We also investigate the polarization behavior of light in the underwater environment, showing the significance of the cross-polarization component when the light undergoes more scattering.
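To make the Monte Carlo channel-modeling idea concrete, here is a deliberately simplified sketch: scalar intensity only (no polarization, unlike the vector treatment in the paper), isotropic scattering instead of an oceanic phase function, and made-up water parameters. It is intended only to show how received power can be estimated photon by photon.

```python
import math, random

def received_fraction(n_photons, link_range, a, b, receiver_radius, max_events=1000):
    """Crude scalar Monte Carlo for an underwater optical link.
    a: absorption coefficient [1/m]; b: scattering coefficient [1/m].
    Returns the weighted fraction of launched photons that cross the receiver
    plane at z = link_range inside the aperture radius."""
    c = a + b                                        # beam attenuation coefficient
    collected = 0.0
    for _ in range(n_photons):
        x = y = z = 0.0
        ux, uy, uz = 0.0, 0.0, 1.0                   # launched along +z toward the receiver
        weight = 1.0
        for _ in range(max_events):
            step = -math.log(1.0 - random.random()) / c     # exponential free path
            if uz > 0.0 and z + uz * step >= link_range:    # reaches receiver plane first
                t = (link_range - z) / uz
                if math.hypot(x + ux * t, y + uy * t) <= receiver_radius:
                    collected += weight
                break
            x, y, z = x + ux * step, y + uy * step, z + uz * step
            weight *= b / c                          # implicit capture: survive with albedo
            if weight < 1e-4:
                break
            uz = 2.0 * random.random() - 1.0         # isotropic scattering direction
            phi = 2.0 * math.pi * random.random()
            s = math.sqrt(1.0 - uz * uz)
            ux, uy = s * math.cos(phi), s * math.sin(phi)
    return collected / n_photons

# Illustrative coastal-water coefficients and a 10 m link; all values are assumptions.
print(received_fraction(20000, link_range=10.0, a=0.18, b=0.22, receiver_radius=0.25))
```

Binning arrival times of the collected photons instead of just counting them would give the impulse response from which inter-symbol interference at a given bit rate can be estimated.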
Once for All: A Two-Flow Convolutional Neural Network for Visual Tracking
The main challenges of visual object tracking arise from the arbitrary appearance of the objects that need to be tracked. Most existing algorithms try to solve this problem by training a new model to regenerate or classify each tracked object, so the model needs to be initialized and retrained for each new object. In this paper, we propose to track different objects in an object-independent manner with a novel two-flow convolutional neural network (YCNN). The YCNN takes two inputs (an object image patch and a larger search image patch) and outputs a response map that predicts how likely, and where, the object will appear in the search patch. Unlike object-specific approaches, the YCNN is trained to measure the similarity between the two image patches, so the model is not limited to any specific object. Furthermore, the network is trained end-to-end to extract both shallow and deep convolutional features dedicated to visual tracking. Once properly trained, the YCNN can track all kinds of objects without further training or updating, allowing our algorithm to run at a very high speed of 45 frames per second. The effectiveness of the proposed algorithm is also demonstrated by experiments on two popular data sets: OTB-100 and VOT-2014.
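The two-input, response-map formulation is close in spirit to Siamese matching networks. The PyTorch-style sketch below is not the authors' YCNN architecture: the layer and patch sizes are invented, and it only illustrates how a shared embedding followed by cross-correlation turns an object patch and a larger search patch into a response map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoFlowMatcher(nn.Module):
    """Toy two-flow matcher: embed the object patch and the (larger) search
    patch with shared convolutions, then cross-correlate the object embedding
    over the search embedding to obtain a response map."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, object_patch, search_patch):
        z = self.embed(object_patch)        # (B, 64, hz, wz)
        x = self.embed(search_patch)        # (B, 64, hx, wx), with hx > hz
        b, c, hz, wz = z.shape
        # depthwise cross-correlation, one group per (sample, channel) pair
        x = x.reshape(1, b * c, *x.shape[2:])
        z = z.reshape(b * c, 1, hz, wz)
        response = F.conv2d(x, z, groups=b * c)
        response = response.reshape(b, c, *response.shape[2:]).sum(dim=1, keepdim=True)
        return response                     # peak location ~ predicted object position

# Example: a 64x64 object patch matched inside a 128x128 search patch.
net = TwoFlowMatcher()
r = net(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 128, 128))
print(r.shape)   # torch.Size([2, 1, 17, 17])
```

Because the network only scores similarity between two patches, nothing in it is tied to a particular object class, which is what allows a single trained model to be reused across objects.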
Kultukskii Ostrog: Facts and Hypotheses
This article is devoted to the study of the origin of Kultukskii Ostrog (a wooden fortress) during the expedition of a Russian Cossack troop led by Ivan Pokhabov to Transbaikalia. Based on several pieces of indirect evidence, the author proposes a hypothesis about the fate of one of the earliest Russian settlements on Lake Baikal.
Never-Ending Learning for Open-Domain Question Answering over Knowledge Bases
ABSTRACT Translating natural language questions to semantic representations such as SPARQL is a core challenge in open-domain question answering over knowledge bases (KB-QA). Existing methods rely on a clear separation between an offline training phase, where a model is learned, and an online phase where this model is deployed. Two major shortcomings of such methods are that (i) they require access to a large annotated training set that is not always readily available and (ii) they fail on questions from previously unseen domains. To overcome these limitations, this paper presents NEQA, a continuous learning paradigm for KB-QA. Offline, NEQA automatically learns templates mapping syntactic structures to semantic ones from a small number of training question-answer pairs. Once deployed, continuous learning is triggered on cases where templates are insufficient. Using a semantic similarity function between questions and by judicious invocation of non-expert user feedback, NEQA learns new templates that capture previously unseen syntactic structures. This way, NEQA gradually extends its template repository. NEQA periodically re-trains its underlying models, allowing it to adapt to the language used after deployment. Our experiments demonstrate NEQA's viability, with steady improvement in answering quality over time, and the ability to answer questions from new domains.
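The answer-or-ask-and-grow control flow can be sketched in a few lines. The toy class below is not NEQA: it uses a string-overlap stand-in for the paper's semantic similarity, stores whole questions instead of syntactic templates, and treats user feedback as an arbitrary callback with an assumed threshold.

```python
from difflib import SequenceMatcher

def similarity(q1, q2):
    """Cheap stand-in for a learned semantic similarity between questions."""
    return SequenceMatcher(None, q1.lower(), q2.lower()).ratio()

class ContinuousTemplateQA:
    """Toy continuous learner: reuse the template of the most similar known
    question when similarity clears a threshold; otherwise obtain a new
    template via (non-expert) feedback and add it to the repository."""
    def __init__(self, seed_templates, threshold=0.6):
        self.templates = dict(seed_templates)       # question -> query template
        self.threshold = threshold

    def answer(self, question, feedback_fn):
        best = max(self.templates, key=lambda q: similarity(question, q), default=None)
        if best is not None and similarity(question, best) >= self.threshold:
            return self.templates[best]
        new_template = feedback_fn(question)        # e.g. a crowd worker supplies a query
        self.templates[question] = new_template     # the repository grows over time
        return new_template

qa = ContinuousTemplateQA({"who wrote Hamlet": "SELECT ?a WHERE { :Hamlet :author ?a }"})
# A structurally different question falls below the threshold, so the callback
# (standing in for non-expert feedback) supplies a new template that is stored.
print(qa.answer("capital of France", feedback_fn=lambda q: "SELECT ?c WHERE { :France :capital ?c }"))
```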
Acetabular component position of the noncemented total hip endoprosthesis after previous Chiari pelvic osteotomy.
INTRODUCTION The aim of the study was to determine the validity of acetabular component position of the noncemented total hip endoprosthesis after Chiari pelvic osteotomy. MATERIAL AND METHODS The study involved 75 patients operated on at the Institute of Orthopedic Surgery "Banjica" in the period from 1990-2009. The first group consisted of 39 patients (46 hips) who underwent Chiari pelvic osteotomy and also later the implantation of a noncemented total hip endoprosthesis. A control group consisted of 36 patients (47 hips) who underwent total hip arthroplasty due to degenerative hip dysplasia. RESULTS In the previously operated patients the centre of rotation of the hip was on the average placed more proximally, while in the control group of patients the position of the acetabular component was closer to the anatomical one. In the group of patients after Chiari osteotomy the mean acetabular cup abduction angle rated 41.8°±9.8°, while in the control group this value was on the average higher (45.4°±8.6°). DISCUSSION There was a significant difference between the studied groups in relation to the distance between the acetabular component of endoprosthesis and the acetabular teardrop (t=-2.763; p=0.007). No statistically significant difference was determined in the mean value of the angle of acetabular abduction component of endoprosthesis between the studied groups of patients (t=1.878; p=0.064). CONCLUSIONS Acetabular component position of the total hip endoprosthesis was not compromised by anatomic changes of the acetabulum caused by Chiari pelvic osteotomy.
TV Ads and Search Spikes: Toward a Deeper Understanding
Television advertisements lead some multitasking viewers to take immediate, measurable actions online. We analyze minute-by-minute TV ad insertion and online search data for daily fantasy sports and pick-up truck brands. TV ads with small audiences can produce detectable search spikes for the advertised brand, with 75% of incremental searches occurring within two minutes. TV ads also generate post-ad searches for competitor brands. Search spikes vary with ad content: they are larger after brand-focused ads than after price-focused ads, and after less-informative ads than after more-informative ads. Search spikes also vary with contextual media factors (e.g., TV network, daypart, program genre), but their effects differ across brands. Taken as a whole, our findings suggest that marketers should consider post-ad search spikes in conjunction with other metrics when evaluating and purchasing TV ads.
COMAN: a web server for comprehensive metatranscriptomics analysis
Microbiota-oriented studies based on metagenomic or metatranscriptomic sequencing have revolutionised our understanding of microbial ecology and the roles of both clinical and environmental microbes. The analysis of massive metatranscriptomic data requires extensive computational resources, a collection of bioinformatics tools, and expertise in programming. We developed COMAN (Comprehensive Metatranscriptomics Analysis), a web-based tool dedicated to automatically and comprehensively analysing metatranscriptomic data. The COMAN pipeline includes quality control of raw reads and removal of reads derived from non-coding RNA, followed by functional annotation, comparative statistical analysis, pathway enrichment analysis, co-expression network analysis and high-quality visualisation. The essential data generated by COMAN are also provided in tabular format for additional analysis and integration with other software. The web server has an easy-to-use interface and detailed instructions, and is freely available at http://sbb.hku.hk/COMAN/. COMAN is an integrated web server dedicated to comprehensive functional analysis of metatranscriptomic data, translating massive amounts of reads into data tables and high-standard figures. It is expected to help researchers with limited bioinformatics expertise answer microbiota-related biological questions and to increase the accessibility and interpretation of microbiota RNA-Seq data.
The role of melatonin and circadian phase in age-related sleep-maintenance insomnia: assessment in a clinical trial of melatonin replacement.
The present investigation used a placebo-controlled, double-blind, crossover design to assess the sleep-promoting effect of three melatonin replacement delivery strategies in a group of patients with age-related sleep-maintenance insomnia. Subjects alternated between treatment and "washout" conditions in 2-week trials. The specific treatment strategies for a high physiological dose (0.5 mg) of melatonin were: (1) EARLY: An immediate-release dose taken 30 minutes before bedtime; (2) CONTINUOUS: A controlled-release dose taken 30 minutes before bedtime; (3) LATE: An immediate-release dose taken 4 hours after bedtime. The EARLY and LATE treatments yielded significant and unambiguous reductions in core body temperature. All three melatonin treatments shortened latencies to persistent sleep, demonstrating that high physiological doses of melatonin can promote sleep in this population. Despite this effect on sleep latency, however, melatonin was not effective in sustaining sleep. No treatment improved total sleep time, sleep efficiency, or wake after sleep onset. Likewise, melatonin did not improve subjective self-reports of nighttime sleep and daytime alertness. Correlational analyses comparing sleep in the placebo condition with melatonin production revealed that melatonin levels were not correlated with sleep. Furthermore, low melatonin producers were not preferentially responsive to melatonin replacement. Total sleep time and sleep efficiency were correlated with the timing of the endogenous melatonin rhythm, and particularly with the phase-relationship between habitual bedtime and the phase of the circadian timing system.
Inf2vec: Latent Representation Model for Social Influence Embedding
As a fundamental problem in social influence propagation analysis, learning influence parameters has been extensively investigated. Most of the existing methods are proposed to estimate the propagation probability for each edge in social networks. However, they cannot effectively learn propagation parameters of all edges due to data sparsity, especially for the edges without sufficient observed propagation. Different from the conventional methods, we introduce a novel social influence embedding problem, which is to learn parameters for nodes rather than edges. Nodes are represented as vectors in a low-dimensional space, and thus social influence information can be reflected by these vectors. We develop a new model Inf2vec, which combines both the local influence neighborhood and global user similarity to learn the representations. We conduct extensive experiments on two real-world datasets, and the results indicate that Inf2vec significantly outperforms state-of-the-art baseline algorithms.
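As a rough illustration of learning node-level influence vectors rather than per-edge probabilities, here is a skip-gram-with-negative-sampling sketch over observed (influencer, influenced) pairs. It is not the Inf2vec objective, which also combines a local influence neighborhood with global user similarity; every hyperparameter below is an assumption for illustration.

```python
import math, random

def train_influence_embeddings(cascade_pairs, num_nodes, dim=16,
                               lr=0.05, negatives=5, epochs=5, seed=0):
    """Skip-gram-with-negative-sampling sketch: each observed pair (u, v),
    meaning u appears to have influenced v in a cascade, pulls the source
    vector of u toward the target vector of v and pushes it away from
    randomly sampled nodes that were not influenced."""
    rng = random.Random(seed)
    src = [[rng.uniform(-0.5, 0.5) / dim for _ in range(dim)] for _ in range(num_nodes)]
    tgt = [[0.0] * dim for _ in range(num_nodes)]

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-max(min(x, 30.0), -30.0)))

    for _ in range(epochs):
        for u, v in cascade_pairs:
            samples = [(v, 1.0)] + [(rng.randrange(num_nodes), 0.0) for _ in range(negatives)]
            grad_u = [0.0] * dim
            for node, label in samples:
                score = sum(a * b for a, b in zip(src[u], tgt[node]))
                g = lr * (label - sigmoid(score))
                for i in range(dim):
                    grad_u[i] += g * tgt[node][i]
                    tgt[node][i] += g * src[u][i]
            for i in range(dim):
                src[u][i] += grad_u[i]
    return src, tgt

# Toy cascades: node 0 repeatedly precedes nodes 1 and 2; node 3 precedes node 4.
pairs = [(0, 1), (0, 2), (3, 4)] * 50
src_vecs, tgt_vecs = train_influence_embeddings(pairs, num_nodes=5)
```

Because parameters live on nodes rather than edges, pairs that were never directly observed still receive a propagation score from the dot product of their vectors, which is how this family of models copes with data sparsity.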
Early versus late fusion in semantic video analysis
Semantic analysis of multimodal video aims to index segments of interest at a conceptual level. In reaching this goal, it requires an analysis of several information streams. At some point in the analysis these streams need to be fused. In this paper, we consider two classes of fusion schemes, namely early fusion and late fusion. The former fuses modalities in feature space, the latter fuses modalities in semantic space. We show by experiment on 184 hours of broadcast video data and for 20 semantic concepts, that late fusion tends to give slightly better performance for most concepts. However, for those concepts where early fusion performs better the difference is more significant.
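The two fusion schemes are easy to contrast in code. The scikit-learn sketch below uses random stand-ins for the visual and textual features and logistic regression rather than the classifiers used in the paper; it is only meant to show where the fusion happens (feature space versus semantic/score space).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
visual = rng.normal(size=(n, 32))     # stand-in visual features per video segment
textual = rng.normal(size=(n, 16))    # stand-in ASR/text features per video segment
y = rng.integers(0, 2, size=n)        # concept present / absent (toy labels)

# Early fusion: concatenate modalities in feature space, then learn one classifier.
early_clf = LogisticRegression(max_iter=1000).fit(np.hstack([visual, textual]), y)
early_scores = early_clf.predict_proba(np.hstack([visual, textual]))[:, 1]

# Late fusion: one classifier per modality, then combine their scores in semantic space
# (a simple average here; a learned combiner is another common choice).
clf_v = LogisticRegression(max_iter=1000).fit(visual, y)
clf_t = LogisticRegression(max_iter=1000).fit(textual, y)
late_scores = 0.5 * clf_v.predict_proba(visual)[:, 1] + 0.5 * clf_t.predict_proba(textual)[:, 1]
```

The practical trade-off mirrors the paper's finding: early fusion can exploit cross-modal feature correlations, while late fusion keeps per-modality models simple and often generalizes slightly better per concept.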
Fast scan algorithms on graphics processors
Scan and segmented scan are important data-parallel primitives for a wide range of applications. We present fast, work-efficient algorithms for these primitives on graphics processing units (GPUs). We use novel data representations that map well to the GPU architecture. Our algorithms exploit shared memory to improve memory performance. We further improve the performance of our algorithms by eliminating shared-memory bank conflicts and reducing the overheads in prior shared-memory GPU algorithms. Furthermore, our algorithms are designed to work well on general data sets, including segmented arrays with arbitrary segment lengths. We also present optimizations to improve the performance of segmented scans based on the segment lengths. We implemented our algorithms on a PC with an NVIDIA GeForce 8800 GPU and compared our results with prior GPU-based algorithms. Our results indicate up to 10x higher performance over prior algorithms on input sequences with millions of elements.
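For readers new to the primitive, the work-efficient (Blelloch-style) exclusive scan that GPU implementations parallelize can be written sequentially in a few lines. The sketch below is not the authors' GPU code; it simply assumes a power-of-two input length, and on a GPU each level of the tree would be executed in parallel across threads.

```python
def blelloch_exclusive_scan(values):
    """Work-efficient exclusive prefix sum: an up-sweep (reduce) phase followed
    by a down-sweep phase. The loops here are sequential for clarity; a GPU
    implementation runs each stride level in parallel and uses shared memory."""
    a = list(values)
    n = len(a)
    assert n and (n & (n - 1)) == 0, "power-of-two length assumed for brevity"

    # Up-sweep: build partial sums at increasing strides.
    d = 1
    while d < n:
        for i in range(2 * d - 1, n, 2 * d):
            a[i] += a[i - d]
        d *= 2

    # Down-sweep: clear the root, then push partial sums back down the tree.
    a[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(2 * d - 1, n, 2 * d):
            a[i - d], a[i] = a[i], a[i] + a[i - d]
        d //= 2
    return a

print(blelloch_exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))
# -> [0, 3, 4, 11, 11, 15, 16, 22]
```

A segmented scan follows the same pattern but carries a flag array so that sums never propagate across segment boundaries, which is where the paper's segment-length optimizations come in.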
A Flexible Fitness Function for Community Detection in Complex Networks
Most community detection algorithms from the literature work as optimization tools that minimize a given fitness function, while assuming that each node belongs to a single community. Since there is no hard concept of what a community is, most proposed fitness functions focus on a particular definition. As such, these functions do not always lead to partitions that correspond to those observed in practice. This paper proposes a new flexible fitness function that allows the identification of communities with distinct characteristics. Such flexibility was evaluated through the adoption of an immune-inspired optimization algorithm, named cob-aiNet[C], to identify both disjoint and overlapping communities in a set of benchmark networks. The results have shown that the obtained partitions are much closer to the ground truth than those obtained by the optimization of the modularity function.
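As a concrete example of one widely used fitness function, here is the standard Newman–Girvan modularity for a disjoint partition of an undirected, unweighted graph; it is the baseline the abstract compares against, not the proposed flexible fitness function or the cob-aiNet[C] algorithm.

```python
def modularity(adjacency, communities):
    """Newman-Girvan modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
    for an undirected, unweighted graph given as an adjacency dict {node: set(neighbours)}."""
    m2 = sum(len(nbrs) for nbrs in adjacency.values())            # 2m (each edge counted twice)
    degree = {u: len(nbrs) for u, nbrs in adjacency.items()}
    label = {u: cid for cid, comm in enumerate(communities) for u in comm}
    q = 0.0
    for u in adjacency:
        for v in adjacency:
            if label[u] != label[v]:
                continue
            a_uv = 1.0 if v in adjacency[u] else 0.0
            q += a_uv - degree[u] * degree[v] / m2
    return q / m2

# Two triangles joined by a single edge; the natural partition scores highly.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(modularity(adj, [{0, 1, 2}, {3, 4, 5}]))   # 5/14 ~ 0.357
```

Because a hard assignment delta(c_i, c_j) appears in the sum, plain modularity cannot reward overlapping memberships, which is one motivation for the more flexible fitness functions the paper studies.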
Cognitive outcomes following contemporary treatment without cranial irradiation for childhood acute lymphoblastic leukemia.
BACKGROUND Treatment of acute lymphoblastic leukemia (ALL) has included the use of prophylactic cranial irradiation in up to 20% of children with high-risk disease despite known cognitive risks of this treatment modality. METHODS Patients enrolled on the St Jude ALL Total Therapy Study XV, which omitted prophylactic cranial irradiation in all patients, were assessed 120 weeks after completion of consolidation therapy (n = 243) using a comprehensive cognitive battery. χ² analysis was used to compare the percentage of below-average performers among the entire ALL patient group to the expected rate based on the normative sample. Univariate logistic regression was used to estimate the effect of intensity of chemotherapy (treatment arm), age at diagnosis, and sex on the probability of below-average performance. All statistical tests were two-sided. RESULTS Overall, the ALL group had a statistically significantly higher risk for below-average performance on a measure of sustained attention (67.31% more than 1 SD below the normative mean for omission errors, P < .001) but not on measures of intellectual functioning, academic skills, or memory. Patients given higher intensity chemotherapy were at greater risk for below-average performance compared with those given lower intensity therapy on measures of processing speed (27.14% vs 6.25%, P = .009) and academic abilities (Math Reasoning: 18.60% vs 3.90%, P = .008; Word Reading: 20.00% vs 2.60%, P = .007; Spelling: 27.91% vs 3.90%, P = .001) and had higher parent-reported hyperactivity (23.00% vs 9.84%, P = .018) and learning problems (35.00% vs 16.39%, P = .005). Neither age at diagnosis nor sex was associated with risk for below-average cognitive performance. CONCLUSIONS Omitting cranial irradiation may help preserve global cognitive abilities, but treatment with chemotherapy alone is not without risks. Caregiver education and development of interventions should address both early attention deficits and cognitive late effects.
Image Quality Assessment for Color Correction Based on Color Contrast Similarity and Color Value Difference
Color correction plays an important role in the image processing field, yet research on the assessment of color correction remains limited. In this paper, we present an image quality assessment metric for color correction. It assesses the color consistency between the reference and target/result images of color correction according to their color contrast similarity and color value difference. Both the average difference and the difference span are considered during the assessment. To compensate for the scene difference between the reference and target/result images of color correction, we propose to use an image registration algorithm to build their matching relationship, upon which a matching image is built. The matching image has the same scene as the target image and the same color feature as the reference image, and thus the matching image is regarded as the real reference image of our color correction assessment. Furthermore, we combine a confidence map of the matching image and a saliency map of the target/result image as a weighting map for assessment, which helps to improve the consistency between the objective and subjective assessment results. The experimental results show that our color correction assessment metric has better correlation, accuracy, and monotonicity with users' subjective scores than 19 state-of-the-art metrics.
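The two ingredients named in the metric can be approximated with very simple stand-ins: an SSIM-style contrast-comparison term per color channel and a mean per-channel value difference. The sketch below is not the proposed metric (it ignores the registration-based matching image, the confidence map, and the saliency weighting), and the stabilizing constant is arbitrary.

```python
import numpy as np

def color_contrast_similarity(ref, tgt, c=1e-3):
    """SSIM-style contrast comparison per color channel:
    (2*sigma_ref*sigma_tgt + c) / (sigma_ref^2 + sigma_tgt^2 + c), averaged over channels."""
    sims = []
    for ch in range(ref.shape[-1]):
        s_r, s_t = ref[..., ch].std(), tgt[..., ch].std()
        sims.append((2 * s_r * s_t + c) / (s_r ** 2 + s_t ** 2 + c))
    return float(np.mean(sims))

def color_value_difference(ref, tgt):
    """Mean absolute per-channel color difference between two aligned images."""
    return float(np.mean(np.abs(ref.astype(float) - tgt.astype(float))))

ref = np.random.default_rng(0).random((64, 64, 3))
tgt = np.clip(ref * 0.8 + 0.1, 0.0, 1.0)     # a toy "corrected" image with reduced contrast
print(color_contrast_similarity(ref, tgt), color_value_difference(ref, tgt))
```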