title | abstract |
---|---|
Which unruptured cerebral aneurysms should be treated? A cost-utility analysis. | OBJECTIVE
To determine which unruptured cerebral aneurysms should be treated considering the risks, benefits, and costs.
BACKGROUND
Asymptomatic unruptured cerebral aneurysms are commonly treated by surgical clipping or endovascular coil embolization to prevent subarachnoid hemorrhage (SAH).
METHODS
We performed a cost-utility analysis comparing surgical clipping and endovascular coil embolization with no treatment for unruptured aneurysms. Eight clinical scenarios were defined based on aneurysm size, symptoms, and history of SAH from a different aneurysm. Health outcomes of a hypothetical cohort of 50-year-old women were modeled over the projected lifetime of the cohort. Costs were assessed from the societal perspective. We compared net quality-adjusted life years (QALYs) and cost per QALY of each therapy to no treatment.
RESULTS
For an asymptomatic unruptured aneurysm less than 10 mm in diameter in patients with no history of SAH from a different aneurysm, both procedures resulted in a net loss in QALYs, and confidence intervals (CI) were not compatible with a benefit from treatment (clipping, loss of 1.6 QALY [95% CI 1.1 to 2.1]; coiling, loss of 0.6 QALY [95% CI 0.2 to 0.8]). For larger aneurysms (≥10 mm), those producing symptoms by compressing neighboring nerves and brain structures, or in patients with a history of SAH from a different aneurysm, treatment was cost-effective. Coiling appeared more effective and cost-effective than clipping, but these differences depended on relatively uncertain model parameters.
CONCLUSIONS
Treatment of small, asymptomatic, unruptured cerebral aneurysms in patients without a history of SAH worsens clinical outcomes, and thus is neither effective nor cost-effective. For aneurysms that are ≥10 mm or symptomatic, or in patients with a history of SAH, treatment appears to be cost-effective. |
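For readers unfamiliar with the cost-per-QALY metric compared above, it is the standard incremental cost-effectiveness ratio of each therapy against no treatment; a generic statement of it (not taken from the paper) is:

```latex
\mathrm{ICER} = \frac{C_{\text{therapy}} - C_{\text{no treatment}}}{\mathrm{QALY}_{\text{therapy}} - \mathrm{QALY}_{\text{no treatment}}}
```

A therapy that loses QALYs relative to no treatment, as reported here for small asymptomatic aneurysms, has no meaningful ICER: it is dominated regardless of cost.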
Asherman syndrome--one century later. | OBJECTIVE
To provide an update on the current knowledge of Asherman syndrome.
DESIGN
Literature review.
SETTING
The worldwide reports of this disease.
PATIENT(S)
Patients with Asherman syndrome who presented with amenorrhea or hypomenorrhea, infertility, or recurrent pregnancy loss.
INTERVENTION(S)
Hysteroscopy and hysteroscopic surgery have been the gold standards of diagnosis and treatment, respectively, for this condition.
MAIN OUTCOME MEASURE(S)
The etiology, pathology, symptomatology, diagnosis, treatment, and reproductive outcomes were analyzed.
RESULT(S)
This syndrome occurs mainly as a result of trauma to the gravid uterine cavity, which leads to the formation of intrauterine and/or intracervical adhesions. Despite the advances in hysteroscopic surgery, the treatment of moderate to severe Asherman syndrome still presents a challenge. Furthermore, pregnancy after treatment remains high risk, with complications including spontaneous abortion, preterm delivery, intrauterine growth restriction, placenta accreta or praevia, or even uterine rupture.
CONCLUSION(S)
The management of moderate to severe disease still poses a challenge, and the prognosis of severe disease remains poor. Close antenatal surveillance and monitoring are necessary for women who conceive after treatment. |
Improvement of solubility of C70 by complexation with cyclomaltononaose (delta-cyclodextrin). | We investigated the solubilizing effects of cyclomaltononaose (delta-CD), a cyclic oligosaccharide composed of nine alpha-1,4-linked D-glucose units, on C70 by using the ball-milling method based on a solid-solid mechanochemical reaction. The complex between C70 and delta-CD was characterized by UV-VIS spectrometry and fast atom bombardment mass spectrometry (FAB-MS). Coloration of the C70/delta-CD system was red-brown in aqueous solution, and the UV-VIS spectrum was in agreement with that of C70 in hexane solution. The FAB-MS spectrum of the C70/delta-CD system showed a negative ion peak corresponding to the molecular weight of a complex between two delta-CD and one C70. These findings suggest that the solubilization of C70 in water was due to complex formation of C70 with delta-CD, and the stoichiometric ratio of this complex was 1: 2 (C70: delta-CD). |
HEDGING YOUR BETS: L2 LEARNERS' ACQUISITION OF PRAGMATIC DEVICES IN ACADEMIC WRITING AND COMPUTER-MEDIATED | This study had two purposes: The first was to investigate the effects of instruction on pragmatic acquisition in writing. In particular, the focus was on the use of hedging devices in the academic writing of learners of English as a second language. The second purpose was to discover whether this training transferred to a less-planned, less-formal, computer-mediated type of writing, namely a Daedalus interaction. Graduate students enrolled in an academic writing class for non-native English speakers received treatment designed to increase their metapragmatic awareness and improve their ability to use hedging devices. Data were compared to a control group that did not receive the treatment. The treatment group showed statistically significant increases in the use of hedging devices in the research papers and in the computer-mediated discussion. |
Predicting consumer intentions to use on-line shopping: the case for an augmented technology acceptance model | Derived from the theory of reasoned action, the technology acceptance model (TAM) focuses on two specific salient beliefs: ease of use and usefulness. It has been applied in the study of user adoption of different technologies, and has emerged as a reliable and robust model. However, this has not discouraged researchers from incorporating additional constructs into the original model in their quest for increased predictive power. Here, an attempt is made in the context of explaining consumer intention to use on-line shopping. Besides ease of use and usefulness, compatibility, privacy, security, normative beliefs, and self-efficacy are included in an augmented TAM. A test of this model, with data collected from 281 consumers, shows support for seven of nine research hypotheses. Specifically, compatibility, usefulness, ease of use, and security were found to be significant predictors of attitude towards on-line shopping, but privacy was not. Further, intention to use on-line shopping was strongly influenced by attitude toward on-line shopping, normative beliefs, and self-efficacy. |
Banana cultivars, cultivation practices, and physicochemical properties. | The physicochemical (pH, texture, Vitamin C, ash, fat, minerals) and sensory properties of banana were correlated with the genotype and growing conditions. Minerals in particular were shown to discriminate banana cultivars of different geographical origin quite accurately. Another issue relates to the beneficial properties of bananas both in terms of the high dietary fiber and antioxidant compounds, the latter being abundant in the peel. Therefore, banana can be further exploited for extracting several important components such as starch, and antioxidant compounds which can find industrial and pharmaceutical applications. Finally, the various storage methodologies were presented with an emphasis on Modified Atmosphere Packaging which appears to be one of the most promising of technologies. |
Blockchain: Distributed Event-based Processing in a Data-Centric World: Extended Abstract | Usage of Blockchain is expanding from the initial focus on crypto-currencies towards applications to support a broad range of collaborative activities amongst businesses, organizations, and individuals. There are two broad levels of Blockchain: the foundation level, which relates to encryption, consensus algorithms, and support for a single (logical) data store that is shared by all participants; and the "smart contract" level, which enables developers and business-level users to specify the data, logic, and behavior that collaborations should follow. The smart contracts are programs that are fundamentally event-driven, data-centric, and support the activity of a distributed set of stakeholders situated across multiple organizations. This raises an array of research challenges in areas including language and solution design, interoperation across smart contracts, and verification. |
Automatic melanoma detection via multi-scale lesion-biased representation and joint reverse classification | Dermoscopy imaging, as a non-invasive diagnosis technique, plays an important role in the early diagnosis of malignant melanoma. Even for experienced dermatologists, however, diagnosis by human vision can be subjective, inaccurate and non-reproducible. This is attributed to challenging image characteristics including varying lesion sizes and shapes, fuzzy lesion boundaries, different skin color types and the presence of hair. To aid in image interpretation, automatic classification of dermoscopy images has been shown to be a valuable aid in clinical decision making. Existing methods, however, have problems in representing and differentiating skin lesions due to the high degree of similarity between melanoma and non-melanoma images and the large variations inherent in skin lesion images. To overcome these limitations, this study proposes a new automatic melanoma detection method for dermoscopy images via multi-scale lesion-biased representation (MLR) and joint reverse classification (JRC). Our proposed MLR representation enables us to represent skin lesions using multiple closely related histograms derived from different rotations and scales, while traditional methods can only represent a skin lesion using a single-scale histogram. The MLR representation was then used with JRC for melanoma detection. The proposed JRC model allows us to use a set of closely related histograms to derive additional information for melanoma detection, whereas existing methods mainly rely on the histogram itself. Our method was evaluated on a public dataset of dermoscopy images, and we demonstrate superior classification performance compared to the current state-of-the-art methods. |
De novo Transcriptome Sequencing to Dissect Candidate Genes Associated with Pearl Millet-Downy Mildew (Sclerospora graminicola Sacc.) Interaction | Understanding plant-pathogen interactions is of utmost importance for designing strategies to minimize the economic losses caused by pathogens in crops. With the aim of identifying genes underlying resistance to downy mildew, a major disease responsible for productivity loss in pearl millet, transcriptome analysis was performed in downy mildew-resistant and -susceptible genotypes under infected and control conditions on the 454 Roche NGS platform. A total of ~685 Mb of data was obtained with 1 575 290 raw reads. The raw reads were pre-processed into high-quality (HQ) reads, amounting to ~82% of the total, with an average length of 427 bases. The assembly was optimized using four assemblers, viz. Newbler, MIRA, CLC and Trinity, out of which MIRA, with a total of 14.10 Mb and 90118 transcripts, proved to be the best for assembling the reads. Differential expression analysis identified 1396 up-regulated and 936 down-regulated transcripts in the resistant inoculated/resistant control comparison, and 1000 up-regulated and 1591 down-regulated transcripts in the susceptible inoculated/susceptible control comparison, with 3644 transcripts in common. Among the secondary metabolism pathways, the phenylpropanoid pathway in particular was up-regulated in the resistant genotype. Transcripts up-regulated as part of the defense response included classes of R genes, PR proteins, HR-induced proteins and plant hormonal signaling transduction proteins. The transcripts for skp1 protein, purothionin, and V-type proton ATPase were found to have the highest expression in the resistant genotype. Ten transcripts, selected on the basis of their involvement in defense mechanisms, were validated with qRT-PCR and showed positive correlation with the transcriptome data. Transcriptome analysis pointed to hypersensitive response and systemic acquired resistance as possible mechanisms operating in pearl millet defense against downy mildew infection. |
A twin study of genetic and dietary influences on nephrolithiasis: a report from the Vietnam Era Twin (VET) Registry. | BACKGROUND
Nephrolithiasis is a complex phenotype that is influenced by both genetic and environmental factors. We conducted a large twin study to examine genetic and nongenetic factors associated with stones.
METHODS
The VET Registry includes approximately 7500 male-male twin pairs born between 1939 and 1955, with both twins having served in the military from 1965 to 1975. In 1990, a mail and telephone health survey was sent to 11,959 VET Registry members; 8870 (74.2%) provided responses. The survey included a question asking whether the individual had ever been told by a physician that he had a kidney stone. Detailed dietary habits were elicited. In a classic twin study analysis, we compared concordance rates in monozygotic (MZ) and dizygotic (DZ) twins. We also conducted a cotwin control study of dietary risk factors in twins discordant for stones.
RESULTS
Among dizygotic twins, there were 17 concordant pairs and 162 discordant pairs for kidney stones. Among monozygotic twins, there were 39 concordant pairs and 163 discordant pairs. The proband concordance rate in MZ twins (32.4%) was significantly greater than the rate in DZ twins (17.3%) (chi(2)= 12.8; P < 0.001), consistent with a genetic influence. The heritability of the risk for stones was 56%. In the multivariate analysis of twin pairs discordant for kidney stones, we found a protective dose-response pattern of coffee drinking (P= 0.03); those who drank 5 or more cups of coffee were half as likely to develop kidney stones as those who did not drink coffee (OR = 0.4, 95% CI 0.2, 0.9). Those who drank at least 1 cup of milk per day were half as likely to report kidney stones (OR = 0.5, 95% CI 0.3, 0.8). There were also marginally significant protective effects of increasing numbers of cups of tea per day and frequent consumption of fruits and vegetables. Other factors such as the use of calcium supplements, alcohol drinking, consumption of solid dairy products, and the amount of animal protein consumed were not significantly related to kidney stones in the multivariate model.
CONCLUSION
These results confirm that nephrolithiasis is at least in part a heritable disease. Coffee, and perhaps tea, fruits, and vegetables were found to be protective for stone disease. This is the first twin study of kidney stones, and represents a new approach to elucidating the relative roles of genetic and environmental factors associated with stone formation. |
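The proband concordance rates quoted above follow directly from the reported pair counts via 2C/(2C + D), where C is the number of concordant and D the number of discordant pairs. A minimal sketch reproducing those figures (the 56% heritability estimate requires a liability-threshold model and is not reproduced here):

```python
def proband_concordance(concordant_pairs: int, discordant_pairs: int) -> float:
    """Proband-wise concordance rate: 2C / (2C + D)."""
    return 2 * concordant_pairs / (2 * concordant_pairs + discordant_pairs)

# pair counts reported in the abstract
mz = proband_concordance(39, 163)  # monozygotic pairs
dz = proband_concordance(17, 162)  # dizygotic pairs
print(f"MZ proband concordance: {mz:.1%}")  # ~32.4%
print(f"DZ proband concordance: {dz:.1%}")  # ~17.3%
```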
Exome sequencing identifies NBEAL2 as the causative gene for Gray Platelet Syndrome | Gray platelet syndrome (GPS) is a predominantly recessive platelet disorder that is characterized by mild thrombocytopenia with large platelets and a paucity of α-granules; these abnormalities cause mostly moderate but in rare cases severe bleeding. We sequenced the exomes of four unrelated individuals and identified NBEAL2 as the causative gene; it has no previously known function but is a member of a gene family that is involved in granule development. Silencing of nbeal2 in zebrafish abrogated thrombocyte formation. |
Many-to-Many Geographically-Embedded Flow Visualisation: An Evaluation | Showing flows of people and resources between multiple geographic locations is a challenging visualisation problem. We conducted two quantitative user studies to evaluate different visual representations for such dense many-to-many flows. In our first study we compared a bundled node-link flow map representation and OD Maps [37] with a new visualisation we call MapTrix. Like OD Maps, MapTrix overcomes the clutter associated with a traditional flow map while providing geographic embedding that is missing in standard OD matrix representations. We found that OD Maps and MapTrix had similar performance while bundled node-link flow map representations did not scale at all well. Our second study compared participant performance with OD Maps and MapTrix on larger data sets. Again performance was remarkably similar. |
Human-Scale Virtual Environment for Product Design: Effect of Sensory Substitution | This paper presents a human-scale virtual environment (VE) with haptic feedback, along with two experiments performed in the context of product design. The user interacts with a virtual mock-up using a large-scale bimanual string-based haptic interface called SPIDAR (Space Interface Device for Artificial Reality). An original self-calibration method is proposed. A vibro-tactile glove was developed and integrated with the SPIDAR to provide tactile cues to the operator. The purpose of the first experiment was: (1) to examine the effect of tactile feedback in a task involving reach-and-touch of different parts of a digital mock-up, and (2) to investigate the use of sensory substitution in such tasks. The second experiment aimed to investigate the effect of visual and auditory feedback in a car-light maintenance task. Results of the first experiment indicate that users could easily and quickly access and finely touch the different parts of the digital mock-up when sensory feedback (either visual, auditory, or tactile) was present. Results of the second experiment show that visual and auditory feedback improve average placement accuracy by about 54% and 60%, respectively, compared to the open-loop case. Index Terms: Virtual reality, virtual environment, haptic interaction, sensory substitution, human performance. |
Modeling Commonsense Reasoning via Analogical Chaining: A Preliminary Report | Understanding the nature of commonsense reasoning is one of the deepest questions of cognitive science. Prior work has proposed analogy as a mechanism for commonsense reasoning, with prior simulations focusing on reasoning about continuous behavior of physical systems. This paper examines how analogy might be used in commonsense more broadly. The two contributions are (1) the idea of common sense units, intermediate-sized collections of facts extracted from experience (including cultural experience) which improves analogical retrieval and simplifies inferencing, and (2) analogical chaining, where multiple rounds of analogical retrieval and mapping are used to rapidly construct explanations and predictions. We illustrate these ideas via an implemented computational model, tested on examples from an independently-developed test of commonsense reasoning. |
Interferon retreatment of nonresponders with HCV-RNA-Positive chronic hepatitis C | Interferon has been shown to be an effective treatment for some patients with chronic hepatitis C. In this study, the value of retreatment of nonresponders to interferon was investigated. Thirty-eight patients with hepatitis C virus (HCV)-RNA-positive chronic hepatitis C who had been treated with beta-interferon but still showed an alanine aminotransferase (ALT) level > 50 KU upon completion of therapy were retreated with alpha-interferon. Eight patients (21.1%) had normalization of ALT levels after interferon retreatment. Of 16 patients with transient HCV-RNA negativity 1 month after the initial interferon therapy, 7 (43.8%) had a complete response, with normalization of ALT levels and undetectable HCV-RNA, more than 6 months after interferon retreatment. On the other hand, of the 22 patients with HCV-RNA positivity 1 month after the initial interferon therapy, only 1 (4.5%) had a complete response. Multivariate analysis, using a multiple logistic model, indicated that a complete response to readministration of interferon was most strongly correlated with transient negative conversion for HCV-RNA after the initial course of treatment. |
Smartphone-based portable ultrasound imaging system: Prototype implementation and evaluation | In this paper, we present a smart US imaging system (SMUS) based on an Android-OS smartphone, which can provide maximally optimized efficacy in terms of weight and size in point-of-care diagnostic applications. The proposed SMUS consists of the smartphone (Galaxy S5 LTE-A, Samsung, Korea) and a 16-channel probe system. The probe system contains analog and digital front-ends, which conduct the beamforming and mid-processing procedures. Meanwhile, the smartphone performs the back-end processing, including envelope detection, log compression, 2D image filtering, digital scan conversion, and image display with a custom-made graphical user interface (GUI). Note that the probe system and smartphone are interconnected via the USB 3.0 protocol. As a result, the developed SMUS can provide real-time B-mode images with a sufficient frame rate (i.e., 58 fps), a battery run-time suitable for point-of-care diagnosis (i.e., 54 min), and a transducer surface temperature of 35.0°C during B-mode imaging, which satisfies the temperature standard for the safety and effectiveness of medical electrical equipment, IEC 60601-1 (i.e., 43°C). |
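The back-end steps named in this abstract (envelope detection, log compression) are standard B-mode processing; a minimal sketch of those two steps on stand-in RF data, using SciPy's Hilbert transform rather than the authors' DSP implementation (the 60 dB dynamic range is an illustrative choice):

```python
import numpy as np
from scipy.signal import hilbert

def bmode_backend(rf, dynamic_range_db=60.0):
    """Envelope detection + log compression for one frame of RF data
    (rows = depth samples, columns = scan lines)."""
    envelope = np.abs(hilbert(rf, axis=0))             # analytic-signal envelope
    envelope /= envelope.max() + 1e-12                 # normalize to [0, 1]
    db = 20.0 * np.log10(envelope + 1e-12)             # log compression
    db = np.clip(db, -dynamic_range_db, 0.0)           # keep the chosen dynamic range
    return (db + dynamic_range_db) / dynamic_range_db  # map to [0, 1] for display

rf_frame = np.random.randn(2048, 128)                  # stand-in RF frame
print(bmode_backend(rf_frame).shape)                   # (2048, 128)
```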
Organic pollutants removal in wastewater by heterogeneous photocatalytic ozonation. | Heterogeneous photocatalysis and ozonation are robust advanced oxidation processes for eliminating organic contaminants in wastewater. The combination of these two methods is carried out in order to enhance the overall mineralization of refractory organics. An apparent synergism between heterogeneous photocatalysis and ozonation has been demonstrated in many studies, giving rise to an improvement in total organic carbon removal. The present overview examines the heterogeneous catalysts and the influences of different operational parameters, followed by a discussion of the kinetics, mechanism, economic feasibility and future trends of this integrated technology. The enhanced oxidation rate mainly results from the large amount of hydroxyl radicals generated by a synergistically induced decomposition of dissolved ozone, in addition to superoxide ion radicals and photo-induced holes. Six reaction pathways possibly exist for the generation of hydroxyl radicals in the reaction mechanism of heterogeneous photocatalytic ozonation. |
Humic substances-part 7: the biogeochemistry of dissolved organic carbon and its interactions with climate change. | BACKGROUND, AIM, AND SCOPE
Dissolved organic matter, measured as dissolved organic carbon (DOC), is an important component of aquatic ecosystems and of the global carbon cycle. It is known that changes in DOC quality and quantity are likely to have ecological repercussions. This review has four goals: (1) to discuss potential mechanisms responsible for recent changes in aquatic DOC concentrations; (2) to provide a comprehensive overview of the interactions between DOC, nutrients, and trace metals in mainly boreal environments; (3) to explore the impact of climate change on DOC and the subsequent effects on nutrients and trace metals; and (4) to explore the potential impact of DOC cycling on climate change.
MAIN FEATURES
We review recent research on the mechanisms responsible for recent changes in aquatic DOC concentrations, DOC interactions with trace metals, N, and P, and on the possible impacts of climate change on DOC in mainly boreal lakes. We then speculate on how climate change may affect DOC export and in-lake processing and how these changes might alter nutrient and metal export and processing. Furthermore, the potential impacts of changing DOC cycling patterns on climate change are examined.
RESULTS
It has been noted that DOC concentrations in lake and stream waters have increased during the last 30 years across much of Europe and North America. The potential reasons for this increase include increasing atmospheric CO(2) concentration, climate warming, continued N deposition, decreased sulfate deposition, and hydrological changes due to increased precipitation, droughts, and land use changes. Any change in DOC concentrations and properties in lakes and streams will also impact the acid-base chemistry of these waters and, presumably, the biological, chemical, and photochemical reactions taking place. For example, the interaction of trace metals with DOC may be significantly altered by climate change as organically complexed metals such as Cu, Fe, and Al are released during photo-oxidation of DOC. The production and loss of DOC as CO(2) from boreal lakes may also be affected by changing climate. Climate change is unlikely to be uniform spatially with some regions becoming wetter while others become drier. As a result, rates of change in DOC export and concentrations will vary regionally and the changes may be non-linear.
DISCUSSION
Climate change models predict that higher temperatures are likely to occur over most of the boreal forests in North America, Europe, and Asia over the next century. Climate change is also expected to affect the severity and frequency of storm and drought events. Two general climate scenarios emerge with which to examine possible DOC trends: warmer and wetter or warmer and drier. Increasing temperature and hydrological changes (specifically, runoff) are likely to lead to changes in the quality and quantity of DOC export from terrestrial sources to rivers and lakes as well as changes in DOC processing rates in lakes. This will alter the quality and concentrations of DOC and its constituents as well as its interactions with trace metals and the availability of nutrients. In addition, export rates of nutrients and metals will also change in response to changing runoff. Processing of DOC within lakes may impact climate depending on the extent to which DOC is mineralized to dissolved inorganic carbon (DIC) and evaded to the atmosphere or settles as particulate organic carbon (POC) to bottom sediments and thereby remaining in the lake. The partitioning of DOC between sediments and the atmosphere is a function of pH. Decreased DOC concentrations may also limit the burial of sulfate, as FeS, in lake sediments, thereby contributing acidity to the water by increasing the formation of H(2)S. Under a warmer and drier scenario, if lake water levels fall, previously stored organic sediments may be exposed to greater aeration which would lead to greater CO(2) evasion to the atmosphere. The interaction of trace metals with DOC may be significantly altered by climate change. Iron enhances the formation of POC during irradiation of lake water with UV light and therefore may be an important pathway for transfer of allochthonous DOC to the sediments. Therefore, changing Fe/DOC ratios could affect POC formation rates. If climate change results in altered DOC chemistry (e.g., fewer and/or weaker binding sites) more trace metals could be present in their toxic and bioavailable forms. The availability of nutrients may be significantly altered by climate change. Decreased DOC concentrations in lakes may result in increased Fe colloid formation and co-incident loss of adsorbable P from the water column.
CONCLUSIONS
Climate change expressed as changes in runoff and temperature will likely result in changes in aquatic DOC quality and concentration with concomitant effects on trace metals and nutrients. Changes in the quality and concentration of DOC have implications for acid-base chemistry and for the speciation and bioavailability of certain trace metals and nutrients. Moreover, changes in DOC, metals, and nutrients are likely to drive changes in rates of C evasion and storage in lake sediments.
RECOMMENDATIONS AND PERSPECTIVES
The key controls on allochthonous DOC quality, quantity, and catchment export in response to climate change are still not fully understood. More detailed knowledge of these processes is required so that changes in DOC and its interactions with nutrients and trace metals can be better predicted based on changes caused by changing climate. More studies are needed concerning the effects of trace metals on DOC, the effects of changing DOC quality and quantity on trace metals and nutrients, and how runoff and temperature-related changes in DOC export affect metal and nutrient export to rivers and lakes. |
Asthma and genes encoding components of the vitamin D pathway | BACKGROUND
Genetic variants at the vitamin D receptor (VDR) locus are associated with asthma and atopy. We hypothesized that polymorphisms in other genes of the vitamin D pathway are associated with asthma or atopy.
METHODS
Eleven candidate genes were chosen for this study, five of which code for proteins in the vitamin D metabolism pathway (CYP27A1, CYP27B1, CYP2R1, CYP24A1, GC) and six that are known to be transcriptionally regulated by vitamin D (IL10, IL1RL1, CD28, CD86, IL8, SKIIP). For each gene, we selected a maximally informative set of common SNPs (tagSNPs) using the European-derived (CEU) HapMap dataset. A total of 87 SNPs were genotyped in a French-Canadian family sample ascertained through asthmatic probands (388 nuclear families, 1064 individuals) and evaluated using the Family Based Association Test (FBAT) program. We then sought to replicate the positive findings in four independent samples: two from Western Canada, one from Australia and one from the USA (CAMP).
RESULTS
A number of SNPs in the IL10, CYP24A1, CYP2R1, IL1RL1 and CD86 genes were modestly associated with asthma and atopy (p < 0.05). Two-gene models testing for both main effects and the interaction were then performed using conditional logistic regression. Two-gene models implicating functional variants in the IL10 and VDR genes as well as in the IL10 and IL1RL1 genes were associated with asthma (p < 0.0002). In the replicate samples, SNPs in the IL10 and CYP24A1 genes were again modestly associated with asthma and atopy (p < 0.05). However, the SNPs or the orientation of the risk alleles were different between populations. A two-gene model involving IL10 and VDR was replicated in CAMP, but not in the other populations.
CONCLUSION
A number of genes involved in the vitamin D pathway demonstrate modest levels of association with asthma and atopy. Multilocus models testing genes in the same pathway are potentially more effective to evaluate the risk of asthma, but the effects are not uniform across populations. |
Grower or shower? Predictors of change in penile length from the flaccid to erect state | In colloquial English, a “grower” is a man whose phallus expands significantly in length from the flaccid to the erect state; a “shower” is a man whose phallus does not demonstrate such expansion. We sought to investigate various factors that might predict a man being either a grower or a shower. A retrospective review of 274 patients who underwent penile duplex Doppler ultrasound (PDDU) for erectile dysfunction between 2011 and 2013 was performed. Penile length was measured, both in the flaccid state prior to intracavernosal injection (ICI) of a vasodilating agent (prostaglandin E1), and at peak erection during PDDU. The collected data included patient demographics and vascular and anatomic parameters. The median change in penile length from the flaccid to erect state was 4.0 cm (1.0–7.0), and was used as the cut-off value defining a grower (≥4.0 cm) or a shower (<4.0 cm). A total of 73 men (26%) fit the definition of a grower (mean change in length of 5.3 cm [SD 0.5]) and 205 (74%) were showers (mean change in length of 3.1 cm [SD 0.9]). There were no differences between the groups with regard to race, smoking history, co-morbidities, erectile function, flaccid penile length, degree of penile rigidity after ICI, or PDDU findings. Growers were significantly younger (mean age 47.5 vs. 55.9 years, p < 0.001), more often single (37% vs. 23%, p = 0.031), received a lower vasodilator dose (10.3 mcg vs. 11.0 mcg, p = 0.038) and had a larger erect phallus (15.5 cm vs. 13.1 cm, p < 0.001). On multivariate analysis, only younger age was significantly predictive of being a grower (p < 0.001). These results suggest that younger age and single status could be predictors of a man being a grower, rather than a shower. Larger, multicultural and multinational studies are needed to confirm these results. |
Graph-based semi-supervised learning for relational networks | We address the problem of semi-supervised learning in relational networks, networks in which nodes are entities and links are the relationships or interactions between them. Typically this problem is confounded with the problem of graph-based semi-supervised learning (GSSL), because both problems represent the data as a graph and predict the missing class labels of nodes. However, not all graphs are created equally. In GSSL a graph is constructed, often from independent data, based on similarity. As such, edges tend to connect instances with the same class label. Relational networks, however, can be more heterogeneous and edges do not always indicate similarity. For instance, instead of links being more likely to connect nodes with the same class label, they may occur more frequently between nodes with different class labels (link-heterogeneity). Or nodes with the same class label do not necessarily have the same type of connectivity across the whole network (class-heterogeneity), e.g. in a network of sexual interactions we may observe links between opposite genders in some parts of the graph and links between the same genders in others. Performing classification in networks with different types of heterogeneity is a hard problem that is made harder still by the fact we do not know a-priori the type or level of heterogeneity. In this work we present two scalable approaches for graph-based semi-supervised learning for the more general case of relational networks. We demonstrate these approaches on synthetic and real-world networks that display different link patterns within and between classes. Compared to state-of-the-art baseline approaches, ours give better classification performance and do so without prior knowledge of how classes interact. In particular, our two-step label propagation algorithm gives consistently good accuracy and precision, while also being highly efficient and can perform classification in networks of over 1.6 million nodes and 30 million edges in around 12 seconds. |
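The abstract does not spell out the two-step label propagation algorithm itself; as a point of reference, here is a minimal sketch of the classical graph label-propagation baseline that such methods build on (dense NumPy, synthetic toy graph, suitable only for small networks):

```python
import numpy as np

def label_propagation(A, Y, labeled_mask, n_iter=100):
    """Propagate class distributions over a graph.
    A: (n, n) adjacency matrix; Y: (n, k) one-hot labels (zero rows for
    unlabeled nodes); labeled_mask: boolean array of length n."""
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)  # row-normalized
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = P @ F                          # average neighbor label distributions
        F[labeled_mask] = Y[labeled_mask]  # clamp the known labels
    return F.argmax(axis=1)

# toy graph: two triangles joined by a single edge, one labeled node in each
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
Y = np.zeros((6, 2)); Y[0, 0] = 1; Y[5, 1] = 1
mask = np.array([True, False, False, False, False, True])
print(label_propagation(A, Y, mask))  # expected: nodes 0-2 -> 0, nodes 3-5 -> 1
```

Note that this baseline assumes links indicate similarity, which is exactly the assumption the paper relaxes for heterogeneous relational networks.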
Are leader stereotypes masculine? A meta-analysis of three research paradigms. | This meta-analysis examined the extent to which stereotypes of leaders are culturally masculine. The primary studies fit into 1 of 3 paradigms: (a) In Schein's (1973) think manager-think male paradigm, 40 studies with 51 effect sizes compared the similarity of male and leader stereotypes and the similarity of female and leader stereotypes; (b) in Powell and Butterfield's (1979) agency-communion paradigm, 22 studies with 47 effect sizes compared stereotypes of leaders' agency and communion; and (c) in Shinar's (1975) masculinity-femininity paradigm, 7 studies with 101 effect sizes represented stereotypes of leadership-related occupations on a single masculinity-femininity dimension. Analyses implemented appropriate random and mixed effects models. All 3 paradigms demonstrated overall masculinity of leader stereotypes: (a) In the think manager-think male paradigm, intraclass correlation = .25 for the women-leaders similarity and intraclass correlation = .62 for the men-leaders similarity; (b) in the agency-communion paradigm, g = 1.55, indicating greater agency than communion; and (c) in the masculinity-femininity paradigm, g = 0.92, indicating greater masculinity than the androgynous scale midpoint. Subgroup and meta-regression analyses indicated that this masculine construal of leadership has decreased over time and was greater for male than female research participants. In addition, stereotypes portrayed leaders as less masculine in educational organizations than in other domains and in moderate- than in high-status leader roles. This article considers the relation of these findings to Eagly and Karau's (2002) role congruity theory, which proposed contextual influences on the incongruity between stereotypes of women and leaders. The implications for prejudice against women leaders are also considered. |
HIV-Specific Probabilistic Models of Protein Evolution | Comparative sequence analyses, including such fundamental bioinformatics techniques as similarity searching, sequence alignment and phylogenetic inference, have become a mainstay for researchers studying type 1 Human Immunodeficiency Virus (HIV-1) genome structure and evolution. Implicit in comparative analyses is an underlying model of evolution, and the chosen model can significantly affect the results. In general, evolutionary models describe the probabilities of replacing one amino acid character with another over a period of time. Most widely used evolutionary models for protein sequences have been derived from curated alignments of hundreds of proteins, usually based on mammalian genomes. It is unclear to what extent these empirical models are generalizable to a very different organism, such as HIV-1, the most extensively sequenced organism in existence. We developed a maximum likelihood model fitting procedure applied to a collection of HIV-1 alignments sampled from different viral genes, and inferred two empirical substitution models, suitable for describing between- and within-host evolution. Our procedure pools the information from multiple sequence alignments, and the provided software implementation can be run efficiently in parallel on a computer cluster. We describe how the inferred substitution models can be used to generate scoring matrices suitable for alignment and similarity searches. Our models had a consistently superior fit relative to the best existing models and to parameter-rich data-driven models when benchmarked on independent HIV-1 alignments, demonstrating evolutionary biases in amino-acid substitution that are unique to HIV and that are not captured by the existing models. The scoring matrices derived from the models showed a marked difference from common amino-acid scoring matrices. The use of an appropriate evolutionary model recovered a known viral transmission history, whereas a poorly chosen model introduced phylogenetic error. We argue that our model derivation procedure is immediately applicable to other organisms with extensive sequence data available, such as Hepatitis C and Influenza A viruses. |
Optimal bounding cones of vectors in three dimensions | The problem of computing the minimum-angle bounding cone of a set of three-dimensional vectors has numerous applications in computer graphics and geometric modeling. One such application is bounding the tangents of space curves or the vectors normal to a surface in the computation of the intersection of two surfaces. No optimal-time exact solution to this problem has yet been given. This paper presents a roadmap for a few strategies that provide optimal or near-optimal (time-wise) solutions to this problem, which are also simple to implement. Specifically, if a worst-case running time is required, we provide an O(n log n)-time Voronoi-diagram-based algorithm, where n is the number of vectors whose optimum bounding cone is sought. Otherwise, if one is willing to accept an, on average, efficient algorithm, we show that the main ingredient of the algorithm of Shirman and Abi-Ezzi [Comput. Graphics Forum 12 (1993) 261–272] can be implemented to run in optimal O(n) expected time. Furthermore, if the vectors (as points on the sphere of directions) are known to occupy no more than a hemisphere, we show how to simplify this ingredient (by reducing the dimension of the problem) without affecting the asymptotic expected running time. Both versions of this algorithm are based on computing the minimum spanning circle (respectively, ball) of a two-dimensional (respectively, three-dimensional) set of points. |
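The expected-linear-time ingredient mentioned above is a minimum enclosing circle/ball computation; assuming the standard randomized approach, a sketch of Welzl's algorithm for the 2-D case follows (not the paper's code; plain recursion, so only suitable for modest n):

```python
import random

def _circle_two(a, b):
    cx, cy = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    return (cx, cy, ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 / 2)

def _circle_three(a, b, c):
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:  # collinear points: the widest pair defines the circle
        return max((_circle_two(p, q) for p, q in ((a, b), (a, c), (b, c))),
                   key=lambda circ: circ[2])
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay) + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx) + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ux, uy, ((ux - ax) ** 2 + (uy - ay) ** 2) ** 0.5)

def _trivial(boundary):
    if not boundary:
        return (0.0, 0.0, 0.0)
    if len(boundary) == 1:
        return (boundary[0][0], boundary[0][1], 0.0)
    if len(boundary) == 2:
        return _circle_two(*boundary)
    return _circle_three(*boundary)

def _inside(circ, p, eps=1e-9):
    return (p[0] - circ[0]) ** 2 + (p[1] - circ[1]) ** 2 <= (circ[2] + eps) ** 2

def welzl(points, boundary=()):
    """Minimum enclosing circle of 2-D points; expected O(n) after shuffling."""
    if not points or len(boundary) == 3:
        return _trivial(boundary)
    first, rest = points[0], points[1:]
    circ = welzl(rest, boundary)
    return circ if _inside(circ, first) else welzl(rest, boundary + (first,))

pts = [(random.random(), random.random()) for _ in range(300)]
random.shuffle(pts)
print(welzl(pts))  # (center_x, center_y, radius)
```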
Very low-voltage fully differential amplifier for switched-capacitor applications | A fully differential opamp suitable for very low-voltage switched-capacitor circuits in standard CMOS technologies is introduced. The proposed two-stage opamp needs a simple low-voltage switched-capacitor CMFB circuit only for the second stage. Due to the reduced supply voltage, the CMFB circuit is implemented using bootstrapped switches. Minor modifications allow the use of chopper stabilization for flicker noise reduction. Two different compensation schemes are discussed and compared using an example of 1 V operation of the amplifier. |
Extracting Parallel Sentences from Comparable Corpora using Document Level Alignment | The quality of a statistical machine translation (SMT) system is heavily dependent upon the amount of parallel sentences used in training. In recent years, there have been several approaches developed for obtaining parallel sentences from non-parallel, or comparable data, such as news articles published within the same time period (Munteanu and Marcu, 2005), or web pages with a similar structure (Resnik and Smith, 2003). One resource not yet thoroughly explored is Wikipedia, an online encyclopedia containing linked articles in many languages. We advance the state of the art in parallel sentence extraction by modeling the document level alignment, motivated by the observation that parallel sentence pairs are often found in close proximity. We also include features which make use of the additional annotation given by Wikipedia, and features using an automatically induced lexicon model. Results for both accuracy in sentence extraction and downstream improvement in an SMT system are presented. |
A Primer on Data-Driven Gamification Design | Gamification is gradually gaining more attention. However, what makes the application of gamification successful is still unclear. There is a lack of insights and theory on the relationships between game design elements, motivation, domain context and user behavior. We want to discover the potential of data-driven optimization of gamification design, e.g. by applying machine learning techniques to user interaction data. Therefore, we propose data-driven gamification design (DDGD) and conducted a questionnaire with 17 gamification experts. Our results show that respondents regard DDGD as a promising method to improve gamification design and lead to a general definition of DDGD. |
Gratitude and subjective well-being in early adolescence: examining gender differences. | Gratitude was examined among 154 students to identify benefits from its experience and expression. Students completed measures of subjective well-being, social support, prosocial behavior, and physical symptoms. Positive associations were found between gratitude and positive affect, global and domain specific life satisfaction, optimism, social support, and prosocial behavior; most relations remained even after controlling for positive affect. Gratitude demonstrated a negative relation with physical symptoms, but not with negative affect. Relational fulfillment mediated the relation between gratitude and physical symptoms. Gratitude demonstrated strong relations with the following positive affects: proud, hopeful, inspired, forgiving, and excited. The relation between gratitude and family support was moderated by gender, indicating that boys, compared with girls, appear to derive more social benefits from gratitude. Strengths, limitations, and implications are discussed. |
Clustering based multi-label classification for image annotation and retrieval | This paper presents a novel multi-label classification framework for domains with large numbers of labels. Automatic image annotation is such a domain, as the available semantic concepts typically number in the hundreds. The proposed framework comprises an initial clustering phase that breaks the original training set into several disjoint clusters of data. It then trains a multi-label classifier from the data of each cluster. Given a new test instance, the framework first finds the nearest cluster and then applies the corresponding model. Empirical results using two clustering algorithms, four multi-label classification algorithms and three image annotation data sets suggest that the proposed approach can improve the performance and reduce the training time of standard multi-label classification algorithms, particularly for large numbers of labels. |
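A compact sketch of the pipeline described above: cluster the training set, fit one multi-label classifier per cluster, and route each test instance to its nearest cluster. It uses scikit-learn on synthetic data, and the clusterer and base classifier here are illustrative choices, not necessarily those evaluated in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import make_multilabel_classification

X, Y = make_multilabel_classification(n_samples=600, n_features=20,
                                      n_classes=15, random_state=0)
X_train, Y_train, X_test, Y_test = X[:500], Y[:500], X[500:], Y[500:]

# 1) break the training set into disjoint clusters
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_train)

# 2) train one multi-label classifier per cluster (k-NN handles indicator labels)
models = {}
for c in range(kmeans.n_clusters):
    idx = kmeans.labels_ == c
    models[c] = KNeighborsClassifier(n_neighbors=5).fit(X_train[idx], Y_train[idx])

# 3) route each test instance to its nearest cluster's model
nearest = kmeans.predict(X_test)
pred = np.vstack([models[c].predict(x.reshape(1, -1)) for x, c in zip(X_test, nearest)])
print("label-wise accuracy:", (pred == Y_test).mean())
```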
Fast Fourier Transforms: for fun and profit | The "Fast Fourier Transform" has now been widely known for about a year. During that time it has had a major effect on several areas of computing, the most striking example being techniques of numerical convolution, which have been completely revolutionized. What exactly is the "Fast Fourier Transform"? |
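To make the idea concrete, here is a minimal radix-2 Cooley–Tukey sketch together with the convolution-theorem trick the article alludes to (power-of-two lengths only; real code would call an FFT library):

```python
import cmath

def fft(x):
    """Recursive radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + t, even[k] - t
    return out

def ifft(X):
    n = len(X)
    return [z.conjugate() / n for z in fft([z.conjugate() for z in X])]

# circular convolution via the convolution theorem: conv(a, b) = ifft(fft(a) * fft(b))
a, b = [1, 2, 3, 4], [4, 3, 2, 1]
spectrum = [x * y for x, y in zip(fft(a), fft(b))]
print([round(z.real) for z in ifft(spectrum)])  # [24, 22, 24, 30]
```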
Graph Convolutional Matrix Completion | We consider matrix completion for recommender systems from the point of view of link prediction on graphs. Interaction data such as movie ratings can be represented by a bipartite user-item graph with labeled edges denoting observed ratings. This representation is especially useful in the setting where additional graph-based side information is present. Building on recent progress in deep learning on graph-structured data, we propose a graph auto-encoder framework based on differentiable message passing on the bipartite interaction graph. In settings where complementary feature information or structured data such as a social network is available, our framework outperforms recent state-of-the-art methods. Furthermore, to validate the proposed message passing scheme, we test our model on standard collaborative filtering tasks and show competitive results. |
The inhibition of malate dehydrogenase by chlorammine-platinum complexes. | Abstract Equilibrium association constants have been calculated for various platinum (II) and platinum (IV) complexes. The association constants were greatest for the dinegatively charged state, regardless of the valence state of the platinum, and the constant decreased considerably as the charge increased. There were no measurable values for positively charged complex states. The conclusion is that the electrostatic charge of the platinum complex is the most important factor causing the inhibition of the enzyme, and the steric differences have only a minor effect, while geometric variation as in the comparison of cis and trans dichlorodiamine-platinum (II) isomers yields no differences in inhibition. |
Peer-to-Peer Lending – A Literature Review | The term online peer-to-peer lending (P2P) describes the loan origination process between private individuals on online platforms where financial institutions operate only as intermediaries required by law. Initialized by groups in online social networks, the first commercial online P2P lending platforms started in 2005. Thus online P2P lending is a relatively young research field. This paper gives a brief overview of the P2P lending market and reviews the research on the determinants of P2P lending. We distinguish between financial and demographic characteristics of the borrower, as well as social characteristics such as friends and group affiliation. The reviewed literature gives insights into how these determinants affect the borrowers’ likelihood of successful funding, the final interest rate that has to be paid, as well as the relationship between the borrowers’ characteristics and lending success. |
Critical factors of WAP services adoption: an empirical study | Mobile commerce is becoming increasingly important in business. This trend is particularly evident in the service industry. To cope with this demand, various platforms have been proposed to provide effective mobile commerce solutions. Among these solutions, the wireless application protocol (WAP) is one of the most widespread technical standards for mobile commerce. Following continuous technical evolution, WAP has come to include various new features. However, WAP services continue to struggle for market share. Hence, understanding WAP service adoption is increasingly important for enterprises interested in developing mobile commerce. This study aims to (1) identify the critical factors of WAP service adoption; (2) explore the relative importance of each factor for users who adopt WAP and those who do not; and (3) examine the causal relationships among variables in WAP service adoption behavior. This study conducts an empirical test of WAP service adoption in Taiwan, based on the theory of planned behavior (TPB) and innovation diffusion theory (IDT). The results help clarify the critical factors influencing WAP service adoption in the Greater China economic region. The Greater China economic region is a rapidly growing market. Many western telecommunication enterprises are strongly interested in providing wireless services in Shanghai, Singapore, Hong Kong and Taipei. Since these cities share a similar culture and the same language, the analytical results and conclusions of this study may be a good reference for global telecommunication enterprises establishing development strategies for their eastern branches. From the analysis conducted in this study, the critical factors influencing WAP service adoption include connection speed, service cost, user satisfaction, personal innovativeness, ease of use, peer influence, and facilitating conditions. Therefore, this study proposes that strategies for marketing WAP services in the Greater China economic region should pay increased attention to these factors. Notably, this study also provides some suggestions for subsequent researchers and practitioners seeking to understand WAP service adoption behavior. |
Potential of crude seed extract of celery, Apium graveolens L., against the mosquito Aedes aegypti (L.) (Diptera: Culicidae). | Crude seed extract of celery, Apium graveolens, was investigated for anti-mosquito potential, including larvicidal, adulticidal, and repellent activities against Aedes aegypti, the vector of dengue haemorrhagic fever. The ethanol-extracted A. graveolens possessed larvicidal activity against fourth instar larvae of Ae. aegypti with LD50 and LD95 values of 81.0 and 176.8 mg/L, respectively. The abnormal movement observed in treated larvae indicated that the toxic effect of A. graveolens extract was probably on the nervous system. In testing for adulticidal activity, this plant extract exhibited a slightly adulticidal potency with LD50 and LD95 values of 6.6 and 66.4 mg/cm2, respectively. It showed repellency against Ae. aegypti adult females with ED50 and ED95 values of 2.03 and 28.12 mg/cm2, respectively. It also provided biting protection time of 3 h when applied at a concentration of 25 g%. Topical application of the ethanol-extracted A. graveolens did not induce dermal irritation. No adverse effects on the skin or other parts of the body of human volunteers were observed during 3 mo of the study period or in the following 3 mo, after which time observations ceased. A. graveolens, therefore, can be considered as a probable source of some biologically active compounds used in the development of mosquito control agents, particularly repellent products. |
On the Validity of a New SMS Spam Collection | Mobile phones are becoming the latest target of electronic junk mail. Recent reports clearly indicate that the volume of SMS spam messages is dramatically increasing year by year. Probably, one of the major concerns in academic settings has been the scarcity of public SMS spam datasets, which are sorely needed for validation and comparison of different classifiers. To address this issue, we have recently proposed a new SMS Spam Collection that, to the best of our knowledge, is the largest, public and real SMS dataset available for academic studies. However, as it was created by augmenting a previously existing database built using roughly the same sources, it is sensible to certify that there are no duplicates coming from them. So, in this paper we offer a comprehensive analysis of the new SMS Spam Collection in order to ensure that this does not happen, since duplicates may ease the task of learning SMS spam classifiers and, hence, could compromise the evaluation of methods. The analysis of results indicates that the procedure followed does not lead to near-duplicates and, consequently, the proposed dataset is reliable to use for evaluating and comparing the performance achieved by different classifiers. |
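One way to perform the kind of near-duplicate check described above is character-shingle Jaccard similarity over normalized messages; a small illustrative sketch (the threshold, shingle size and normalization are assumptions, not the authors' exact procedure):

```python
import re
from itertools import combinations

def shingles(text, k=3):
    """Set of k-character shingles of a lightly normalized message."""
    text = re.sub(r"\s+", " ", text.lower().strip())
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

messages = [
    "WINNER!! You have won a prize, call now",
    "winner! you have won a prize call now!!",
    "Are we still meeting for lunch today?",
]
sets = [shingles(m) for m in messages]
for (i, si), (j, sj) in combinations(enumerate(sets), 2):
    sim = jaccard(si, sj)
    if sim >= 0.5:  # flag as a possible near-duplicate
        print(f"messages {i} and {j} look like near-duplicates (J = {sim:.2f})")
```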
A High-Performance SPWM Controller for Three-Phase UPS Systems Operating Under Highly Nonlinear Loads | This paper presents the design of a high-performance sinusoidal pulsewidth modulation (SPWM) controller for three-phase uninterruptible power supply (UPS) systems that are operating under highly nonlinear loads. The classical SPWM method is quite effective in controlling the RMS magnitude of the UPS output voltages. However, it is not good enough in compensating the harmonics and the distortion caused specifically by the nonlinear currents drawn by the rectifier loads. The distortion becomes more severe at high power where the switching frequency has to be reduced due to the efficiency concerns. This study proposes a new design strategy that overcomes the limitations of the classical RMS control. It adds inner loops to the closed-loop control system effectively that enables successful reduction of harmonics and compensation of distortion at the outputs. Simulink is used to analyze, develop, and design the controller using the state-space model of the inverter. The controller is implemented in the TMS320F2808 DSP by Texas Instruments, and the performance is evaluated experimentally using a three-phase 10 kVA transformer isolated UPS under all types of load conditions. In conclusion, the experimental results demonstrate that the controller successfully achieves the steady-state RMS voltage regulation specifications as well as the total harmonic distortion and the dynamic response requirements of major UPS standards. |
6.5 kV Si/SiC hybrid power module: An ideal next step? | Silicon carbide (SiC) power switches such as JFETs or MOSFETs have demonstrated superior advantages over silicon (Si) power devices such as IGBTs, especially in terms of significantly reduced switching losses. A major issue facing large-scale adoption of SiC power devices is still their much higher cost. This paper proposes the Si/SiC hybrid power module (HPM) as a natural next step for high-voltage applications to address the cost issue. In the proposed Si/SiC HPM, a SiC JFET is connected in parallel with a Si IGBT to combine the advantages of both the IGBT and the JFET. A 6.5 kV HPM is developed based on a Si IGBT and a SiC JFET as an example to demonstrate its superior cost/performance. The switching loss can be reduced by more than 70% at a cost about 70% higher than that of the Si IGBT. This work is especially relevant for high-voltage applications such as medium-voltage motor drives, FACTS and HVDC systems. |
Remote Sensing Image Scene Classification Using Bag of Convolutional Features | More recently, remote sensing image classification has been moving from pixel-level interpretation to scene-level semantic understanding, which aims to label each scene image with a specific semantic class. While significant efforts have been made in developing various methods for remote sensing image scene classification, most of them rely on handcrafted features. In this letter, we propose a novel feature representation method for scene classification, named bag of convolutional features (BoCF). Different from the traditional bag of visual words-based methods in which the visual words are usually obtained by using handcrafted feature descriptors, the proposed BoCF generates visual words from deep convolutional features using off-the-shelf convolutional neural networks. Extensive evaluations on a publicly available remote sensing image scene classification benchmark and comparison with the state-of-the-art methods demonstrate the effectiveness of the proposed BoCF method for remote sensing image scene classification. |
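The BoCF encoding itself, treating every spatial position of a convolutional feature map as a local descriptor, quantizing against a learned codebook, and histogramming the assignments, can be sketched as follows (the feature maps are random stand-ins for real CNN activations, and the codebook size is illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# stand-in conv feature maps: 100 images, 14x14 spatial grid, 256 channels
feature_maps = rng.normal(size=(100, 14, 14, 256))

# 1) pool all local descriptors (one per spatial position) and learn a codebook
descriptors = feature_maps.reshape(-1, 256)
codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(descriptors)

# 2) encode each image as a normalized histogram of visual-word assignments
def bocf_histogram(fmap, codebook):
    words = codebook.predict(fmap.reshape(-1, fmap.shape[-1]))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

bocf = np.stack([bocf_histogram(f, codebook) for f in feature_maps])
print(bocf.shape)  # (100, 64): one bag-of-convolutional-features vector per image
```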
Online Deep Learning: Learning Deep Neural Networks on the Fly | Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch learning setting, which requires the entire training data to be made available prior to the learning task. This is not scalable for many real-world scenarios where new data arrives sequentially in a stream form. We aim to address an open challenge of “Online Deep Learning” (ODL) for learning DNNs on the fly in an online setting. Unlike traditional online learning that often optimizes some convex objective function with respect to a shallow model (e.g., a linear/kernel-based hypothesis), ODL is significantly more challenging since the optimization of the DNN objective function is non-convex, and regular backpropagation does not work well in practice, especially for online learning settings. In this paper, we present a new online deep learning framework that attempts to tackle the challenges by learning DNN models of adaptive depth from a sequence of training data in an online learning setting. In particular, we propose a novel Hedge Backpropagation (HBP) method for online updating the parameters of DNN effectively, and validate the efficacy of our method on large-scale data sets, including both stationary and concept drifting scenarios. |
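The Hedge-style weighting of per-depth classifiers can be illustrated in isolation; a toy sketch in which each "layer" contributes a prediction and its weight is discounted by its loss (the real HBP also backpropagates through the shared layers, which is omitted here, and the discount factor is an assumption):

```python
import numpy as np

def hedge_combine(alpha, per_layer_probs):
    """Weighted ensemble of per-depth class-probability predictions."""
    return np.tensordot(alpha, per_layer_probs, axes=1)

def hedge_update(alpha, per_layer_probs, y, beta=0.9, floor=1e-3):
    """Discount each depth's weight by beta**loss, keep a small floor, renormalize."""
    losses = np.array([1.0 - p[y] for p in per_layer_probs])
    alpha = alpha * np.power(beta, losses)
    alpha = np.maximum(alpha, floor / len(alpha))
    return alpha / alpha.sum()

rng = np.random.default_rng(0)
alpha = np.full(3, 1 / 3)  # equal initial trust in three depths
for _ in range(200):
    y = rng.integers(0, 2)
    # stand-in per-depth predictions: only the third depth is informative here
    probs = np.stack([rng.dirichlet([1, 1]),
                      rng.dirichlet([1, 1]),
                      np.eye(2)[y] * 0.8 + 0.1])
    pred = hedge_combine(alpha, probs).argmax()  # online prediction
    alpha = hedge_update(alpha, probs, y)
print(alpha)  # weight concentrates on the informative depth
```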
Lenselet image compression scheme based on subaperture images streaming | Plenoptic cameras capture the light field of a scene with a single shot and produce lenselet images. From a lenselet image, the light field can be reconstructed, from which we can render images with different viewpoints and focal lengths. Because of the large data volume, a highly efficient image compression scheme for storage and transmission is urgently needed. Containing 4D light field information, lenselet images carry much more redundancy than traditional 2D images. In this paper, we propose a subaperture image streaming scheme to compress lenselet images, in which rotation scan mapping is adopted to further improve compression efficiency. The experimental results show that our approach can efficiently compress the redundancy in lenselet images and outperforms traditional image compression methods. |
A Geometric Approach to Lower Bounds for Approximate Near-Neighbor Search and Partial Match | This work investigates a geometric approach to proving cell probe lower bounds for data structure problems. We consider the {\em approximate nearest neighbor search problem} on the Boolean hypercube $(\{0,1\}^d, \|\cdot\|_1)$ with $d=\Theta(\log n)$. We show that any (randomized) data structure for the problem that answers $c$-approximate nearest neighbor search queries using $t$ probes must use space at least $n^{1+\Omega(1/ct)}$. In particular, our bound implies that any data structure that uses space $\tilde{O}(n)$ with polylogarithmic word size, and with constant probability gives a constant approximation to nearest neighbor search queries, must be probed $\Omega(\log n/ \log\log n)$ times. This improves on the lower bound of $\Omega(\log\log d/\log\log\log d)$ probes shown by Chakrabarti and Regev~\cite{ChakrabartiR04} for any polynomial space data structure, and the $\Omega(\log\log d)$ lower bound of Patrascu and Thorup~\cite{PatrascuT07} for linear space data structures. Our lower bound holds for the {\em near neighbor problem}, where the algorithm knows in advance a good approximation to the distance to the nearest neighbor. Additionally, it is an {\em average case} lower bound for the natural distribution for the problem. Our approach also gives the same bound for $(2-\frac{1}{c})$-approximation to the farthest neighbor problem. For the case of non-adaptive algorithms we can improve the bound slightly and show an $\Omega(\log n)$ lower bound on the time complexity of data structures with $O(n)$ space and logarithmic word size. We also show similar lower bounds for the partial match problem: any randomized $t$-probe data structure that solves the partial match problem on $\{0,1,\star\}^d$ for $d=\Theta(\log n)$ must use space $n^{1+\Omega(1/t)}$. This implies an $\Omega(\log n/\log\log n)$ lower bound for the time complexity of near linear space data structures, slightly improving the $\Omega(\log n /(\log \log n)^2)$ lower bound from~\cite{PatrascuT06a},\cite{JayramKKR03} for this range of $d$. Recently and independently Patrascu achieved similar bounds \cite{patrascu08}. Our results also generalize to approximate partial match, improving on the bounds of \cite{BarkolR02,PatrascuT06a}. |
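For readability, the main trade-off stated in the abstract can be written out explicitly; the second display simply solves the space bound for t at near-linear space, matching the query-time consequence quoted above.

```latex
% The space/time trade-off stated above, written out: any randomized t-probe data
% structure for c-approximate near-neighbor search on the hypercube with
% d = Theta(log n) needs space
\[
  s \;\ge\; n^{\,1 + \Omega(1/(ct))}.
\]
% At near-linear space s = \tilde{O}(n) with polylogarithmic word size this forces
% (1/(ct)) \log n = O(\log\log n), i.e. for constant approximation c the number of
% probes satisfies
\[
  t \;=\; \Omega\!\left(\frac{\log n}{\log\log n}\right).
\]
```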
Deep Chain HDRI: Reconstructing a High Dynamic Range Image from a Single Low Dynamic Range Image | Recently, high dynamic range (HDR) imaging has attracted much attention as a technology to reflect human visual characteristics owing to the development of the display and camera technology. This paper proposes a novel deep neural network model that reconstructs an HDR image from a single low dynamic range (LDR) image. The proposed model is based on a convolutional neural network composed of dilated convolutional layers and infers LDR images with various exposures and illumination from a single LDR image of the same scene. Then, the final HDR image can be formed by merging these inference results. It is relatively simple for the proposed method to find the mapping between the LDR and an HDR with a different bit depth because of the chaining structure inferring the relationship between the LDR images with brighter (or darker) exposures from a given LDR image. The method not only extends the range but also has the advantage of restoring the light information of the actual physical world. The proposed method is an end-to-end reconstruction process, and it has the advantage of being able to easily combine a network to extend an additional range. In the experimental results, the proposed method shows quantitative and qualitative improvement in performance, compared with the conventional algorithms. |
Fat Injection: A Systematic Review of Injection Volumes by Facial Subunit | Fat grafting to the aging face has become an integral component of esthetic surgery. However, the amount of fat to inject to each area of the face is not standardized and has been based mainly on the surgeon’s experience. The purpose of this study was to perform a systematic review of injected fat volume to different facial zones. A systematic review of the literature was performed through a MEDLINE search using keywords “facial,” “fat grafting,” “lipofilling,” “Coleman technique,” “autologous fat transfer,” and “structural fat grafting.” Articles were then sorted by facial subunit and analyzed for: author(s), year of publication, study design, sample size, donor site, fat preparation technique, average and range of volume injected, time to follow-up, percentage of volume retention, and complications. Descriptive statistics were performed. Nineteen articles involving a total of 510 patients were included. Rhytidectomy was the most common procedure performed concurrently with fat injection. The mean volume of fat injected to the forehead is 6.5 mL (range 4.0–10.0 mL); to the glabellar region 1.4 mL (range 1.0–4.0 mL); to the temple 5.9 mL per side (range 2.0–10.0 mL); to the eyebrow 5.5 mL per side; to the upper eyelid 1.7 mL per side (range 1.5–2.5 mL); to the tear trough 0.65 mL per side (range 0.3–1.0 mL); to the infraorbital area (infraorbital rim to lower lid/cheek junction) 1.4 mL per side (range 0.9–3.0 mL); to the midface 1.4 mL per side (range 1.0–4.0 mL); to the nasolabial fold 2.8 mL per side (range 1.0–7.5 mL); to the mandibular area 11.5 mL per side (range 4.0–27.0 mL); and to the chin 6.7 mL (range 1.0–20.0 mL). Data on exactly how much fat to inject to each area of the face in facial fat grafting are currently limited and vary widely based on different methods and anatomical terms used. This review offers the ranges and the averages for the injected volume in each zone. |
Hayek's Ricardo Effect: A Second Look | I. Introduction In this article we review a long-standing controversy in twentieth-century economic thought: the debate over Hayek's Ricardo effect. Hayek developed his interpretation of the Ricardo effect in the context of his theory of business cycles primarily in the late 1930s and early 1940s. At that time Hayek's use of the familiar proposition was held up to close scrutiny by many talented critics, including Nicholas Kaldor, H. D. Dickinson, and R. G. Hawtrey, all of whom, according to Hayek, failed to understand what he was saying. (1) In each case Hayek countered their criticism with a restatement of his proposition which in turn failed to satisfy the critics. (2) Indeed, this apparent miscommunication was still taking place as late as 1969 when Hayek wrote his last piece on the Ricardo effect in answer to a criticism leveled by Sir John Hicks. The purpose of our article is to discover whether in this debate it was Hayek who was out of step with the profession or the profession that was out of step with Hayek. We shall argue that the reason that Hayek's argument seemed so elusive to his contemporary colleagues (and some later critics) was that his method of analysis was foreign to their way of thinking. While English economists in the 1930s (especially after Keynes published the General Theory) were concerned primarily with the static problem of balancing income and expenditure flows at acceptable levels of employment, Hayek was attempting to develop a dynamic theory of business cycles that involved tracing out the path of adjustment of the capital stock of an economy from one equilibrium state to another. This problem of describing the transitional process only became the subject of serious professional attention long after the debate over the Ricardo effect was concluded. (3) In the course of our exposition, we will demonstrate that Hayek's Ricardo effect was not, in Blaug's words, "only another instance of the vice of neoclassical economics: the hasty application of static theorems to the real world." (4) Indeed, we shall argue that this is the last accusation one could logically hurl at Hayek's analysis. The very reason why Hayek encountered so much difficulty in communicating his message was precisely that he was not presenting an exercise in comparative statics but was rather hypothesizing a particular adjustment process where the final equilibrium state depended upon the particular path of adjustment followed in the economy. In recent years the founders of the 'new classical economics' have praised the broad outlines of Hayek's approach to the business cycle. Robert Lucas pointed out that as early as 1929 Hayek articulated what remains today the single most important theoretical question in business cycle research. Hayek asked, How can cyclical phenomena be incorporated "into the system of economic equilibrium theory?" (5) Lucas goes on to regret the unfortunate Keynesian diversion of research effort from a thoroughgoing theory of the business cycle to what, in Lucas's words, is the "simpler question of the determination of output at a point in time." (6) It is well known that Hayek also regretted the unfortunate "Keynesian diversion" and resisted the redirection of research away from what he considered to be one of the most important macroeconomic questions: how production structures adjust to the underlying demand conditions and savings patterns of the community. 
It was this concern that made him wary of an economic theory that made it appear possible to push an economy into a perpetual state of boom. (7) He feared that instead of perpetual boom, inflationist policies would only result in short-term, illusory gains followed by a collapse with all of its undesirable macroeconomic effects. Hayek's approach (much like the modern approach of Lucas and others) was to derive macroeconomic consequences from an analysis of the self-interested behavior of market participants. … |
An open annotation ontology for science on web 3.0 | BACKGROUND
There is currently a gap between the rich and expressive collection of published biomedical ontologies, and the natural language expression of biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges.
METHODS
Initial requirements for AO were elicited by analysis of integration needs between biomedical web communities, and of needs for representing and integrating results of biomedical text mining. Analysis of strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools were then developed along with a metadata model in OWL, and deployed to users at a major pharmaceutical company and a major academic center to gather feedback and additional requirements for the ontology. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work.
RESULTS
This paper presents Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables "stand-off" or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated but is not required to be under update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotation. AO is freely available under open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO's Google Code page: http://code.google.com/p/annotation-ontology/ .
CONCLUSIONS
The Annotation Ontology meets critical requirements for an open, freely shareable model in OWL, of annotation metadata created against scientific documents on the Web. We believe AO can become a very useful common model for annotation metadata on Web documents, and will enable biomedical domain ontologies to be used quite widely to annotate the scientific literature. Potential collaborators and those with new relevant use cases are invited to contact the authors. |
Stereotactic treatment of intracerebral hematoma by means of a plasminogen activator: a multicenter randomized controlled trial (SICHPA). | BACKGROUND AND PURPOSE
Treatment of intracerebral hematoma (ICH) is controversial. An advantage of neurosurgical intervention over conservative treatment of ICH has not been established. Recent reports suggest a favorable effect of stereotactic blood clot removal after liquefaction by means of a plasminogen activator. The SICHPA trial was aimed at investigating the efficacy of this treatment.
METHODS
A stereotactically placed catheter was used to instill urokinase to liquefy and drain the ICH in 6-hour intervals over 48 hours. From 1996 to 1999, 13 centers entered 71 patients into the study. Patients were randomized into a surgical group (n=36) and a nonsurgical group (n=35). Admission criteria were the following: age >45 years, spontaneous supratentorial ICH, Glasgow Eye Motor score ranging from 2 to 10, ICH volume >10 cm3, and treatment within 72 hours. The primary end point was death at 6 months. As secondary end points, ICH volume reduction and overall outcome measured by the modified Rankin scale were chosen. The trial was prematurely stopped as a result of slow patient accrual.
RESULTS
Seventy patients were analyzed. Overall mortality at day 180 after stroke was 57%; this included 20 of 36 patients (56%) in the surgical group and 20 of 34 patients (59%) in the nonsurgical group. A significant ICH volume reduction was achieved by the intervention (10% to 20%, P<0.05). Logistic regression analysis indicated the possibility of efficacy for surgical treatment (odds ratio, 0.23; 95% confidence interval, 0.05 to 1.20; P=0.08). The odds ratio of mortality combined with modified Rankin scale score 5 at 180 days was also not statistically significant (odds ratio, 0.52; 95% confidence interval, 1.2 to 2.3; P=0.38).
CONCLUSIONS
Stereotactic aspiration can be performed safely and in a relatively uniform manner; it leads to a modest reduction in hematoma volume (18 mL over 7 days, compared with a 7-mL reduction in the control group) and therefore may improve prognosis. |
Lessons learned in the conduct of a global, large simple trial of treatments indicated for schizophrenia. | Large, "practical" or streamlined trials (LSTs) are used to study the effectiveness and/or safety of medicines in real world settings with minimal study imposed interventions. While LSTs have benefits over traditional randomized clinical trials and observational studies, there are inherent challenges to their conduct. Enrollment and follow-up of a large study sample of patients with mental illness pose a particular difficulty. To assist in overcoming operational barriers in future LSTs in psychiatry, this paper describes the recruitment and observational follow-up strategies used for the ZODIAC study, an international, open-label LST, which followed 18,239 persons randomly assigned to one of two treatments indicated for schizophrenia for 1 year. ZODIAC enrolled patients in 18 countries in North America, South America, Europe, and Asia using broad study entry criteria and required minimal clinical care intervention. Recruitment of adequate numbers and continued engagement of both study centers and subjects were significant challenges. Strategies implemented to mitigate these in ZODIAC include global study expansion, study branding, field coordinator and site relations programs, monthly site newsletters, collection of alternate contact information, conduct of national death index (NDI) searches, and frequent sponsor, contract research organization (CRO) and site interaction to share best practices and address recruitment challenges quickly. We conclude that conduct of large LSTs in psychiatric patient populations is feasible, but importantly, realistic site recruitment goals and maintaining site engagement are key factors that need to be considered in early study planning and conduct. |
On the origin of the notion of GW et cetera | The notion of gravitational wave (GW) came forth originally as a by-product of the linear approximation of general relativity (GR). It can now be proved that this approximation is quite inadequate for a proper study of the hypothetical GWs. The significant role of the approximations beyond the linear stage is emphasized. |
On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts towards Memetic Algorithms | Short abstract, isn't it? |
Extracting deep bottleneck features using stacked auto-encoders | In this work, a novel training scheme for generating bottleneck features from deep neural networks is proposed. A stack of denoising auto-encoders is first trained in a layer-wise, unsupervised manner. Afterwards, the bottleneck layer and an additional layer are added and the whole network is fine-tuned to predict target phoneme states. We perform experiments on a Cantonese conversational telephone speech corpus and find that increasing the number of auto-encoders in the network produces more useful features, but requires pre-training, especially when little training data is available. Using more unlabeled data for pre-training only yields additional gains. Evaluations on larger datasets and on different system setups demonstrate the general applicability of our approach. In terms of word error rate, relative improvements of 9.2% (Cantonese, ML training), 9.3% (Tagalog, BMMI-SAT training), 12% (Tagalog, confusion network combinations with MFCCs), and 8.7% (Switchboard) are achieved. |
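A compact sketch, under stated assumptions, of the training scheme summarized above: denoising auto-encoder layers are pre-trained greedily and unsupervised, then a narrow bottleneck layer and an additional hidden layer are appended and the stack is topped with a phoneme-state classifier for supervised fine-tuning. Layer sizes, the noise level, the 42-dimensional bottleneck, and the number of phoneme states are placeholders rather than values from the paper.

```python
# Sketch of greedy layer-wise pretraining of denoising auto-encoders followed by
# adding a bottleneck layer; not the authors' exact recipe.
import torch
import torch.nn as nn

def pretrain_layer(encoder, data, epochs=5, noise_std=0.1, lr=1e-3):
    """Train one denoising auto-encoder layer to reconstruct its clean input."""
    decoder = nn.Linear(encoder.out_features, encoder.in_features)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        noisy = data + noise_std * torch.randn_like(data)
        recon = decoder(torch.sigmoid(encoder(noisy)))
        opt.zero_grad()
        loss = loss_fn(recon, data)
        loss.backward()
        opt.step()
    return encoder

# Greedy, unsupervised stacking of encoders (dimensions are placeholders).
x = torch.randn(1024, 440)                      # e.g. stacked spectral frames
sizes, encoders, h = [1024, 1024, 1024], [], x
for out_dim in sizes:
    enc = pretrain_layer(nn.Linear(h.shape[1], out_dim), h)
    encoders.append(enc)
    h = torch.sigmoid(enc(h)).detach()

# Append the narrow bottleneck plus an extra hidden layer and an output layer over
# phoneme states; the whole stack would then be fine-tuned with cross-entropy.
network = nn.Sequential(
    *[nn.Sequential(e, nn.Sigmoid()) for e in encoders],
    nn.Linear(sizes[-1], 42), nn.Sigmoid(),     # 42-dim bottleneck (illustrative)
    nn.Linear(42, 1024), nn.Sigmoid(),
    nn.Linear(1024, 3000),                      # phoneme-state logits (illustrative)
)
```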
Neural correlates of gratitude | Gratitude is an important aspect of human sociality, and is valued by religions and moral philosophies. It has been established that gratitude leads to benefits for both mental health and interpersonal relationships. It is thus important to elucidate the neurobiological correlates of gratitude, which are only now beginning to be investigated. To this end, we conducted an experiment during which we induced gratitude in participants while they underwent functional magnetic resonance imaging. We hypothesized that gratitude ratings would correlate with activity in brain regions associated with moral cognition, value judgment and theory of mind. The stimuli used to elicit gratitude were drawn from stories of survivors of the Holocaust, as many survivors report being sheltered by strangers or receiving lifesaving food and clothing, and having strong feelings of gratitude for such gifts. The participants were asked to place themselves in the context of the Holocaust and imagine what their own experience would feel like if they received such gifts. For each gift, they rated how grateful they felt. The results revealed that ratings of gratitude correlated with brain activity in the anterior cingulate cortex and medial prefrontal cortex, in support of our hypotheses. The results provide a window into the brain circuitry for moral cognition and positive emotion that accompanies the experience of benefitting from the goodwill of others. |
ACTRCE: Augmenting Experience via Teacher's Advice | Sparse reward is one of the most challenging problems in reinforcement learning (RL). Hindsight Experience Replay (HER) attempts to address this issue by converting a failure experience to a successful one by relabeling the goals. Despite its effectiveness, HER has limited applicability because it lacks a compact and universal goal representation. We present Augmenting experienCe via TeacheR's adviCE (ACTRCE), an efficient reinforcement learning technique that extends the HER framework using natural language as the goal representation. We first analyze the differences among goal representations, and show that ACTRCE can efficiently solve difficult reinforcement learning problems in challenging 3D navigation tasks, whereas HER with a non-language goal representation failed to learn. We also show that with language goal representations, the agent can generalize to unseen instructions, and even generalize to instructions with unseen lexicons. We further demonstrate that it is crucial to use hindsight advice to solve challenging tasks, but we also found that a small amount of hindsight advice is sufficient for the learning to take off, showing the practical aspect of the method. |
A Generative Model of People in Clothing | We present the first image-based generative model of people in clothing for the full body. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generating process into two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest an entirely data-driven approach to people generation is possible. |
Generating Synthetic Data for Text Recognition | Generating synthetic images is an art which emulates the natural process of image generation in the closest possible manner. In this work, we exploit such a framework for data generation in the handwritten domain. We render synthetic data using open source fonts and incorporate data augmentation schemes. As part of this work, we release a 9M synthetic handwritten word image corpus which could be useful for training deep network architectures and advancing the performance in handwritten word spotting and recognition tasks. |
A survey on measuring indirect discrimination in machine learning | Nowadays, many decisions are made using predictive models built on historical data. Predictive models may systematically discriminate against groups of people even if the computing process is fair and well-intentioned. Discrimination-aware data mining studies how to make predictive models free from discrimination, when the historical data on which they are built may be biased, incomplete, or even contain past discriminatory decisions. Discrimination refers to disadvantageous treatment of a person based on belonging to a category rather than on individual merit. In this survey we review and organize various discrimination measures that have been used for measuring discrimination in data, as well as in evaluating the performance of discrimination-aware predictive models. We also discuss related measures from other disciplines, which have not been used for measuring discrimination, but potentially could be suitable for this purpose. We computationally analyze properties of selected measures. We also review and discuss measuring procedures, and present recommendations for practitioners. The primary target audience is researchers in data mining, machine learning, pattern recognition, and statistical modeling who are developing new methods for non-discriminatory predictive modeling. In addition, practitioners and policy makers can use the survey to diagnose potential discrimination by predictive models. |
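Two of the most widely used group-level measures of the kind such surveys organize, the difference and the ratio of positive-decision rates between a protected group and the remaining population, can be computed directly from model decisions. The snippet below is an illustrative computation and is not meant to reproduce the survey's full catalogue of measures.

```python
# Illustrative group-discrimination measures: difference and ratio of positive-decision
# rates between a protected group and the rest of the population.
import numpy as np

def positive_rate(y_pred, mask):
    return np.mean(y_pred[mask] == 1)

def discrimination_measures(y_pred, protected):
    """y_pred: 0/1 decisions; protected: boolean mask for the protected group."""
    p_prot = positive_rate(y_pred, protected)
    p_rest = positive_rate(y_pred, ~protected)
    return {
        "difference": p_rest - p_prot,                              # 0 means parity
        "ratio": p_prot / p_rest if p_rest > 0 else float("nan"),   # 1 means parity
    }

# Toy usage: prints the positive-rate difference and ratio between the two groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
protected = np.array([True, True, True, True, True, False, False, False, False, False])
print(discrimination_measures(y_pred, protected))
```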
R-CNN for Small Object Detection | Existing object detection literature focuses on detecting a big object covering a large part of an image. The problem of detecting a small object covering a small part of an image is largely ignored. As a result, the state-of-the-art object detection algorithm renders unsatisfactory performance as applied to detect small objects in images. In this paper, we dedicate an effort to bridge the gap. We first compose a benchmark dataset tailored for the small object detection problem to better evaluate the small object detection performance. We then augment the state-of-the-art R-CNN algorithm with a context model and a small region proposal generator to improve the small object detection performance. We conduct extensive experimental validations for studying various design choices. Experiment results show that the augmented R-CNN algorithm improves the mean average precision by 29.8% over the original R-CNN algorithm on detecting small objects. |
Seamless Texture Atlases | Texture atlas parameterization provides an effective way to map a variety of color and data attributes from 2D texture domains onto polygonal surface meshes. However, the individual charts of such atlases are typically plagued by noticeable seams. We describe a new type of atlas which is seamless by construction. Our seamless atlas comprises all quadrilateral charts, and permits seamless texturing, as well as per-fragment down-sampling on rendering hardware and polygon simplification. We demonstrate the use of this atlas for capturing appearance attributes and producing seamless renderings. |
On the location dependence of convolutional neural network features | As the availability of geotagged imagery has increased, so has the interest in geolocation-related computer vision applications, ranging from wide-area image geolocalization to the extraction of environmental data from social network imagery. Encouraged by the recent success of deep convolutional networks for learning high-level features, we investigate the usefulness of deep learned features for such problems. We compare features extracted from various layers of convolutional neural networks and analyze their discriminative ability with regards to location. Our analysis spans several problem settings, including region identification, visualizing land cover in aerial imagery, and ground-image localization in regions without ground-image reference data (where we achieve state-of-the-art performance on a benchmark dataset). We present results on multiple datasets, including a new dataset we introduce containing hundreds of thousands of ground-level and aerial images in a large region centered around San Francisco. |
Virtual Assembly Analysis Tool and Architecture for e-Design and Realization Environment | Many customers are no longer satisfied with mass-produced goods. They are demanding customization and rapid delivery of innovative products. Many companies are now realizing that the best way to reduce life cycle costs is to evolve a more effective product development paradigm using Internet and web based technologies. Yet there remains a gap between current market demands and product development paradigms. The existing CAD systems require that product developers possess all the design analysis tools in-house making it impractical to employ all the needed and newest tools. Hence, this paper addresses how assembly operation analysis can be embedded transparently and remotely into a service-oriented collaborative assembly design environment. A new assembly operation analysis framework is introduced and a relevant architecture and tools are developed to realize the framework. Instead of the current sequential process for verifying and validating an assembly design, a new Virtual Assembly Analysis (VAA) method is introduced in the paper to predict the various effects of joining during actual collaborative design. As a case study, arc welding and riveting processes are investigated. New service-oriented VAA architecture and its VAA components are proposed and implemented on prototype mechanical assemblies. |
Large residual multiple view 3D CNN for false positive reduction in pulmonary nodule detection | Pulmonary nodule detection plays a significant role in the early detection and treatment of lung cancer. False positive reduction is one of the major components of pulmonary nodule detection systems. In this study, a novel method aimed at recognizing real pulmonary nodules among a large group of candidates is proposed. The method consists of three steps: appropriate receptive field selection, feature extraction, and a strategy for high-level feature fusion and classification. The dataset consists of 888 patients' chest low-dose computed tomography (LDCT) scans, selected from the publicly available LIDC-IDRI dataset. This dataset was annotated by the LUNA16 challenge organizers, resulting in 1186 nodules. Trivial data augmentation and dropout were applied in order to avoid overfitting. Our method achieved a high competition performance metric (CPM) of 0.735 and sensitivities of 78.8% and 83.9% at 1 and 4 false positives per scan, respectively. This study is also accompanied by detailed descriptions and a results overview in comparison with state-of-the-art solutions. |
Civility 2.0: A comparative analysis of incivility in online political discussion | Online political discussion amongst citizens has often been labelled uncivil. Indeed, as online discussion allows participants to remain relatively anonymous, and by extension, unaccountable for their behaviour, citizens often engage in angry, hostile, and derogatory discussion, taking the opportunity to attack the beliefs and values of others without fear of retribution. Some commentators believe that this type of incivility, however, could soon be a thing of the past as citizens increasingly turn to online social network sites such as Facebook.com to discuss politics. Facebook requires users, when registering for an account, to do so using their real name, and encourages them to attach a photograph and other personal details to their profile. As a result, users are both identified with and accountable for the comments they make, presumably making them less likely to engage in uncivil discussion. This paper aims to test this assumption by analysing the occurrence of incivility in reader comments left in response to political news articles by the Washington Post. Specifically, it will quantitatively content analyse the comments, comparing the occurrence of incivility evident in comments left on the Washington Post website with comments left on the Washington Post’s Facebook page. Analysis suggests that, in line with the hypothesis, these online platforms differ significantly when it comes to incivility. |
Creating full view panoramic image mosaics and environment maps | This paper presents a novel approach to creating full view panoramic mosaics from image sequences. Unlike current panoramic stitching methods, which usually require pure horizontal camera panning, our system does not require any controlled motions or constraints on how the images are taken (as long as there is no strong motion parallax). For example, images taken from a hand-held digital camera can be stitched seamlessly into panoramic mosaics. Because we represent our image mosaics using a set of transforms, there are no singularity problems such as those existing at the top and bottom of cylindrical or spherical maps. Our algorithm is fast and robust because it directly recovers 3D rotations instead of general 8 parameter planar perspective transforms. Methods to recover camera focal length are also presented. We also present an algorithm for efficiently extracting environment maps from our image mosaics. By mapping the mosaic onto an arbitrary texture-mapped polyhedron surrounding the origin, we can explore the virtual environment using standard 3D graphics viewers and hardware without requiring special-purpose players. |
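A small numpy sketch of the rotational representation discussed above, not the paper's estimation procedure itself: for a purely rotating camera with known focal length f, an inter-image homography H satisfies H ~ K R K^(-1), so a rotation estimate can be recovered from H and projected onto the nearest true rotation.

```python
# Recover a rotation from an inter-image homography of a purely rotating camera
# with known intrinsics; the SVD step projects onto the nearest orthogonal matrix.
import numpy as np

def rotation_from_homography(H, f, cx=0.0, cy=0.0):
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
    R_approx = np.linalg.inv(K) @ H @ K          # defined only up to scale
    U, _, Vt = np.linalg.svd(R_approx)
    R = U @ Vt                                   # nearest orthogonal matrix
    if np.linalg.det(R) < 0:                     # enforce a proper rotation
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

# With per-image rotations R_i in hand, each pixel ray can be mapped into a common
# panoramic frame, avoiding the pole singularities of cylindrical/spherical maps.
```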
Algorithm Selection for Classification Problems via Cluster-based Meta-features | Meta-features describe the characteristics of the datasets to facilitate algorithm selection. This paper proposes a new set of meta-features based on clustering the instances within datasets. We propose the use of a greedy clustering algorithm, and evaluate the meta-features generated based on the learning curves produced by the Random Forest algorithm. We also compared the utility of the proposed meta-features against preexisting meta-features described in the literature, and evaluated the applicability of these meta-features over a sample of UCI datasets. Our results show that these meta-features do indeed improve the performance when applied to the algorithm selection task. |
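An illustrative sketch of cluster-based meta-features: instances of a dataset are grouped by a simple greedy, threshold-based clustering (an assumption here; the paper's own greedy algorithm may differ), and summary statistics of the resulting clusters become the meta-feature vector that a meta-learner would consume.

```python
# Illustrative cluster-based meta-features built from a simple greedy clustering.
import numpy as np

def greedy_cluster(X, radius=1.0):
    """Assign each instance to the first centre within `radius`, else open a new cluster."""
    centres, labels = [], []
    for x in X:
        dists = [np.linalg.norm(x - c) for c in centres]
        if dists and min(dists) <= radius:
            labels.append(int(np.argmin(dists)))
        else:
            centres.append(x)
            labels.append(len(centres) - 1)
    return np.array(labels), np.array(centres)

def cluster_meta_features(X, radius=1.0):
    labels, centres = greedy_cluster(X, radius)
    sizes = np.bincount(labels)
    return {
        "n_clusters": len(centres),
        "clusters_per_instance": len(centres) / len(X),
        "mean_cluster_size": float(sizes.mean()),
        "max_cluster_fraction": float(sizes.max() / len(X)),
    }

# Such a meta-feature vector, computed per dataset, would feed a meta-learner
# (e.g. a Random Forest) that predicts which algorithm to select.
```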
A Cross-Country Comparison With Payment Diary Survey Data | We measure consumers' use of cash by harmonizing payment diary surveys from seven countries: Australia, Austria, Canada, France, Germany, the Netherlands, and the United States (conducted 2009 through 2012). Our paper finds important cross-country differences, for example, the level of cash usage differs across countries. Cash has not disappeared as a payment instrument, especially for low-value transactions. We also find that the use of cash is strongly correlated with transaction size, demographics, and point-of-sale characteristics, such as merchant card acceptance and venue. |
The relation between breakfast skipping and school performance in adolescents | Breakfast skipping is common in adolescents, but research on the effects of breakfast skipping on school performance is scarce. This cross-sectional survey study of 605 adolescents aged 11–18 years investigated whether adolescents who habitually skip breakfast have lower end-of-term grades than adolescents who eat breakfast daily. Additionally, the roles of sleep behavior, namely chronotype, and attention were explored. Results showed that breakfast skippers performed lower at school than breakfast eaters. The findings were similar for younger and older adolescents and for boys and girls. Adolescents with an evening chronotype were more likely to skip breakfast, but chronotype was unrelated to school performance. Furthermore, attention problems partially mediated the relation between breakfast skipping and school performance. This large-scale study emphasizes the importance of breakfast as a determinant for school performance. The results give reason to investigate the mechanisms underlying the relation between skipping breakfast, attention, and school performance in more detail. Proper nutrition is commonly believed to be important for school performance; it is considered to be an essential prerequisite for the potential to learn in children (Taras, 2005). In the Western world, where most school-aged children are well nourished, emphasis is placed on eating breakfast for optimal school performance. Eating breakfast might be particularly important during adolescence. Adolescents have high nutritional needs, due to brain development processes and physical growth, while at the same time they have the highest rate of breakfast skipping among school-aged children (Hoyland, Dye, & Lawton, 2009; Rampersaud, 2009). However, not much is known about the effects of breakfast skipping on their school performance. Reviews indicate that only a few studies have investigated the relationship between breakfast skipping and school performance in adolescents (Ells et al., 2008; Hoyland et al., 2009; Rampersaud, 2009; Taras, 2005). Therefore, the current study investigated the relation between habitual breakfast consumption and school performance in adolescents attending secondary school (age range 11–18 years). In addition, we explored two potentially important mechanisms underlying this relationship by investigating the roles of sleep behavior and attention. Depending on the definition of breakfast skipping, 10–30% of the adolescents (age range 11–18 years) can be classified as breakfast skippers (Rampersaud, Pereira, Girard, Adams, & Metzl, 2005). Adolescent breakfast skippers are more often girls and more often have a lower level of education (Keski-Rahkonen, Kaprio, Rissanen, Virkkunen, & Rose, 2003; Rampersaud et al., 2005; Shaw, 1998). Adolescent breakfast skippers are characterized by an unhealthy lifestyle, with behaviors such as smoking, irregular exercise, and alcohol and drug use. They make more unhealthy food choices and have a higher body mass index than breakfast eaters. 
Furthermore, they show more disinhibited behavior (Keski-Rahkonen et al., 2003; Rampersaud et al., 2005). Reasons adolescents give for skipping breakfast are that they are not hungry or do not have enough time (Shaw, 1998), although dieting seems to play a role as well (Rampersaud et al., 2005; Shaw, 1998). Experimental studies have investigated the relationship between breakfast skipping and cognitive functioning, which is assumed to underlie school performance. Breakfast skipping in children and adolescents appeared to affect memory and attention, especially toward the end of the morning (Ells et al., 2008; Hoyland et al., 2009; Rampersaud et al., 2005). |
Attacks on State-of-the-Art Face Recognition using Attentional Adversarial Attack Generative Network | With the broad use of face recognition, its weaknesses have gradually emerged, in particular its vulnerability to attack. It is therefore important to study how face recognition networks can be attacked. In this paper, we focus on a novel way of attacking a face recognition network that misleads the network into identifying someone as the target person, rather than merely causing an inconspicuous misclassification. For this purpose, we introduce a specific attentional adversarial attack generative network (AGN) to generate fake face images. To capture the semantic information of the target person, this work adds a conditional variational autoencoder and attention modules to learn the instance-level correspondences between faces. Unlike a traditional two-player GAN, this work introduces face recognition networks as a third player that participates in the competition between generator and discriminator, which allows the attacker to impersonate the target person better. The generated faces, which are unlikely to arouse the notice of onlookers, can evade recognition by state-of-the-art networks, and most of them are recognized as the target person. |
Gray and color image contrast enhancement by the curvelet transform | We present in this paper a new method for contrast enhancement based on the curvelet transform. The curvelet transform represents edges better than wavelets, and is therefore well-suited for multiscale edge enhancement. We compare this approach with enhancement based on the wavelet transform, and the Multiscale Retinex. In a range of examples, we use edge detection and segmentation, among other processing applications, to provide for quantitative comparative evaluation. Our findings are that curvelet based enhancement out-performs other enhancement methods on noisy images, but on noiseless or near noiseless images curvelet based enhancement is not remarkably better than wavelet based enhancement. |
Fast and Accurate Image Super Resolution by Deep CNN with Skip Connection and Network in Network | We propose a highly efficient and faster Single Image Super-Resolution (SISR) model with deep convolutional neural networks (Deep CNN). Deep CNNs have recently shown significant reconstruction performance on single-image super-resolution. The current trend is using deeper CNN layers to improve performance. However, deep models demand larger computation resources and are not suitable for network edge devices like mobile, tablet and IoT devices. Our model achieves state-of-the-art reconstruction performance with at least 10 times lower calculation cost by Deep CNN with Residual Net, Skip Connection and Network in Network (DCSCN). A combination of deep CNNs and skip connection layers is used as a feature extractor for image features on both local and global areas. Parallelized 1x1 CNNs, like the one called Network in Network, are also used for image reconstruction. That structure reduces the dimensions of the previous layer’s output for faster computation with less information loss, and makes it possible to process original images directly. We also optimize the number of layers and filters of each CNN to significantly reduce the calculation cost. Thus, the proposed algorithm not only achieves state-of-the-art performance but also achieves faster and more efficient computation. Code is available at https://github.com/jiny2001/dcscn-super-resolution. |
Design and evaluation of a hardware/software FPGA-based system for fast image processing | We evaluate the performance of a hardware/software architecture designed to perform a wide range of fast image processing tasks. The system architecture is based on hardware featuring a Field Programmable Gate Array (FPGA) co-processor and a host computer. A LabVIEW host application controlling a frame grabber and an industrial camera is used to capture and exchange video data with the hardware co-processor via a high speed USB2.0 channel, implemented with a standard macrocell. The FPGA accelerator is based on an Altera Cyclone II chip and is designed as a system-on-a-programmable-chip (SOPC) with the help of an embedded Nios II software processor. The SOPC system integrates the CPU, external and on-chip memory, the communication channel and typical image filters appropriate for the evaluation of the system performance. Measured transfer rates over the communication channel and processing times for the implemented hardware/software logic are presented for various frame sizes. A comparison with other solutions is given and a range of applications is also discussed. |
Improving Retrieval Performance for Verbose Queries via Axiomatic Analysis of Term Discrimination Heuristic | The number of terms in a query is a query-specific constant that is typically ignored in retrieval functions. However, previous studies have shown that the performance of retrieval models varies for different query lengths, and it usually degrades when query length increases. A possible reason for this issue is the presence of extraneous terms in longer queries, which makes it a challenge for retrieval models to distinguish between the key and complementary concepts of the query. As a signal of the importance of a term, inverse document frequency (IDF) can be used to discriminate query terms. In this paper, we propose a constraint to model the interaction between query length and IDF. Our theoretical analysis shows that current state-of-the-art retrieval models, such as BM25, do not satisfy the proposed constraint. We further analyze the BM25 model and suggest a modification to adapt BM25 so that it adheres to the new constraint. Our experiments on three TREC collections demonstrate that the proposed modification outperforms the baselines, especially for verbose queries. |
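To make the interaction concrete, the snippet below shows standard BM25 term weighting together with one illustrative way of coupling IDF with query length, raising the IDF component to an exponent that grows with the number of query terms so that discriminative terms matter more in verbose queries. The functional form and the gamma parameter are assumptions for illustration only; they are not the modification proposed in the paper.

```python
# Standard BM25 term weighting plus an *illustrative* query-length-aware IDF scaling.
import math

def bm25_term(tf, df, doc_len, avg_doc_len, n_docs, q_len,
              k1=1.2, b=0.75, gamma=0.1):
    idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
    tf_part = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    idf_scaled = idf ** (1.0 + gamma * math.log(1 + q_len))  # grows with query length
    return idf_scaled * tf_part

# With this scaling, the same (tf, df) pair contributes relatively more, compared
# with a low-IDF term, when the query is long (q_len=12) than when it is short (q_len=2).
```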
Augmented Reality-Based Indoor Navigation Using Google Glass as a Wearable Head-Mounted Display | This research comprehensively illustrates the design, implementation and evaluation of a novel markerless environment tracking technology for an augmented reality based indoor navigation application, adapted to efficiently operate on a proprietary head-mounted display. Although the display device used, Google Glass, had certain pitfalls such as short battery life, slow processing speed, and a lower quality visual display, the tracking technology was able to complement these limitations by rendering a very efficient, precise, and intuitive navigation experience. The performance assessments, conducted on the basis of efficiency and accuracy, substantiated the utility of the device for everyday navigation scenarios, whereas a subsequently conducted subjective evaluation of handheld and wearable devices also corroborated the wearable as the preferred device for indoor navigation. |
Do Chinese and English speakers think about time differently? Failure of replicating Boroditsky (2001) | English uses the horizontal spatial metaphors to express time (e.g., the good days ahead of us). Chinese also uses the vertical metaphors (e.g., 'the month above' to mean last month). Do Chinese speakers, then, think about time in a different way than English speakers? Boroditsky [Boroditsky, L. (2001). Does language shape thought? Mandarin and English speakers' conceptions of time. Cognitive Psychology, 43(1), 1-22] claimed that they do, and went on to conclude that 'language is a powerful tool in shaping habitual thought about abstract domains' (such as time). By estimating the frequency of usage, we found that Chinese speakers actually use the horizontal spatial metaphors more often than the vertical metaphors. This offered no logical ground for Boroditsky's claim. We were also unable to replicate her experiments in four different attempts. We conclude that Chinese speakers do not think about time in a different way than English speakers just because Chinese also uses the vertical spatial metaphors to express time. |
Dual band sleeve dipole antenna for WLAN applications | A dual band sleeve dipole antenna for WLAN applications is presented. The antenna covers both WLAN bands. The proposed antenna exhibits excellent omni-directional characteristics and good reflection properties, with S11 values less than -10 dB. The dipole is constructed from two concentric hollow cylinders with different radii and lengths. A complicated balun network is avoided by feeding the antenna at the center with the help of a coaxial cable. The scattering parameters and radiation patterns are simulated using CST Microwave Studio. These simulation results are validated by the measured results. The antenna is well suited for airborne and outdoor applications. |
Analysis of stairs-climbing ability for a tracked reconfigurable modular robot | Stairs-climbing ability is a crucial performance measure for mobile robots in urban environment missions such as urban search and rescue or urban reconnaissance. The tracked mobile mechanism has been widely applied for its advantages such as high stability, ease of control, low terrain pressure, and continuous drive. Stairs-climbing is a complicated process for a tracked mobile robot under kinematic and dynamic constraints. In this paper, the stairs-climbing process is divided into riser climbing, riser crossing, and nose line climbing. For each climbing phase, the robot's mobility is analyzed with respect to its kinematic and dynamic factors. The influence of track velocity and acceleration on riser climbing is analyzed, and a semiempirical design method for the track grouser and the module length is provided for riser crossing and nose line climbing, respectively. Finally, stairs-climbing experiments were performed on the two-module robot in line type, and on the three-module robot in line type and in triangle type, respectively. |
A Scalable Topic-Based Open Source Search Engine | Site-based or topic-specific search engines work with mixed success because of the general difficulty of the information retrieval task, and the lack of good link information to allow authorities to be identified. We are advocating an open source approach to the problem due to its scope and need for software components. We have adopted a topic-based search engine because it represents the next generation of capability. This paper outlines our scalable system for site-based or topic-specific search, and demonstrates the developing system on a small 250,000 document collection of EU and UN web pages. |
A double-blind, randomized, multicenter, Italian study of frovatriptan versus rizatriptan for the acute treatment of migraine | The objective of this study was to assess patient satisfaction with acute treatment of migraine with frovatriptan or rizatriptan by preference questionnaire. 148 subjects with a history of migraine with or without aura (IHS 2004 criteria), with at least one migraine attack per month in the preceding 6 months, were enrolled and randomized to frovatriptan 2.5 mg or rizatriptan 10 mg treating 1–3 attacks. The study had a multicenter, randomized, double-blind, cross-over design, with treatment periods lasting <3 months. At the end of the study, patients assigned preference to one of the treatments using a questionnaire with a score from 0 to 5 (primary endpoint). Secondary endpoints were pain-free and pain relief episodes at 2 h, and recurrent and sustained pain-free episodes within 48 h. 104 of the 125 patients (83%, intention-to-treat population) expressed a preference for a triptan. The average preference score was not significantly different between frovatriptan (2.9 ± 1.3) and rizatriptan (3.2 ± 1.1). The rates of pain-free (33% frovatriptan vs. 39% rizatriptan) and pain relief (55 vs. 62%) episodes at 2 h were not significantly different between the two treatments. The rate of recurrent episodes was significantly (p < 0.001) lower under frovatriptan (21 vs. 43% rizatriptan). No significant differences were observed in sustained pain-free episodes (26% frovatriptan vs. 22% rizatriptan). The number of patients with adverse events was not significantly different between rizatriptan (34) and frovatriptan (25, p = NS). The results suggest that frovatriptan has a similar efficacy to rizatriptan, but a more prolonged duration of action. |
A Data-Driven Failure Prognostics Method Based on Mixture of Gaussians Hidden Markov Models | This paper addresses a data-driven prognostics method for the estimation of the Remaining Useful Life (RUL) and the associated confidence value of bearings. The proposed method is based on the utilization of the Wavelet Packet Decomposition (WPD) technique, and the Mixture of Gaussians Hidden Markov Models (MoG-HMM). The method relies on two phases: an off-line phase, and an on-line phase. During the first phase, the raw data provided by the sensors are first processed to extract features in the form of WPD coefficients. The extracted features are then fed to dedicated learning algorithms to estimate the parameters of a corresponding MoG-HMM, which best fits the degradation phenomenon. The generated model is exploited during the second phase to continuously assess the current health state of the physical component, and to estimate its RUL value with the associated confidence. The developed method is tested on benchmark data taken from the “NASA prognostics data repository” related to several experiments of failures on bearings done under different operating conditions. Furthermore, the method is compared to traditional time-feature prognostics and simulation results are given at the end of the paper. The results of the developed prognostics method, particularly the estimation of the RUL, can help improving the availability, reliability, and security while reducing the maintenance costs. Indeed, the RUL and associated confidence value are relevant information which can be used to take appropriate maintenance and exploitation decisions. In practice, this information may help the maintainers to prepare the necessary material and human resources before the occurrence of a failure. Thus, the traditional maintenance policies involving corrective and preventive maintenance can be replaced by condition based maintenance. |
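A sketch of the off-line phase under stated assumptions: wavelet-packet energies are used as features for each vibration snapshot and a mixture-of-Gaussians HMM is fitted to the resulting sequence with standard libraries (PyWavelets and hmmlearn). The wavelet, decomposition level, number of health states, and mixture count are illustrative, and the on-line RUL and confidence computation is not reproduced here.

```python
# Off-line phase sketch: WPD energy features per snapshot, then a MoG-HMM fit.
import numpy as np
import pywt
from hmmlearn.hmm import GMMHMM

def wpd_energy_features(signal, wavelet="db4", level=3):
    """Energy of each terminal wavelet-packet node: one feature vector per snapshot."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.sum(np.square(node.data)) for node in nodes])

# One feature vector per periodic vibration snapshot of a run-to-failure history
# (random data here stands in for the benchmark bearing recordings).
snapshots = [np.random.randn(2048) for _ in range(200)]
X = np.vstack([wpd_energy_features(s) for s in snapshots])

# Three hidden health states, two Gaussian mixtures per state (illustrative sizes).
model = GMMHMM(n_components=3, n_mix=2, covariance_type="diag", n_iter=50)
model.fit(X)
states = model.predict(X)   # decoded health-state sequence for this history
```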
An Empirical Comparison of Alternative Models of the Short-Term Interest Rate | We estimate and compare a variety of continuous-time models of the short-term riskless rate using the Generalized Method of Moments. We find that the most successful models in capturing the dynamics of the short-term interest rate are those that allow the volatility of interest rate changes to be highly sensitive to the level of the riskless rate. A number of well-known models perform poorly in the comparisons because of their implicit restrictions on term structure volatility. We show that these results have important implications for the use of different term structure models in valuing interest rate contingent claims and in hedging interest rate risk. THE SHORT-TERM RISKLESS interest rate is one of the most fundamental and important prices determined in financial markets. More models have been put forward to explain its behavior than for any other issue in finance. Many of the more popular models currently used by academic researchers and practitioners have been developed in a continuous-time setting, which provides a rich framework for specifying the dynamic behavior of the short-term riskless rate. A partial listing of these interest rate models includes those by Merton (1973), Brennan and Schwartz (1977, 1979, 1980), Vasicek (1977), Dothan (1978), Cox, Ingersoll, and Ross (1980, 1985), Constantinides and Ingersoll (1984), Schaefer and Schwartz (1984), Sundaresan (1984), Feldman (1989), Longstaff (1989a), Hull and White (1990), Black and Karasinski (1991), and Longstaff and Schwartz (1992). Despite a bewildering array of models, relatively little is known about how these models compare in terms of their ability to capture the actual behavior of the short-term riskless rate. The primary reason for this has probably been the lack of a common framework in which different models could be nested and their performance benchmarked. Without a common framework, it is difficult to evaluate relative performance in a consistent way. (Because of this problem, empirical work in this area has tended to focus on specific models instead of comparisons across models; see, for example, Brennan and Schwartz (1982), Brown and Dybvig (1986), Gibbons and Ramaswamy (1986), Pearson and Sun (1989), and Barone, Cuoco, and Zautzik (1991).) The issue of how these models compare with each other is particularly important, however, since each model differs fundamentally in its implications for valuing contingent claims and hedging interest rate risk. This paper uses a simple econometric framework to compare the performance of a wide variety of well-known models in capturing the stochastic behavior of the short-term rate. 
Our approach exploits the fact that many term structure models, both single-factor and multifactor, imply dynamics for the short-term riskless rate r that can be nested within the following stochastic differential equation: dr = (α + βr) dt + σ r^γ dZ (1). These dynamics imply that the conditional mean and variance of changes in the short-term rate depend on the level of r. We estimate the parameters of this process in discrete time using the Generalized Method of Moments technique of Hansen (1982). As in Marsh and Rosenfeld (1983), we test the restrictions imposed by the alternative short-term interest rate models nested within equation (1). In addition, we compare the ability of each model to capture the volatility of the term structure. This property is of primary importance since the volatility of the riskless rate is a key variable governing the value of contingent claims such as interest rate options. In addition, optimal hedging strategies for risk-averse investors depend critically on the level of term structure volatility. The empirical analysis provides a number of important results. Using one-month Treasury bill yields, we find that the value of γ is the most important feature differentiating interest rate models. In particular, we show that models which allow γ ≥ 1 capture the dynamics of the short-term interest rate better than those which require γ < 1. This is because the volatility of the process is highly sensitive to the level of r; the unconstrained estimate of γ is 1.50. We also show that the models differ significantly in their ability to capture the volatility of the short-term interest rate. We find no evidence of a structural shift in the interest rate process in October 1979 for the models that allow γ ≥ 1. We show that these interest rate models differ significantly in their implications for valuing interest-rate-contingent securities. Using the estimated parameters for these models from the 1964 to 1989 sample period, we employ numerical procedures to value call options on long-term coupon bonds under different economic conditions. Our findings demonstrate that the range of possible call values varies significantly across the various models. The remainder of the paper is organized as follows. Section I describes the short-term interest rate models examined in the paper. Section II discusses the econometric approach. Section III describes the data. Section IV presents the empirical results from comparing the models. Section V contrasts the models' implications for valuing options on long-term bonds. Section VI summarizes the paper and makes concluding remarks. I. The Interest Rate Models. The stochastic differential equation given in (1) defines a broad class of interest rate processes which includes many well-known interest rate models. These models can be obtained from (1) by simply placing the appropriate restrictions on the four parameters α, β, σ, and γ. In this paper, we focus on eight different specifications of the dynamics of the short-term riskless rate that have appeared in the literature. These specifications are listed below and the corresponding parameter restrictions are summarized in Table I:
1. Merton: dr = α dt + σ dZ
2. Vasicek: dr = (α + βr) dt + σ dZ
3. CIR SR: dr = (α + βr) dt + σ r^(1/2) dZ
4. Dothan: dr = σ r dZ
5. GBM: dr = βr dt + σ r dZ
6. Brennan-Schwartz: dr = (α + βr) dt + σ r dZ
7. CIR VR: dr = σ r^(3/2) dZ
8. CEV: dr = βr dt + σ r^γ dZ
Model 1 is used in Merton (1973), footnote 34, to derive a model of discount bond prices. This stochastic process for the riskless rate is simply a Brownian motion with drift. Model 2 is the Ornstein-Uhlenbeck process used by Vasicek (1977) in deriving an equilibrium model of discount bond prices. This Gaussian process has been used extensively by others in valuing bond options, futures, futures options, and other types of contingent claims. Examples include Jamshidian (1989) and Gibson and Schwartz (1990). The Merton model can be nested within the Vasicek model by the parameter restriction β = 0. Both of these models imply that the conditional volatility of changes in the riskless rate is constant. Model 3 is the square root (SR) process which appears in the Cox, Ingersoll, and Ross (CIR) (1985) single-factor general-equilibrium term structure model. This model has also been used extensively in developing valuation models for interest-rate-sensitive contingent claims. Examples include the mortgage-backed security valuation model in Dunn and McConnell (1981), the discount bond option model in CIR (1985), the futures and futures option pricing models in Ramaswamy and Sundaresan (1986), and the swap pricing model in
Table I. Parameter Restrictions Imposed by Alternative Models of the Short-Term Interest Rate: alternative models of the short-term riskless rate of interest r can be nested with appropriate parameter restrictions within the unrestricted model |
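To make the estimation step concrete, the block below sketches the kind of discrete-time moment conditions a GMM estimation of the unrestricted process (1) typically uses; the specific moment vector (a one-period discretization whose conditional mean and variance depend on the level of r) is our reconstruction based on the description above, not a quotation from the paper.

```latex
% Hedged sketch of GMM moment conditions for the unrestricted process
% dr = (\alpha + \beta r)\,dt + \sigma r^{\gamma}\,dZ, discretized over one period.
\begin{gather*}
\varepsilon_{t+1} = r_{t+1} - r_t - \alpha - \beta r_t, \qquad
\mathbb{E}_t[\varepsilon_{t+1}] = 0, \qquad
\mathbb{E}_t[\varepsilon_{t+1}^{2}] = \sigma^{2} r_t^{2\gamma}, \\
f_t(\theta) = \bigl(\varepsilon_{t+1},\; \varepsilon_{t+1} r_t,\;
\varepsilon_{t+1}^{2} - \sigma^{2} r_t^{2\gamma},\;
(\varepsilon_{t+1}^{2} - \sigma^{2} r_t^{2\gamma})\, r_t\bigr)^{\top},
\qquad \mathbb{E}[f_t(\theta)] = 0 .
\end{gather*}
```

Each nested model would then be tested by imposing its restrictions on θ = (α, β, σ², γ), e.g. γ = 0 for Vasicek, γ = 1/2 for CIR SR, or α = β = 0 and γ = 3/2 for CIR VR.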
Two Structures for Compositionally Derived Events | This paper addresses the phenomenon of event composition: the derivation of a single event description expressed in one clause from two lexical heads which could have been used in the description of independent events, each expressed in a distinct clause. In English, this phenomenon is well attested with respect to sentences whose verb is found in combination with an XP describing a result not strictly lexically entailed by this verb, as in (1). |
Intrinsic parameter calibration procedure for a (high-distortion) fish-eye lens camera with distortion model and accuracy estimation | This paper presents a calibration procedure for a fish-eye lens (a high-distortion lens) mounted on a CCD TV camera. The method is designed to account for the differences between images acquired via a distortion-free lens camera setup and the images obtained by a fish-eye lens camera. The calibration procedure essentially defines a mapping between points in the world coordinate system and their corresponding point locations in the image plane. This step is important for applications in computer vision which involve quantitative measurements. The objective of this mapping is to estimate the internal parameters of the camera, including the effective focal length, one-pixel width on the image plane, image distortion center, and distortion coefficients. The number of parameters to be calibrated is reduced by using a calibration pattern with equally spaced dots and assuming pin-hole camera behavior at the image center, thus assuming negligible distortion at the image distortion center. Our method employs a non-linear transformation between points in the world coordinate system and their corresponding locations on the image plane. A Lagrangian minimization method is used to determine the coefficients of the transformation. The validity and effectiveness of our calibration and distortion correction procedure are confirmed by applying the procedure to real images. Keywords: camera calibration, lens distortion, intrinsic camera parameters, fish-eye lens, optimization. |
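As a rough illustration of the kind of mapping being estimated, the sketch below fits a simple pin-hole-plus-radial-distortion model to dot-grid correspondences with nonlinear least squares. The polynomial distortion model and the parameter set (f, cx, cy, k1, k2) are stand-in assumptions; the paper's actual formulation, including the Lagrangian minimization, is not reproduced here.

```python
# Illustrative sketch of intrinsic calibration from dot-grid correspondences.
# A polynomial radial-distortion model fitted by nonlinear least squares stands in
# for the paper's Lagrangian minimization; the parameterization is an assumption.
import numpy as np
from scipy.optimize import least_squares

def project(params, world_xy):
    """Pin-hole projection of planar grid points followed by radial distortion."""
    f, cx, cy, k1, k2 = params
    x, y = world_xy[:, 0], world_xy[:, 1]        # planar calibration target
    r2 = x**2 + y**2
    distort = 1.0 + k1 * r2 + k2 * r2**2         # radial distortion factor
    u = cx + f * x * distort
    v = cy + f * y * distort
    return np.column_stack([u, v])

def residuals(params, world_xy, image_uv):
    return (project(params, world_xy) - image_uv).ravel()

# world_xy: known dot positions on the target; image_uv: detected dot centers.
world_xy = np.mgrid[0:5, 0:5].reshape(2, -1).T.astype(float) * 0.05
image_uv = project([800.0, 320.0, 240.0, -0.25, 0.05], world_xy)  # synthetic data

fit = least_squares(residuals, x0=[700.0, 300.0, 220.0, 0.0, 0.0],
                    args=(world_xy, image_uv))
print("estimated (f, cx, cy, k1, k2):", fit.x)
```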
Table frame line detection in low-quality document images based on the Hough transform | Table detection, and table frame line detection in particular, is important in document image analysis and processing. Although great success has been achieved for high-quality images over the past decade, table detection in low-quality images remains a challenge. To address this problem, we propose a method to automatically detect table frame lines in low-quality document images. First, the Radon transform is used to detect and correct document skew. Second, the run-length smoothing algorithm (RLSA) is used to extract lines longer than a predefined threshold. Third, we locate table regions according to table features and detect the frame lines of the located tables using the Hough transform. Experimental results show that this method achieves better performance than the conventional method, even on low-quality document images. |
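A minimal sketch of the RLSA and Hough stages described above, assuming OpenCV; the thresholds (maximum run gap, Hough votes, minimum line length) are illustrative values rather than the paper's settings, and the Radon-based deskewing step is omitted.

```python
# Sketch of horizontal RLSA followed by probabilistic Hough line detection.
# Assumes OpenCV; all thresholds are illustrative, not the paper's values.
import cv2
import numpy as np

def horizontal_rlsa(binary, max_gap=20):
    """Run-length smoothing: fill horizontal background gaps shorter than max_gap."""
    smoothed = binary.copy()
    for row in smoothed:
        gap, start = 0, -1
        for j, v in enumerate(row):
            if v:                          # foreground pixel
                if 0 < gap <= max_gap and start >= 0:
                    row[start:j] = 1       # fill the short gap
                gap, start = 0, j + 1
            else:
                gap += 1
    return smoothed

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 1, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
smoothed = horizontal_rlsa(binary.astype(np.uint8), max_gap=25)

# Detect long straight segments (candidate table frame lines).
lines = cv2.HoughLinesP(smoothed * 255, rho=1, theta=np.pi / 180, threshold=200,
                        minLineLength=img.shape[1] // 3, maxLineGap=10)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    print("frame line candidate:", (x1, y1), "->", (x2, y2))
```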
Alternatives to the k-means algorithm that find better clusterings | We investigate here the behavior of the standard k-means clustering algorithm and several alternatives to it: the k-harmonic means algorithm due to Zhang and colleagues, fuzzy k-means, Gaussian expectation-maximization, and two new variants of k-harmonic means. Our aim is to find which aspects of these algorithms contribute to finding good clusterings, as opposed to converging to a low-quality local optimum. We describe each algorithm in a unified framework that introduces separate cluster membership and data weight functions. We then show that the algorithms do behave very differently from each other on simple low-dimensional synthetic datasets and image segmentation tasks, and that the k-harmonic means method is superior. Having a soft membership function is essential for finding high-quality clusterings, but having a non-constant data weight function is useful also. |
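The unified membership/data-weight framework mentioned in the abstract can be made concrete with a small numpy sketch of one k-harmonic means update. The functional forms (soft membership proportional to d^(-p-2), the non-constant per-point weight, and p = 3.5) follow the usual KHM presentation and should be treated as our assumption rather than a quotation of the paper.

```python
# Sketch of one k-harmonic means (KHM) center update in membership/weight form.
# The exact formulas and the exponent p are assumptions based on the standard
# KHM presentation, not a quotation of the paper.
import numpy as np

def khm_update(X, centers, p=3.5, eps=1e-8):
    """One KHM center update: returns new centers of shape (k, d)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps  # (n, k)

    # Soft membership of point i in cluster j, proportional to d_ij^(-p-2).
    m = d ** (-p - 2)
    m /= m.sum(axis=1, keepdims=True)

    # Non-constant data weight: points far from every center get more influence.
    w = (d ** (-p - 2)).sum(axis=1) / ((d ** (-p)).sum(axis=1) ** 2)

    coeff = m * w[:, None]                                  # (n, k)
    return (coeff.T @ X) / coeff.sum(axis=0)[:, None]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])
centers = X[rng.choice(len(X), 3, replace=False)]
for _ in range(50):
    centers = khm_update(X, centers)
print("KHM centers:\n", centers)
```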
Commensal bacteria at the interface of host metabolism and the immune system | The mammalian gastrointestinal tract, the site of digestion and nutrient absorption, harbors trillions of beneficial commensal microbes from all three domains of life. Commensal bacteria, in particular, are key participants in the digestion of food, and are responsible for the extraction and synthesis of nutrients and other metabolites that are essential for the maintenance of mammalian health. Many of these nutrients and metabolites derived from commensal bacteria have been implicated in the development, homeostasis and function of the immune system, suggesting that commensal bacteria may influence host immunity via nutrient- and metabolite-dependent mechanisms. Here we review the current knowledge of how commensal bacteria regulate the production and bioavailability of immunomodulatory, diet-dependent nutrients and metabolites and discuss how these commensal bacteria–derived products may regulate the development and function of the mammalian immune system. |
Randomized crossover comparison of proportional assist ventilation and patient-triggered ventilation in extremely low birth weight infants with evolving chronic lung disease. | BACKGROUND
Refinement of ventilatory techniques remains a challenge given the persistence of chronic lung disease of preterm infants.
METHODS
To test the hypothesis that proportional assist ventilation (PAV) allows lowering the ventilator pressure at equivalent fractions of inspiratory oxygen (FiO(2)) and arterial hemoglobin oxygen saturation in ventilator-dependent extremely low birth weight infants, in comparison with standard patient-triggered ventilation (PTV).
DESIGN
Randomized crossover design.
SETTING
Two level-3 university perinatal centers.
PATIENTS
22 infants (mean (SD): birth weight, 705 g (215); gestational age, 25.6 weeks (2.0); age at study, 22.9 days (15.6)).
INTERVENTIONS
One 4-hour period of PAV was applied on each of 2 consecutive days and compared with epochs of standard PTV.
RESULTS
Mean airway pressure was 5.64 (SD, 0.81) cm H(2)O during PAV and 6.59 (SD, 1.26) cm H(2)O during PTV (p < 0.0001), the mean peak inspiratory pressure was 10.3 (SD, 2.48) cm H(2)O and 15.1 (SD, 3.64) cm H(2)O (p < 0.001), respectively. The FiO(2) (0.34 (0.13) vs. 0.34 (0.14)) and pulse oximetry readings were not significantly different. The incidence of arterial oxygen desaturations was not different (3.48 (3.2) vs. 3.34 (3.0) episodes/h) but desaturations lasted longer during PAV (2.60 (2.8) vs. 1.85 (2.2) min of desaturation/h, p = 0.049). PaCO(2) measured transcutaneously in a subgroup of 12 infants was similar. One infant met prespecified PAV failure criteria. No adverse events occurred during the 164 cumulative hours of PAV application.
CONCLUSIONS
PAV safely maintains gas exchange at lower mean airway pressures compared with PTV without adverse effects in this population. Backup conventional ventilation breaths must be provided to prevent apnea-related desaturations. |
Learning and Enjoyment in Serious Gaming - Contradiction or Complement? | Research has mainly neglected to examine if the possible antagonism of play/games and seriousness affects the educational potential of serious gaming. This article follows a microsociological approach and treats play and seriousness as different social frames, with each being indicated by significant symbols and containing unique social rules, adequate behavior and typical consequences of action. It is assumed that due to the specific qualities of these frames, serious frames are perceived as more credible but less entertaining than playful frames – regardless of subject matter. Two empirical studies were conducted to test these hypotheses. Results partially confirm expectations, but effects are not as strong as assumed and sometimes seem to be moderated by further variables, such as gender and attitudes. Overall, this article demonstrates that the educational potential of serious gaming depends not only on media design, but also on social context and personal variables. |
Side-channel power analysis of XTS-AES | XTS-AES is an advanced mode of AES for data protection on sector-based devices. Compared to other AES modes, it features two secret keys instead of one, and an additional tweak for each data block. These characteristics make the mode not only resistant against cryptanalysis attacks, but also more challenging for side-channel attacks. In this paper, we propose two attack methods on XTS-AES that overcome these challenges. In the first attack, we analyze the side-channel leakage of the particular modular multiplication in XTS-AES mode. In the second, we utilize the relationship between two consecutive block tweaks and propose a method to work around the masking of the ciphertext by the tweak. These attacks are verified on an FPGA implementation of XTS-AES. The results show that XTS-AES is susceptible to side-channel power analysis attacks, and therefore dedicated protections are required for the security of XTS-AES in storage devices. |
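Because the second attack exploits the relationship between consecutive block tweaks, a short sketch of the standard XTS tweak update may help: each successive tweak is the previous one multiplied by the primitive element in GF(2^128), i.e. the modular multiplication referred to above. The byte order and reduction constant follow IEEE 1619; the paper's implementation details are not quoted here.

```python
# Sketch of the per-block tweak update in XTS mode: the tweak for block j+1 is the
# tweak for block j multiplied by alpha (the element x) in GF(2^128), reduced by
# x^128 + x^7 + x^2 + x + 1, with little-endian byte order as in IEEE 1619.
def xts_next_tweak(tweak: bytes) -> bytes:
    assert len(tweak) == 16
    t = bytearray(tweak)
    carry = 0
    for i in range(16):
        next_carry = (t[i] >> 7) & 1        # bit shifted out of this byte
        t[i] = ((t[i] << 1) & 0xFF) | carry
        carry = next_carry
    if carry:                                # reduce modulo the XTS polynomial
        t[0] ^= 0x87
    return bytes(t)

t0 = bytes.fromhex("00112233445566778899aabbccddeeff")
t1 = xts_next_tweak(t0)
print(t1.hex())  # tweak for the next consecutive block
```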
Evidence-based treatments for PTSD, new directions, and special challenges. | This paper provides a current review of existing evidence-based treatments for posttraumatic stress disorder (PTSD), with a description of psychopharmacologic options, prolonged exposure therapy, cognitive processing therapy, and eye movement desensitization and reprocessing, especially as they pertain to military populations. It further offers a brief summary of promising treatments with a developing evidence base, encompassing both psychotherapy and pharmacotherapy. Finally, challenges to the treatment of PTSD are summarized and future directions suggested. |
CIPA: A collaborative intrusion prevention architecture for programmable networks and SDN | Coordinated intrusions, such as DDoS attacks, worm outbreaks, and botnets, are a major threat to network security today and will continue to be so in the future. To ensure Internet security, effective detection and mitigation of such attacks are indispensable. In this paper, we propose a novel collaborative intrusion prevention architecture, CIPA, aimed at confronting such coordinated intrusion behavior. CIPA is deployed as a virtual artificial neural network overlaid on the network substrate. Taking advantage of the parallel and simple mathematical operations of neurons in a neural net, CIPA disperses its lightweight computation across the programmable switches of the substrate. Each programmable switch virtualizes one to several neurons, and the whole neural net functions as an integrated IDS/IPS. This allows CIPA to detect distributed attacks with a global view, while requiring little communication and computation overhead. It is scalable and robust. To validate CIPA, we have implemented a prototype on Software-Defined Networks and conducted simulations and experiments. The results demonstrate that CIPA is effective. |
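A toy sketch of the dispersal idea: each programmable switch evaluates its neuron over local traffic features and forwards the activation, and a downstream neuron fuses these into a global decision. The features, weights, topology, and threshold below are purely illustrative assumptions, not CIPA's actual design.

```python
# Toy sketch of distributing neurons over switches: each switch computes its
# neuron's activation from local traffic features; a final neuron fuses them.
# All features, weights, topology, and thresholds are illustrative assumptions.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

class SwitchNeuron:
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias

    def activate(self, features):
        s = sum(w * f for w, f in zip(self.weights, features)) + self.bias
        return sigmoid(s)

# Assumed local features per switch: (packets/s, SYN ratio, distinct sources).
edge_switches = [SwitchNeuron([0.002, 3.0, 0.01], -4.0) for _ in range(3)]
aggregator = SwitchNeuron([2.0, 2.0, 2.0], -3.0)   # fuses edge activations

local_obs = [(1500, 0.9, 400), (200, 0.1, 30), (1800, 0.85, 520)]
activations = [n.activate(f) for n, f in zip(edge_switches, local_obs)]
score = aggregator.activate(activations)
print("global anomaly score:", round(score, 3), "alert:", score > 0.5)
```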
Exposure to Pornography and Acceptance of Rape Myths: A Research Summary Using Meta-Analysis. | A study quantitatively summarized the literature examining the association between acceptance of rape myths and exposure to pornography to address disputes in the academic community regarding the consistency of such research. The entire collection of "Psychological Abstracts" and "Sociological Abstracts" was manually searched for articles relevant to pornography. A total of 22 studies (with 3,434 subjects): (1) used a stimulus that met the definition of pornography; (2) involved the use of some type of rape myth acceptance measure; and (3) reported sufficient statistical information to permit an estimate of the association between exposure to pornography and acceptance of the rape myth. Results indicated that survey methodology shows almost no effect (exposure to pornography does not increase rape myth acceptance), while experimental studies show a positive effect (exposure to pornography does increase rape myth acceptance). While the experimental studies demonstrate that violent pornography has more effect than nonviolent pornography, nonviolent pornography still generates a positive effect. Overall, while the results are mixed, there exists an association, for those studies using an experimental methodology, between exposure to sexually arousing material and acceptance of the rape myth. (Two tables of data are included. Contains 90 references.) |
What Do Small Businesses Do | We show that most small business owners are very different from the entrepreneurs that economic models and policymakers often have in mind. Using new data that sample entrepreneurs just before they start their businesses, we show that few small businesses intend to bring a new idea to market or to enter an unserved market. Instead, most intend to provide an existing service to an existing market. Further, we find that most small businesses have little desire to grow big or to innovate in any observable way. We show that such behavior is consistent with the industry characteristics of the majority of small businesses, which are concentrated among skilled craftspeople, lawyers, real estate agents, health care providers, small shopkeepers, and restaurateurs. Lastly, we show that nonpecuniary benefits (being one’s own boss, having flexibility of hours, and the like) play a first-order role in the business formation decision. Our findings suggest that the importance of entrepreneurial talent, entrepreneurial luck, and financial frictions in explaining the firm size distribution may be overstated. We conclude by discussing the potential policy implications of our findings. Economists and policymakers alike have long been interested in the effects of various economic policies on business ownership. In fact, the U.S. Small Business Administration is a federal agency whose main purpose, according to its mission statement, is to help Americans “start, build, and grow businesses.” Researchers and policymakers often either explicitly or implicitly equate small business owners with entrepreneurs. Although this association could be tautological, we show in this paper that the typical small business owner is very different from the entrepreneur that economic models and policymakers have in mind. For example, economic theory usually considers entrepreneurs as individuals who innovate and render aging technologies obsolete (Schumpeter 1942), take economic risks (Knight 1921, Kihlstrom and Laffont 1979, and Kanbur 1979), or are jacks-of-all-trades in the sense of having a broad skill set (Lazear 2005). Policymakers often consider entrepreneurs to be job creators or the engines of economic growth. In this paper we shed light on what the vast majority of small businesses actually do and, further, what they report ex ante wanting to do. Section I highlights the industrial breakdown of small businesses within the United States. By “small businesses” we primarily mean firms with between 1 and 19 employees; firms in this size range employ roughly 20 percent of the private sector workforce. However, we also define alternative classifications, such as firms with between 1 and 100 employees. We show that over two-thirds of all small businesses by our primary definition are confined to just 40 narrow industries, most of which provide a relatively standardized good or service to an existing customer base. These industries primarily include skilled craftspeople (such as plumbers, electricians, contractors, and painters), skilled professionals (such as lawyers, accountants, and architects), insurance and real estate agents, physicians, dentists, mechanics, beauticians, restaurateurs, and small shopkeepers (for example, gas station and grocery store owners). We also show that although firms within these industries are heterogeneous in size, these industries account for a disproportionate share of all small businesses.
This composition of small businesses foreshadows our empirical results. In section II we study job creation and innovation at small firms, both established and new. First, using a variety of data sets, we show that most surviving small businesses do not grow by any significant margin. Rather, most start small and stay small throughout their entire life cycle. Also, most surviving small firms do not innovate along any observable margin. Very few report spending resources on research and development, getting a patent, or even obtaining copyright or trademark protection for something related to the business, including the company’s name. Furthermore, we show that between one-third and half of all new businesses report providing an existing good or service to an existing market. This is not surprising when one thinks of the most common types of small business. A new plumber or a new lawyer who opens up a practice often does so in an area where plumbers and lawyers already operate. (Footnote 1: Haltiwanger, Jarmin, and Miranda (2010) show that, when one controls for firm age, there is no systematic relationship between firm size and growth. They conclude that those small firms that tend to grow fast (relative to large firms) are newly established firms. We discuss in later sections how our results add to these findings. In particular, we show that most surviving new firms also do not grow in any meaningful way.) Most existing research attributes differences across firms with respect to ex post performance to either differences in financing constraints (for example, Evans and Jovanovic 1989, Clementi and Hopenhayn 2006), differences in ex post productivity draws across firms (for example, Simon and Bonini 1958, Jovanovic 1982, Pakes and Ericson 1989, Hopenhayn 1992), or differences in the owners’ entrepreneurial ability (for example, Lucas 1978). In section III we use new data on the expectations of nascent small business owners to show that these stories are incomplete. When asked at the time of their business formation, most business owners report having no desire to grow big and no desire to innovate along observable dimensions. In other words, when starting their business, the typical plumber or lawyer expects the business to remain small well into the foreseeable future and does not expect to innovate by developing a new product or service or even to enter new markets with an existing product or service. If most small businesses do not want to grow and do not want to innovate, why do they start? We address this question in section IV. The same new data set that we used to explore the expectations of nascent business owners also specifically asks about motives. Over 50 percent of these new business owners cite nonpecuniary benefits (for example, “wanting flexibility over schedule” or “to be one’s own boss”) as a primary reason for starting the business. By comparison, only 34 percent report that they are starting the business to generate income, and only 41 percent indicate that they are starting a business because they want to create a new product or because they have a good business idea. (Respondents could give up to two answers.) Exploiting the panel nature of the data, we show that those small businesses that started for other than innovative reasons were less likely to grow in the ensuing years, less likely to report wanting to grow, less likely to innovate, and less likely to report wanting to innovate.
Collectively, these results suggest that the first-order reasons why most small businesses form are not the innovation or growth motives embedded in most theories of entrepreneurship. Rather, the nonpecuniary benefits of small business ownership may be an important driver of why firms start and remain small. Additionally, some industries (such as insurance agencies) may have a natural scale of production at the establishment level that is quite low. In section V we discuss how our results challenge much of the existing work on entrepreneurship and small-firm dynamics. We highlight how our findings suggest that the importance of entrepreneurial talent, entrepreneurial luck, and financial frictions in explaining the firm size distribution may be overstated. In section VI we discuss the policy implications of our results. Section VII concludes. More research into the diversity of motives and expectations among small businesses has been done in developing economies than in developed economies. Recent work by Rafael La Porta and Andrei Shleifer (2008) and a review of the literature by Abhijit Banerjee and Esther Duflo (2011) show that most small businesses in developing economies do not grow or innovate in any observable way. We discuss in section V how the qualitatively similar outcomes we observe in the United States are driven by different forces than in developing economies. Overall, our results reveal substantial skewness among small businesses within the United States, in terms of both actual and expected growth and innovative behavior. Although growth and innovation are the usual cornerstones of entrepreneurial models and the usual justifications for policy interventions to support small business, most small businesses do not want to grow or innovate. Our results suggest that it is often inappropriate for researchers to use the universe of small business (or self-employment) data to test standard theories of entrepreneurship. More specialized data sets, such as those that track small businesses seeking venture capital funding, may be more suitable for this task, because these firms have been shown to be more likely to actually grow or to innovate than other small businesses. For their part, policymakers who want to promote growth and innovation may want to consider more targeted policies than those that address the universe of small businesses. I. Industrial Composition of Small Businesses. This section intends to show that most small businesses are concentrated in a small number of narrowly defined industries (industries at the four-digit level of the North American Industry Classification System, or NAICS) that mostly provide standard services to local customers. This context is important when interpreting our findings that the majority of small businesses do not intend to grow or innovate in any substantive way. (Footnote 2: Two notable exceptions include Bhidé (2000) and Ardagna and Lusardi (2008). Bhidé (20 |
Exploring the Potential Effects of Organic Production on Contracting in American Agribusiness | Organic production, while still a niche market in U.S. agriculture, is growing at a rapid rate. This paper argues that organic producers, particularly those seeking certification to sell at the retail level, share many characteristics with conventional producers who opt for contracting over independence. These include yield risk, search and transaction costs, and technological changes. Depending on the rate at which federal assistance programs grow and evolve to serve organic producers, contracting may become a popular choice within the organic sector. In turn, contracting may come to cover a significantly larger share of agricultural production as the organic sector continues to grow. |
The antiresorptive effects of a single dose of zoledronate persist for two years: a randomized, placebo-controlled trial in osteopenic postmenopausal women. | CONTEXT
Annual iv administration of 5 mg zoledronate decreases fracture risk. The optimal dosing interval of 5 mg zoledronate is not known.
OBJECTIVE
Our objective was to determine the duration of antiresorptive action of a single 5-mg dose of iv zoledronate.
DESIGN, SETTING, AND PARTICIPANTS
We conducted a double-blind, randomized, placebo-controlled trial over 2 yr at an academic research center, in a volunteer sample of 50 postmenopausal women with osteopenia.
INTERVENTION
Intervention included 5 mg zoledronate.
MAIN OUTCOME MEASURES
Biochemical markers of bone turnover and bone mineral density of the lumbar spine, proximal femur, and total body.
RESULTS
Compared with placebo, zoledronate treatment decreased mean levels of each of four markers of bone turnover by at least 38% (range 38-45%) for the duration of the study (P < 0.0001 for each marker). After 2 yr, bone mineral density was higher in the zoledronate group than the placebo group by an average of 5.7% (95% confidence interval = 4.0-7.4) at the lumbar spine, 3.9% (2.2-5.7) at the proximal femur, and 1.7% (0.8-2.5) at the total body (P < 0.0001 for each skeletal site). Between-groups differences in markers of bone turnover and bone mineral density were similar at 12 and 24 months. Mild secondary hyperparathyroidism was present throughout the study in the zoledronate group.
CONCLUSION
The antiresorptive effects of a single 5-mg dose of zoledronate are sustained for at least 2 yr. The magnitudes of the effects on markers of bone turnover and bone mineral density are comparable at 12 and 24 months. Administration of zoledronate at intervals of up to 2 yr may be associated with antifracture efficacy; clinical trials to investigate this possibility are justified. |
Invariant Manifold Approach for the Stabilization of Nonholonomic Chained Systems: Application to a Mobile Robot | In this paper, it is shown that a class of n-dimensional nonholonomic chained systems can be stabilized using the invariant manifold approach. First, we derive an invariant manifold for this class of systems and show that, once on it, all the closed-loop trajectories tend to the origin under a linear, smooth, time-invariant state feedback. Thereafter, it is shown that this manifold can be made attractive by means of a discontinuous time-invariant state feedback. Finally, a mobile robot is taken as an example demonstrating the effectiveness of our approach. |
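For reference, the block below writes out the standard two-input, single-chain form that "n-dimensional nonholonomic chained systems" usually denotes; this is our assumption about the intended class (the paper's exact formulation is not quoted), and for n = 3 it covers the kinematic model of a unicycle-type mobile robot after a change of coordinates, with u1 and u2 playing the role of forward and steering velocities.

```latex
% Standard two-input, single-chain form (an assumption here, not a quotation):
\begin{align*}
\dot{x}_1 &= u_1, \\
\dot{x}_2 &= u_2, \\
\dot{x}_3 &= x_2\,u_1, \\
&\;\;\vdots \\
\dot{x}_n &= x_{n-1}\,u_1 .
\end{align*}
```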
Biosorption and me. | Biosorption has been defined as the property of certain biomolecules (or types of biomass) to bind and concentrate selected ions or other molecules from aqueous solutions. As opposed to a much more complex phenomenon of bioaccumulation based on active metabolic transport, biosorption by dead biomass (or by some molecules and/or their active groups) is passive and based mainly on the "affinity" between the (bio-)sorbent and sorbate. A personal overview of the field and its origins is given here, focusing on R&D reasoning and know-how that is not normally published in the scientific literature. While biosorption of heavy metals has become a popular environmentally driven research topic, it represents only one particular type of a concentration-removal aspect of the sorption process. The methodology of studying biosorption is based on an interdisciplinary approach to it, whereby the phenomenon can be studied, examined and analyzed from different angles and perspectives-by chemists, (micro-)biologists as well as (process) engineers. A pragmatic science approach directs us towards the ultimate application of the phenomenon when reasonably well understood. Considering the variety of parameters affecting the biosorption performance, we have to avoid the endless empirical and, indeed, alchemistic approach to elucidating and optimizing the phenomenon-and this is where the power of computers becomes most useful. This is all still in the domain of science-or "directed curiosity". When the knowledge of biosorption is adequate, it is time to use it-applications of certain types of biosorption are on the horizon, inviting the "new technology" enterprise ventures and presenting new and quite different challenges. |
SECURITY ENHANCEMENT OF ADVANCED ENCRYPTION STANDARD (AES) USING TIME-BASED DYNAMIC KEY GENERATION | Login is the first step performed every time we access a system, and access is granted only to those who are entitled to it. Login is therefore an important part of a system's security. Registered users (or guests) and administrators are the two kinds of users with privileges and access. Wrongdoers frequently try to gain access to the system. To provide better security, it is important to strengthen the access mechanism and to evaluate the authentication process. Message-Digest 5 (MD5) is one of the algorithms commonly used in login systems. Despite its popularity, the algorithm is still vulnerable to dictionary attacks and rainbow tables. As an alternative to hash functions, the Advanced Encryption Standard (AES) algorithm can be a good choice for the login authentication process. This study develops dynamic key generation for the AES algorithm using a function of time. The experimental results show that the AES key can be generated at random based on the time at which a user logs in, with a particular active period. In this implementation, authentication becomes stronger because the key changes, so the generated ciphertext changes for each encryption process. Using time as the benchmark, the results show that the AES encryption-decryption process is relatively fast, with an average time of about 0.0023 s. |
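A minimal sketch of what time-based dynamic key generation could look like: derive an AES key from a shared secret and the login time, quantized to an active-period window. The use of HMAC-SHA-256 as the derivation function and the 5-minute window are illustrative assumptions, not the paper's exact scheme.

```python
# Illustrative sketch of time-based dynamic AES key generation: derive a 128-bit
# key from a shared secret and the login time, quantized to an "active period".
# HMAC-SHA-256 and the 5-minute window are assumptions, not the paper's scheme.
import hmac
import hashlib
import time

ACTIVE_PERIOD_SECONDS = 300          # assumed validity window of a derived key

def derive_aes_key(shared_secret: bytes, login_time: float) -> bytes:
    window = int(login_time) // ACTIVE_PERIOD_SECONDS
    mac = hmac.new(shared_secret, str(window).encode(), hashlib.sha256)
    return mac.digest()[:16]          # 16 bytes -> AES-128 key

secret = b"server-side shared secret"
key_now = derive_aes_key(secret, time.time())
key_later = derive_aes_key(secret, time.time() + ACTIVE_PERIOD_SECONDS)
print(key_now.hex())
print(key_now != key_later)           # keys rotate between active periods
```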
Insulin Glargine/Lixisenatide: A Review in Type 2 Diabetes | Subcutaneous insulin glargine/lixisenatide (Suliqua™) is a titratable, fixed-ratio combination of a long-acting basal insulin analogue and a glucagon-like peptide-1 (GLP-1) receptor agonist for the treatment of adult patients with inadequately controlled type 2 diabetes. Once-daily insulin glargine/lixisenatide, in combination with metformin, provided effective glycaemic control and was generally well tolerated in the 30-week, multinational, phase 3 LixiLan-O and LixiLan-L trials in insulin-naive and -experienced adult patients with inadequately controlled type 2 diabetes. Although long-term clinical experience with this fixed-ratio combination is currently lacking, given its convenient once-daily regimen and beneficial effects on glycaemic control and bodyweight loss in the absence of an increase in the incidence of hypoglycaemia, insulin glargine/lixisenatide is an emerging option for the treatment of adult patients with type 2 diabetes to improve glycaemic control when this has not been provided by metformin alone or metformin combined with another OAD or basal insulin. |
Abiotic and Biotic Stress Responses in Solanum tuberosum Group Phureja DM1-3 516 R44 as Measured through Whole Transcriptome Sequencing | This study conducted an in-depth analysis of the potato (Solanum tuberosum L.) transcriptome in response to abiotic (salinity, drought, and heat) and biotic (Phytophthora infestans, DL-β-amino-n-butyric acid, and acibenzolar-s-methyl) stresses and plant hormone treatment (abscisic acid, 6-benzylaminopurine, gibberellic acid, and indole-3-acetic acid) using ribonucleic acid sequencing (RNA-seq) of the doubled monoploid S. tuberosum Group Phureja DM1-3 516 R44 clone. Extensive changes in gene expression were observed, with 37% of the expressed genes being differentially expressed in at least one comparison of stress to control tissue. Stress-inducible genes known to function in stress tolerance or to be involved in the regulation of gene expression were among the most highly differentially expressed. Members of the MYB, APETALA2 (AP2)/ethylene-responsive element binding factor (ERF), and NAM, ATAF1/2, and CUC2 (NAC) transcription factor families were the most represented regulatory genes. A weighted gene co-expression network analysis yielded 37 co-expression modules containing 9198 genes. Fifty percent of the genes within these co-expression modules were specific to a stress condition, indicating condition-specific responses. Cross-species comparison between potato and Arabidopsis thaliana (L.) Heynh. uncovered differentially expressed orthologs and defined evolutionarily conserved genes. Collectively, the transcriptional profiling of RNA-seq data presented here provides a valuable reference for potato stress responses to environmental factors that is supported by statistically significant differences in expression changes, highly interconnected genes in co-expression networks, and evidence of evolutionary conservation. Plants growing in natural habitats are exposed to multiple environmental stresses resulting from abiotic and biotic factors. Adaptation and response to these stresses are highly complex and involve changes at the molecular, cellular, and physiological levels. Abiotic stress factors such as heat, drought, and salinity have a significant impact on cultivated potato, affecting yield, tuber quality, and market value (Wang-Pruski and Schofield, 2012). Water availability is by far the most important limiting factor in crop production. Potato plants use water relatively efficiently; however, due in part to its sparse and shallow root system, potato is considered a drought-sensitive crop (Yuan et al., 2003). Salinity is another serious threat to crop production that may increase under anticipated climate change scenarios (Yeo, 1998). Although potatoes are considered moderately salt sensitive (Katerji et al., 2000), significant variations in salt tolerance exist among Solanum tuberosum cultivars and wild Solanum species (Khrais et al., 1998). Furthermore, potato responses to salt stress are related to the responses to other abiotic stresses, as revealed by microarray analysis of leaf responses to cold and salt.
Remarks on the Upper Paleozoic strata in the Menglian—Manxin area of the southern part of the Changning—Menglian Belt, western Yunnan | The opinion/model that the Changning—Menglian Belt is the remnant of the main oceanic basin of Paleotethys has given rise to much controversy. Since many of the data used to support this model are from the Menglian—Manxin area, the geological problems of this area have drawn much attention. According to the author's long-term field observations and study, many of these data do not support the large-ocean model, and there are several confusions and mistakes in the stratigraphic systems on which the model was constructed. The preliminary conclusions of the author are: the Late Paleozoic sediments of this area are not oceanic sediments; there are obvious sedimentary hiatuses in the succession; and the stratigraphic system needs to be revised. |
Cyberbullying: Labels, Behaviours and Definition in Three European Countries | This study aims to examine students’ perception of the term used to label cyberbullying, the perception of different forms and behaviours (written, verbal, visual, exclusion and impersonation) and the perception of the criteria used for its definition (imbalance of power, intention, repetition, anonymity and publicity) in three different European countries: Italy, Spain and Germany. Seventy adolescents took part in nine focus groups, using the same interview guide across countries. Thematic analysis focused on three main themes related to: (1) the term used to label cyberbullying, (2) the different behaviours representing cyberbullying, (3) the three traditional criteria of intentionality, imbalance of power and repetition and the two new criteria of anonymity and publicity. Results showed that the best word to label cyberbullying is ‘cybermobbing’ (in Germany), ‘virtual’ or ‘cyber-bullying’ (in Italy), and ‘harassment’ or ‘harassment via Internet or mobile phone’ (in Spain). Impersonation cannot be considered wholly as cyberbullying behaviour. In order to define a cyberbullying act, adolescents need to know whether the action was done intentionally to harm the victim, the effect on the victim and the repetition of the action (this latter criterion evaluated simultaneously with the publicity). Information about the anonymity and publicity contributes to a better understanding of the nature and the severity of the act, the potential effects on the victim and the intentionality. |