Correlates of cigarette smoking during pregnancy and its genetic and environmental overlap with nicotine dependence.
Cigarette smoking during pregnancy (CSDP) is associated with a number of negative outcomes in the offspring. Therefore, clarifying the correlates of CSDP and the extent to which CSDP is associated with nicotine dependence is an important step toward reducing its rate in the general population. Using data from 1,134 adult Australian female monozygotic and dizygotic twin pairs, we explored the associations between CSDP and sociodemographic and psychiatric correlates and between CSDP and patterns of cigarette smoking. Further, we examined the role of heritable and environmental influences on CSDP and investigated whether these latent risk factors are shared with a predisposition to nicotine dependence. Women smoking during an entire pregnancy reported heavier dependence and more unsuccessful quit attempts, compared with the community sample of mothers and with women who smoked during only part of a pregnancy. Educational attainment, weekly church attendance, spousal current smoking, and nicotine dependence also were associated with CSDP. Heritable influences explained 34% of the variation in CSDP, with the remainder related to nonshared environmental factors. A large proportion of the genetic influences on CSDP were shared with DSM-III-R nicotine dependence, with little overlap across the nonshared environmental influences. A lifetime history of difficulty with smoking cessation, in conjunction with social background and psychiatric comorbidity, especially during pregnancy, needs to be considered by treatment providers when counseling expectant mothers about the potential risks of CSDP.
Dissolved Organic Carbon in Terrestrial Ecosystems: Synthesis and a Model
The movement of dissolved organic carbon (DOC) through soils is an important process for the transport of carbon within ecosystems and the formation of soil organic matter. In some cases, DOC fluxes may also contribute to the carbon balance of terrestrial ecosystems; in most ecosystems, they are an important source of energy, carbon, and nutrient transfers from terrestrial to aquatic ecosystems. Despite their importance for terrestrial and aquatic biogeochemistry, these fluxes are rarely represented in conceptual or numerical models of terrestrial biogeochemistry. In part, this is due to the lack of a comprehensive understanding of the suite of processes that control DOC dynamics in soils. In this article, we synthesize information on the geochemical and biological factors that control DOC fluxes through soils. We focus on conceptual issues and quantitative evaluations of key process rates to present a general numerical model of DOC dynamics. We then test the sensitivity of the model to variation in the controlling parameters to highlight both the significance of DOC fluxes to terrestrial carbon processes and the key uncertainties that require additional experiments and data. Simulation model results indicate the importance of representing both root carbon inputs and soluble carbon fluxes to predict the quantity and distribution of soil carbon in soil layers. For a test case in a temperate forest, DOC contributed 25% of the total soil profile carbon, whereas roots provided the remainder. The analysis also shows that physical factors—most notably, sorption dynamics and hydrology—play the dominant role in regulating DOC losses from terrestrial ecosystems but that interactions between hydrology and microbial–DOC relationships are important in regulating the fluxes of DOC in the litter and surface soil horizons. The model also indicates that DOC fluxes to deeper soil layers can support a large fraction (up to 30%) of microbial activity below 40 cm.
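The sorption and hydrology controls described above can be caricatured in a few lines. This is a hypothetical one-pool sketch, not the paper's numerical model: the parameter names, the linear partitioning rule, and the leaching term are all illustrative assumptions.

```python
# Hypothetical one-pool DOC sketch (illustrative only; not the paper's model).

def doc_leaching(litter_input, sorption_k, water_flux, soil_water, years):
    """Partition an annual soluble carbon input between a sorbed pool and a
    dissolved fraction, then export part of the dissolved fraction with
    percolating water."""
    sorbed = exported = 0.0
    for _ in range(years):
        dissolved = litter_input / (1.0 + sorption_k)  # linear sorption partitioning
        sorbed += litter_input - dissolved             # retained as soil organic matter
        exported += dissolved * min(1.0, water_flux / soil_water)  # hydrologic export
    return sorbed, exported
```

With stronger sorption (larger `sorption_k`) or less through-flow, export falls and retention rises, mirroring the model's sensitivity to sorption dynamics and hydrology.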
Deep generative-contrastive networks for facial expression recognition
As the expressive depth of an emotional face differs across individuals and expressions, recognizing an expression from a single facial image at a single moment is difficult. A relative expression of a query face compared with a reference face might alleviate this difficulty. In this paper, we propose to utilize a contrastive representation that embeds a distinctive expressive factor for a discriminative purpose. The contrastive representation is calculated at the embedding layer of deep networks by comparing a given (query) image with a reference image. We utilize a generative reference image that is estimated from the given image. Consequently, we deploy deep neural networks that combine a generative model, a contrastive model, and a discriminative model, trained in an end-to-end manner. In our proposed networks, we disentangle the facial expressive factor in two steps: learning a generator network and a contrastive encoder network. We conducted extensive experiments on publicly available facial expression databases (CK+, MMI, Oulu-CASIA, and in-the-wild databases) that have been widely adopted in the recent literature. The proposed method outperforms the known state-of-the-art methods in terms of recognition accuracy.
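The core idea of comparing a query embedding against a reference embedding can be sketched as follows. This is a toy illustration, not the paper's network: `embed` is a stand-in one-layer encoder, and the reference image is supplied directly rather than produced by a trained generator.

```python
import numpy as np

def embed(x, W):
    """Toy one-layer encoder standing in for the deep embedding network."""
    return np.tanh(W @ x)

def contrastive_representation(query, reference, W):
    """Expressive factor as the difference of query and reference embeddings."""
    return embed(query, W) - embed(reference, W)
```

An identical query and reference yield a zero contrastive representation, i.e., no distinctive expressive factor remains.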
Cost-effectiveness of screening for asymptomatic left ventricular dysfunction in childhood cancer survivors.
It has long been recognized that anthracycline chemotherapy can cause early- and late-onset congestive heart failure (CHF) and cardiomyopathy, which may progress to cardiac-specific death (1). Thus, children treated with anthracycline are monitored during and periodically after therapy. The Children's Oncology Group (COG) developed the Long-Term Follow-Up Guidelines for Survivors of Childhood, Adolescent, and Young Adult Cancers (COG-LTFU) in 2003 (2). In brief, periodic lifetime screening is recommended, based on age at treatment, cumulative anthracycline dose, and whether the heart was irradiated. These recommendations have been widely adopted. In this issue, 2 groups separately examined these recommendations and report on the cost-effectiveness (CE) of screening to detect asymptomatic left ventricular dysfunction (ALVD) and to reduce CHF risks in childhood cancer survivors who were exposed to anthracycline chemotherapy (3, 4). Cost-effectiveness analysis is a method for evaluating health outcomes and resource costs for health interventions. A base case is established that incorporates data and methods that best represent the interventions and choices under consideration. Its central function is to show the relative value of alternative interventions for improving health. In this case, the base case compares no screening for ALVD in childhood cancer survivors with the recommendations of the COG-LTFU guidelines, then varies the screening interval and other variables to determine the CE of different strategies (5). Such analysis should be used as a supplement to other considerations in health care delivery, including equity, benefit to many, and community compassion. The perspective used in the CE analysis is of overarching importance because it reflects the type of decisions the analysis is intended to inform. Both studies used a societal perspective, which represents the public interest rather than that of any single group or person.
Analyses are intended to inform decisions at the level of broad resource allocation and may provide little guidance about optimal management of persons (5). Reading these studies side by side advances our understanding of this population and illustrates the importance and complexity of an effective screening strategy. It also serves as an example of how 2 experienced groups can apply different assumptions and variables in their respective models and still arrive at broadly similar conclusions: Although application of the COG-LTFU recommendations can reduce the CHF risk, there may be more cost-effective strategies. Wong and colleagues (3) evaluated the 12 risk profiles from the COG-LTFU guidelines, which are used to determine the interval of screening (every 1, 2, or 5 years). In contrast, Yeh and colleagues (4) categorized persons as low- or high-risk based on the cumulative anthracycline dose, using a cut point of 250 mg/m2. Both groups used self-reported CHF data from the Childhood Cancer Survivor Study for CHF risk in the target population (6). Wong and colleagues supplemented this with CHF risk estimates from other studies where CHF was determined on the basis of clinical evaluation. Because data on the incidence of CHF beyond 20 to 30 years from diagnosis of childhood cancer are sparse, both groups made their best estimates (which were similar) of CHF incidence during this interval. The sensitivity of echocardiography used for the models was a key difference between the studies. Wong and colleagues used a sensitivity of 75% to 94% based on noncancer population studies comparing ejection fraction (EF) by echocardiography with radionuclide angiography. In contrast, Yeh and colleagues used a sensitivity of 25% based on a single study of adult survivors of childhood cancer comparing echocardiography with cardiac magnetic resonance imaging (cMRI) (7).
Several studies have now shown 2-dimensional speckle tracking echocardiography or strain imaging to be more sensitive in detecting left ventricular dysfunction and a stronger predictor of all-cause death than conventional 2-dimensional EF (8, 9). Implementing this method as a screening test in the CE models would likely affect the results. Few data are available on the natural progression from ALVD to CHF in young cancer survivors. Thus, both groups assumed a progression based on extrapolations from patients without cancer who developed CHF secondary to hypertension, coronary disease, and myocardial infarction (10). Of note, the length of time to progress from ALVD to CHF is strongly dependent on the cause and degree of ALVD. The magnitude of ALVD EF reduction was not considered in the prediction of CHF risk in either model. Neither model incorporated clinical information available to the clinician, such as hypertension and vascular disease, which are key determinants in the development of ALVD and progression to CHF. The effectiveness of the treatments that prevent ALVD from progressing to CHF, which is so important in CE modeling, is based on the Studies of Left Ventricular Dysfunction prevention trial, a population with hypertension and coronary disease who had an EF less than 35% (10). Regular cardiac screening in cancer survivors will likely identify ALVD and lead to initiation of cardioprotective treatment earlier and at an EF greater than 35%. It is likely that more survivors would have to be treated longer to demonstrate the effectiveness of therapy. There were other differences between the studies. Wong and colleagues validated their model in additional populations. Yeh and colleagues added 2 twists to their analysis. They assessed the utility of cMRI without echocardiography in the model, assuming a low sensitivity of echocardiography compared with cMRI. 
They also tested a model where all survivors received lifetime preventive pharmacotherapy for CHF starting 5 years after cancer diagnosis, in lieu of any ALVD screening. This approach was associated with a greater reduction in CHF risk and lower costs but would result in many survivors receiving lifetime drug therapy without any health benefit. When key differences in approach are considered, some differences in their conclusions can be expected. Wong and colleagues found that, based on risk profile, changing from every 1-, 2-, or 5-year screening intervals to 2- to 4-, 5-, and 10-year intervals maintained 80% of the health benefit of the COG-LTFU recommended schedule while decreasing the costs by 50%. Yeh and colleagues reported that the preferred strategy for survivors treated with a cumulative anthracycline dose of 250 mg/m2 or greater is screening echocardiography every 2 years or cMRI every 5 years; for low-risk survivors, the preferred strategy was either no echocardiography or cMRI every 10 years. Sensitivity analyses performed in the 2 studies reveal that CE is most influenced by the presumed effectiveness of the drug treatment of ALVD, the duration of the ALVD period before clinically apparent CHF, the absolute excess risk for CHF, the cost and accuracy of echocardiography, and the weight given to the effect of CHF on quality of life. Screening for ALVD had the most utility relatively early after cancer diagnosis, when competing causes of illness and death are less common than in older patients. In summary, the COG-LTFU guidelines (2) are intended to assist the clinician in caring for an individual patient at the most granular level, and their recommendations are considered in these excellent analyses to be relatively cost-effective from the societal perspective.
Yeh and Wong and their colleagues point out the scope of the problem and remind us that screening for cardiomyopathy can be done cost-effectively and is highly likely to improve the quality and quantity of the patient's life. They suggest to the clinician and patient many of the variables that should be explored, along with what tools to use to screen and how often to use them. The clinician and patient should be assured that screening for ALVD is a valuable undertaking and that state-of-the-art CE analyses allow for variation in their choices based on the details of the clinical presentation, patient preference, and local imaging expertise.
Effectiveness of co-trimoxazole to prevent Plasmodium falciparum malaria in HIV-positive pregnant women in sub-Saharan Africa: an open-label, randomized controlled trial.
BACKGROUND Human immunodeficiency virus (HIV) and malaria during pregnancy cause substantial perinatal mortality. As co-trimoxazole (CMX) protects children and HIV-positive adults against malaria, we compared the effectiveness of daily CMX with sulfadoxine-pyrimethamine intermittent preventive treatment (IPT-SP) on malaria risk in HIV-positive pregnant women in a Plasmodium falciparum-endemic African area. METHODS From January 2009 to April 2011, we included in a randomized noninferiority trial all HIV type 1-infected pregnant women (≤28 weeks' gestation, CD4 count ≥200 cells/µL, hemoglobin level ≥7 g/dL) in 19 health centers in Togo. Women were randomly assigned to daily CMX (800 mg/160 mg) or IPT-SP. The primary outcome was the proportion of malaria-free pregnancies. Other outcomes included malaria incidence, parasitemia, placental malaria, anemia, and infants' birth weight. RESULTS Of 264 women randomly assigned to the CMX or IPT-SP group, 126 of 132 and 124 of 132, respectively, were included in the analysis. There were 33 confirmed cases of clinical malaria among 31 women in the CMX group and 19 among 19 women in the IPT-SP group. Ninety-five of 126 (75.4%) women in the CMX group and 105 of 124 (84.7%) in the IPT-SP group remained malaria-free during their pregnancy (difference, 9.3%; 95% confidence interval [CI], -0.53 to 19.1; not meeting the predefined noninferiority criterion). The incidence rate in intention-to-treat analysis was 108.8 malaria episodes per 100 person-years in the CMX group (95% CI, 105.4-112.2) and 90.1 in the IPT-SP group (95% CI, 86.8-93.4) (not significant). Prevalence of parasitemia was 16.7% in the CMX group vs 28% in the IPT-SP group (P = .02). Histology revealed 20.3% placental malaria in the CMX group vs 24.6% in the IPT-SP group (not significant). Grade 3-4 anemia was more frequent in the CMX group (10% vs 4%; P = .008). No pregnant women died. Median birth weight was similar.
CONCLUSIONS Noninferiority of daily CMX to IPT-SP for preventing maternal malaria was not demonstrated, but CMX was safe and performed at least similarly with respect to parasitemia, placental malaria, and birth outcomes. Clinical Trials Registration: ISRCTN98835811.
Evaluation of endometrial thickness with transvaginal ultrasonography and histopathology in premenopausal women with abnormal vaginal bleeding
This study was undertaken to investigate the cut-off value of endometrial thickness measured by transvaginal ultrasonography (TvUSG) and to determine the accuracy of preoperative Pipelle biopsy in premenopausal women with abnormal vaginal bleeding. The study included 144 premenopausal women with abnormal bleeding. Endometrial thickness was measured by TvUSG, and Pipelle endometrial biopsy was then performed. Preoperative histopathologic findings of the 57 women who underwent surgery were compared with the final histopathologic examination. Of the 144 women, 113 (78.4%) had a normal and 31 (21.6%) an abnormal endometrium. The abnormal endometrium comprised 11.8% hyperplasia (simple + atypical complex), 4.2% endometrial polyp, and 5.5% adenocarcinoma. Optimal sensitivity and specificity (83.6% and 56.4%, respectively) and a negative predictive value of 95.6% for detection of an abnormal endometrium were obtained at an endometrial thickness of 8 mm. The accuracy rate of preoperative Pipelle biopsy was 94.7% in the 57 operated women. An endometrial thickness >8 mm is a stronger indication for endometrial biopsy than a thickness of 8 mm or less in premenopausal uterine bleeding. Pipelle endometrial biopsy is an accurate diagnostic procedure for the detection of high-grade endometrial lesions in premenopausal women.
Development and validation of a set of six adaptable prognosis prediction (SAP) models based on time-series real-world big data analysis for patients with cancer receiving chemotherapy: A multicenter case crossover study
BACKGROUND We aimed to develop an adaptable prognosis prediction model that could be applied at any time point during the treatment course for patients with cancer receiving chemotherapy, by applying time-series real-world big data. METHODS Between April 2004 and September 2014, 4,997 patients with cancer who had received systemic chemotherapy were registered in a prospective cohort database at Kyoto University Hospital. Of these, 2,693 patients with a death record were eligible for inclusion and were divided into training (n = 1,341) and test (n = 1,352) cohorts. In total, 3,471,521 laboratory results at 115,738 time points, representing 40 laboratory items [e.g., white blood cell counts and albumin (Alb) levels] monitored for 1 year before the death event, were used to construct prognosis prediction models. All possible prediction models comprising three different items from the 40 laboratory items (40C3 = 9,880) were generated in the training cohort, and model selection was performed in the test cohort. The fitness of the selected models was externally validated in validation cohorts from three independent settings. RESULTS A prognosis prediction model utilizing Alb, lactate dehydrogenase, and neutrophils was selected based on a strong ability to predict death events within 1-6 months, and a set of six prediction models corresponding to 1, 2, 3, 4, 5, and 6 months was developed. The area under the curve (AUC) ranged from 0.852 for the 1-month model to 0.713 for the 6-month model. External validation supported the performance of these models. CONCLUSION By applying time-series real-world big data, we successfully developed a set of six adaptable prognosis prediction models for patients with cancer receiving chemotherapy.
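The exhaustive search over three-item feature combinations can be sketched as below. This is an illustrative reconstruction, not the authors' code: the "model" is simply a sum of feature values standing in for a fitted classifier, and `auc` is the standard rank-based (Mann-Whitney) AUC.

```python
import itertools

def auc(scores, labels):
    """Rank-based (Mann-Whitney) AUC for binary labels."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_triplet(features, labels):
    """Score every 3-item combination of features and keep the best AUC.
    features: dict mapping item name -> list of values per patient."""
    best = (-1.0, ())
    for trio in itertools.combinations(features, 3):
        scores = [sum(row) for row in zip(*(features[f] for f in trio))]
        best = max(best, (auc(scores, labels), trio))
    return best
```

With 40 items this enumerates the 9,880 combinations mentioned in the abstract; model selection would then be repeated on a held-out test cohort.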
Balloon aortic valvuloplasty in children: a multicenter study in Japan.
A questionnaire was used to survey the experience of 8 Japanese institutions with percutaneous transluminal aortic valvuloplasty (PTAV) in children. Among 99 procedures reported in 88 patients, sufficient data for analysis were obtained from 76 procedures in 72 patients. In those 76 procedures the pressure gradient decreased significantly from 68±25 (range, 20-140) to 33±22 (range, 0-100) mmHg (p<0.01), whereas aortic regurgitation (AR) increased by at least one grade in 26 cases (34%). None of the parameters analyzed in this study were predictors of an increase in AR. The reduction in pressure gradient was judged as good in 44 of the 76 procedures (58%). A larger aortic valve ring diameter, a larger balloon diameter, and a larger ratio of balloon diameter to the predicted normal diameter of the aortic valve ring contributed significantly to an effective reduction of the pressure gradient. Follow-up data (mean interval, 4 years) were available for 26 of 39 clinically effective procedures. AR progressed by at least 1 grade in 11 (42%), and the pressure gradient re-developed to more than 50 mmHg in 2 cases (8%). In Japan, PTAV has been accepted as a useful procedure for valvular aortic stenosis in children, but progressive AR or re-development of the pressure gradient is not uncommon even after clinically effective PTAV.
Monitoring ship noise to assess the impact of coastal developments on marine mammals.
The potential impacts of underwater noise on marine mammals are widely recognised, but uncertainty over variability in baseline noise levels often constrains efforts to manage these impacts. This paper characterises natural and anthropogenic contributors to underwater noise at two sites in the Moray Firth Special Area of Conservation, an important marine mammal habitat that may be exposed to increased shipping activity from proposed offshore energy developments. We aimed to establish a pre-development baseline, and to develop ship noise monitoring methods using Automatic Identification System (AIS) and time-lapse video to record trends in noise levels and shipping activity. Our results detail the noise levels currently experienced by a locally protected bottlenose dolphin population, explore the relationship between broadband sound exposure levels and the indicators proposed in response to the EU Marine Strategy Framework Directive, and provide a ship noise assessment toolkit which can be applied in other coastal marine environments.
An Examination of the Factors Contributing to Adoption Decisions among Late-Diffused Technology Products
According to diffusion theory, consumer beliefs or perceptions of innovation attributes, along with external socioeconomic and media exposures, influence the decision to adopt an innovation. To examine the relative influence of beliefs, attitudes, and external variables, the current study synthesizes perspectives from the Technology Adoption Model (TAM) and diffusion theory, and presents an integrated model of consumer adoption. The article reports the results of a survey investigating the measurement model in predicting potential adoption by late adopters of cellular phones. The model confirms the importance of attitudes towards potential adoption. Also significant are the influences of media ownership on perceptions of advantage, observability, and compatibility of the innovation. Media use and change agent contacts significantly influence perceptions of complexity of the innovation. Age, income, and occupation were the sociodemographic variables that indirectly influenced adoption intention. new media & society, Vol. 5(4): 547-572. © 2003 SAGE Publications, London, Thousand Oaks, CA and New Delhi.
Multi-scale Recognition with DAG-CNNs
We explore multi-scale convolutional neural nets (CNNs) for image classification. Contemporary approaches extract features from a single output layer. By extracting features from multiple layers, one can simultaneously reason about high, mid, and low-level features during classification. The resulting multi-scale architecture can itself be seen as a feed-forward model that is structured as a directed acyclic graph (DAG-CNNs). We use DAG-CNNs to learn a set of multi-scale features that can be effectively shared between coarse and fine-grained classification tasks. While fine-tuning such models helps performance, we show that even "off-the-shelf" multi-scale features perform quite well. We present extensive analysis and demonstrate state-of-the-art classification performance on three standard scene benchmarks (SUN397, MIT67, and Scene15). In terms of the heavily benchmarked MIT67 and Scene15 datasets, our results reduce the lowest previously-reported error by 23.9% and 9.5%, respectively.
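The multi-scale idea of pooling several layers rather than only the last one can be sketched as follows; the toy "stages" here are matrix-ReLU pairs standing in for the CNN layers, not the actual architecture.

```python
import numpy as np

def multiscale_descriptor(x, layers):
    """Run a feed-forward chain of toy stages (matrix + ReLU), average-pool
    each stage's activations, and concatenate the pooled values into one
    multi-scale descriptor."""
    feats = []
    for W in layers:
        x = np.maximum(W @ x, 0.0)   # one stage of the feed-forward chain
        feats.append(x.mean())       # average-pool this stage's activations
    return np.array(feats)           # concatenation across scales
```

A single-layer approach would keep only the last entry; the DAG structure arises because every stage feeds both the next stage and the final descriptor.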
Approximations by OBDDs and the variable ordering problem
Ordered binary decision diagrams (OBDDs) and their variants are motivated by the need to represent Boolean functions in applications. Research concerning these applications leads also to problems and results interesting from a theoretical point of view. In this paper, methods from communication complexity and information theory are combined to prove that the direct storage access function and the inner product function have the following property. They have linear π-OBDD size for some variable ordering π and, for most variable orderings π′ all functions which approximate them on considerably more than half of the inputs, need exponential π′-OBDD size. These results have implications for the use of OBDDs in experiments with genetic programming.
Evaluation of Segmentation Quality via Adaptive Composition of Reference Segmentations
Evaluating image segmentation quality is a critical step for generating desirable segmented output and comparing performance of algorithms, among others. However, automatic evaluation of segmented results is inherently challenging since image segmentation is an ill-posed problem. This paper presents a framework to evaluate segmentation quality using multiple labeled segmentations which are considered as references. For a segmentation to be evaluated, we adaptively compose a reference segmentation using multiple labeled segmentations, which locally matches the input segments while preserving structural consistency. The quality of a given segmentation is then measured by its distance to the composed reference. A new dataset of 200 images, where each one has 6 to 15 labeled segmentations, is developed for performance evaluation of image segmentation. Furthermore, to quantitatively compare the proposed segmentation evaluation algorithm with the state-of-the-art methods, a benchmark segmentation evaluation dataset is proposed. Extensive experiments are carried out to validate the proposed segmentation evaluation framework.
500+ Times Faster than Deep Learning: (A Case Study Exploring Faster Methods for Text Mining StackOverflow)
Deep learning methods are useful for high-dimensional data and are becoming widely used in many areas of software engineering. Deep learners require extensive computational power and can take a long time to train, making it difficult to widely validate, repeat, and improve their results. Further, they are not the best solution in all domains. For example, recent results show that for finding related Stack Overflow posts, a tuned SVM performs similarly to a deep learner but is significantly faster to train. This paper extends that recent result by clustering the dataset, then tuning a learner within each cluster. This approach is over 500 times faster than deep learning (and over 900 times faster if we use all the cores on a standard laptop computer). Significantly, this faster approach generates classifiers nearly as good (within 2% F1 score) as the much slower deep learning method. Hence, we recommend these faster methods, since they are much easier to reproduce and utilize far fewer CPU resources. More generally, we recommend that before researchers release research results, they compare their supposedly sophisticated methods against simpler alternatives (e.g., applying simpler learners to build local models).
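The cluster-then-tune recipe can be sketched in miniature. Assumed stand-ins: a two-cluster 1-D k-means in place of the paper's clustering step, and a per-cluster majority-class predictor in place of the tuned SVMs.

```python
def kmeans_1d(xs, iters=20):
    """Tiny two-cluster 1-D k-means with extreme-point initialization."""
    centers = [min(xs), max(xs)]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in xs:
            groups[min(range(len(centers)), key=lambda i: abs(x - centers[i]))].append(x)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

def local_models(xs, ys, centers):
    """One 'learner' per cluster: here, just the cluster's majority label."""
    votes = {i: [] for i in range(len(centers))}
    for x, y in zip(xs, ys):
        votes[min(range(len(centers)), key=lambda i: abs(x - centers[i]))].append(y)
    return {i: max(set(v), key=v.count) for i, v in votes.items() if v}
```

The speedup in the paper comes from the same shape of pipeline: each local learner sees only its cluster's (smaller) data, so tuning is cheap and parallelizes across clusters.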
ICT for Sustainability: An Emerging Research Field
This introductory chapter provides definitions of sustainability, sustainable development, decoupling, and related terms; gives an overview of existing interdisciplinary research fields related to ICT for Sustainability, including Environmental Informatics, Computational Sustainability, Sustainable HCI, and Green ICT; introduces a conceptual framework to structure the effects of ICT on sustainability; and provides an overview of this book.
Broadband Bent Triangular Omnidirectional Antenna for RF Energy Harvesting
In this letter, a broadband bent triangular omnidirectional antenna is presented for RF energy harvesting. The antenna has a bandwidth for VSWR ≤ 2 from 850 MHz to 1.94 GHz. The antenna is designed to receive both horizontally and vertically polarized waves and has a stable radiation pattern over the entire bandwidth. The antenna has also been optimized for energy harvesting applications and is designed for a 100-Ω input impedance to provide passive voltage amplification and impedance matching to the rectifier. A peak efficiency of 60% and 17% is obtained for a load of 500 Ω at 980 and 1800 MHz, respectively. At a cell site, while harvesting all bands simultaneously, a voltage of 3.76 V for open circuit and 1.38 V across a load of 4.3 kΩ is obtained at a distance of 25 m using an array of two elements of the rectenna.
Malware classification based on call graph clustering
Each day, anti-virus companies receive tens of thousands of samples of potentially harmful executables. Many of the malicious samples are variations of previously encountered malware, created by their authors to evade pattern-based detection. Dealing with these large amounts of data requires robust, automatic detection approaches. This paper studies malware classification based on call graph clustering. By representing malware samples as call graphs, it is possible to abstract certain variations away, enabling the detection of structural similarities between samples. The ability to cluster similar samples together will make more generic detection techniques possible, thereby targeting the commonalities of the samples within a cluster. To compare call graphs mutually, we compute pairwise graph similarity scores via graph matchings which approximately minimize the graph edit distance. Next, to facilitate the discovery of similar malware samples, we employ several clustering algorithms, including k-medoids and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). Clustering experiments are conducted on a collection of real malware samples, and the results are evaluated against manual classifications provided by human malware analysts. Experiments show that it is indeed possible to accurately detect malware families via call graph clustering. We anticipate that in the future, call graphs can be used to analyse the emergence of new malware families, and ultimately to automate implementation of generic detection schemes.
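A stripped-down version of the pipeline might look like this. The `graph_distance` here is a cheap stand-in for approximate graph edit distance (normalized symmetric difference of edge sets), and clustering is by connected components under a distance threshold rather than the k-medoids/DBSCAN used in the paper.

```python
def graph_distance(g1, g2):
    """Stand-in for normalized graph edit distance: fraction of edges
    (as (caller, callee) pairs) not shared by the two call graphs."""
    e1, e2 = set(g1), set(g2)
    union = e1 | e2
    return len(e1 ^ e2) / len(union) if union else 0.0

def cluster(graphs, eps=0.4):
    """Group samples whose pairwise distance is at most eps, by merging
    labels (a naive union-find via relabeling)."""
    labels = list(range(len(graphs)))
    for i in range(len(graphs)):
        for j in range(i + 1, len(graphs)):
            if graph_distance(graphs[i], graphs[j]) <= eps:
                old, new = labels[j], labels[i]
                labels = [new if l == old else l for l in labels]
    return labels
```

Variants of one family share most call-graph edges and land in one cluster, while unrelated executables stay apart; the paper's matching-based edit distance additionally tolerates renamed or slightly rewired functions.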
OVARIAN HUGE SEROUS CYSTADENOMA IN ADOLESCENT GIRL: A CASE REPORT
Ovarian cysts are an extremely common gynecological problem in adolescents. The majority of ovarian cysts are benign, with few cases being malignant. Ovarian serous cystadenomas are rare in children. A 14-year-old girl presented with abdominal pain and severe abdominal distention. She underwent laparotomy, and after surgical removal the mass was found on histology to be an ovarian serous cystadenoma. In conclusion, germ cell tumors are the most important cause of giant ovarian masses in children, but epithelial tumors should not be forgotten in the differential diagnosis. Keywords: Adolescent; Ovarian Cysts/diagnosis*; Cystadenoma, Serous/surgery; Ovarian Neoplasms/surgery; Ovarian cystadenoma
Retreatment With Varenicline for Smoking Cessation in Smokers Who Have Previously Taken Varenicline: A Randomized, Placebo-Controlled Trial
The efficacy and safety of retreatment with varenicline in smokers attempting to quit were evaluated in this randomized, double-blind, placebo-controlled, multicenter trial (Australia, Belgium, Canada, the Czech Republic, France, Germany, the United Kingdom, and the United States). Participants were generally healthy adult smokers (≥ 10 cigarettes/day) with ≥ 1 prior quit attempt (≥ 2 weeks) using varenicline and no quit attempts in the previous 3 months; they were randomly assigned (1:1) to 12 weeks' varenicline (n = 251) or placebo (n = 247) treatment, with individual counseling, plus 40 weeks' nontreatment follow-up. The primary efficacy end point was the carbon monoxide-confirmed (≤ 10 ppm) continuous abstinence rate for weeks 9-12, which was 45.0% (varenicline; n = 249) vs. 11.8% (placebo; n = 245; odds ratio: 7.08; 95% confidence interval: 4.34, 11.55; P < 0.0001). Common varenicline group adverse events were nausea, abnormal dreams, and headache, with no reported suicidal behavior. Varenicline is efficacious and well tolerated in smokers who have previously taken it. Abstinence rates are comparable with rates reported for varenicline-naive smokers.
Fiducial registration error and target registration error are uncorrelated
Image-guidance systems based on fiducial registration typically display some measure of registration accuracy based on the goodness of fit of the fiducials. A common measure is fiducial registration error (FRE), which equals the root-mean-square error in fiducial alignment between image space and physical space. It is natural for the surgeon to regard the displayed estimate of error as an indication of the accuracy of the system's ability to provide guidance to surgical targets for a given case. Thus, when the estimate is smaller than usual, it may be assumed that the target registration error (TRE) is likely to be smaller than usual. We show that this assumption, while intuitively convincing, is in fact wrong. We show it in two ways. First, we prove to first order that for a given system with a given level of normally distributed fiducial localization error, all measures of goodness of fit are statistically independent of TRE, and therefore FRE and TRE are uncorrelated. Second, we demonstrate by means of computer simulations that they are uncorrelated for the exact problem as well. Since TRE is the true measure of registration accuracy of importance to the success of the surgery, our results show that no estimate of accuracy for a given patient that is based on goodness of fiducial fit for that patient gives any information whatever about true registration accuracy for that patient. Therefore surgeons should stop using such measures as indicators of registration quality for the patients on whom they are about to operate.
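The paper's claim can be reproduced with a small Monte Carlo sketch. The fiducial layout, target position, and noise level below are hypothetical choices, and registration uses the standard Kabsch/Procrustes solution rather than any system-specific solver:

```python
import numpy as np

def kabsch(A, B):
    """Rigid transform (R, t) minimizing ||R @ A + t - B|| for 3xN point sets."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((A - ca) @ (B - cb).T)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

rng = np.random.default_rng(0)
fids = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100.0]]).T  # image-space fiducials
target = np.array([[50.0], [50.0], [50.0]])  # surgical target away from the fiducials
sigma = 1.0  # normally distributed fiducial localization error

fre, tre = [], []
for _ in range(3000):
    measured = fids + rng.normal(0.0, sigma, fids.shape)  # physical-space localizations
    R, t = kabsch(fids, measured)
    resid = R @ fids + t - measured
    fre.append(np.sqrt(np.mean(np.sum(resid ** 2, axis=0))))  # goodness of fiducial fit
    tre.append(np.linalg.norm(R @ target + t - target))       # true transform is identity

r = np.corrcoef(fre, tre)[0, 1]
print(f"corr(FRE, TRE) = {r:+.3f}")  # near zero: FRE carries no information about TRE
```

Over many simulated cases the sample correlation hovers near zero, matching the paper's first-order proof that goodness of fiducial fit is statistically independent of TRE.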
Human Gait Recognition Using Patch Distribution Feature and Locality-Constrained Group Sparse Representation
In this paper, we propose a new patch distribution feature (PDF), referred to as Gabor-PDF, for human gait recognition. We represent each gait energy image (GEI) as a set of local augmented Gabor features, which concatenate the Gabor features extracted from different scales and different orientations together with the X-Y coordinates. We learn a global Gaussian mixture model (GMM), referred to as the universal background model, with the local augmented Gabor features from all the gallery GEIs; then, each gallery or probe GEI is further expressed as the normalized parameters of an image-specific GMM adapted from the global GMM. Observing that one video is naturally represented as a group of GEIs, we also propose a new classification method called locality-constrained group sparse representation (LGSR) to classify each probe video by minimizing the weighted l1,2 mixed-norm-regularized reconstruction error with respect to the gallery videos. In contrast to the standard group sparse representation method, which is a special case of LGSR, the group sparsity and local smooth sparsity constraints are both enforced in LGSR. Our comprehensive experiments on the benchmark USF HumanID database demonstrate the effectiveness of the newly proposed feature Gabor-PDF and the new classification method LGSR for human gait recognition. Moreover, LGSR using the new feature Gabor-PDF achieves the best average Rank-1 and Rank-5 recognition rates on this database among all gait recognition algorithms proposed to date.
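The group-sparsity effect of the weighted l1,2 mixed norm can be seen in its proximal operator, which soft-thresholds whole coefficient groups at once. The toy vector, grouping, and threshold below are illustrative only, not the paper's actual optimizer:

```python
import numpy as np

def prox_group_l2(x, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 (block soft-thresholding).
    A group whose l2 norm falls below lam is zeroed out entirely, which is
    the mechanism that selects whole gallery videos in group sparse
    representation rather than individual coefficients."""
    out = np.zeros_like(x)
    for g in groups:
        nrm = np.linalg.norm(x[g])
        if nrm > lam:
            out[g] = (1.0 - lam / nrm) * x[g]  # shrink the surviving group
    return out

x = np.array([3.0, 4.0, 0.1, 0.1])
groups = [[0, 1], [2, 3]]  # two coefficient groups (e.g., two gallery videos)
shrunk = prox_group_l2(x, groups, lam=1.0)
print(shrunk)  # first group shrunk toward zero, second group zeroed as a block
```

The first group (norm 5) survives and is rescaled by 0.8; the second group (norm about 0.14) is eliminated as a block, which is how the mixed norm encourages reconstruction from only a few gallery videos.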
Development of "Souryu-IV" and "Souryu-V": Serially connected crawler vehicles for in-rubble searching operations
The authors have developed Souryu-I, Souryu-II, and Souryu-III, connected crawler vehicles that can travel in rubble. These machines were developed for the purpose of finding survivors trapped inside collapsed buildings. However, when conducting experiments in post-disaster environments with Souryu-III, mechanical and control limitations were identified. This led the authors to develop novel crawler units using crawler tracks strengthened with metal, and two improved models: Souryu-IV, composed of three double-sided crawler bodies, a joint driving unit, a blade-spring joint mechanism, and cameras; and Souryu-V, composed of mono-tread crawler bodies, elastic-rod joint mechanisms, and cameras. The authors then conducted basic motion experiments and teleoperated control experiments on off-road fields with Souryu-IV and Souryu-V. Their high performance in experiments simulating urban rescue operations was confirmed. However, several problems were identified during the driving experiments.
A Study of Frida Kahlo's Fashion Style in Contemporary Fashion
Post-modern multiculturalism is spreading in many ways, and interest in other cultures has drawn attention to Latin American culture, which has steadily appeared as a main fashion concept of the international collections. Frida Kahlo (1907-1954), a representative of Latin American culture and a Mexican female surrealist painter, offers inspiration for contemporary fashion design. The purpose of this study is to build a correct understanding of and research on Latin American culture and to expand the expressive range of fashion design. In the process, this article examines the costume of Mexico belonging to the Indio culture, which had long been regarded as the Other, and analyzes Frida Kahlo's fashion style: the Tehuana costume and her masculine style. The Tehuana costume is the traditional style of the city of Tehuantepec, located in southeastern Mexico, and is characterized by colorful floral patterns and a long skirt. Three fashion concepts were derived from her fashion style: tradition & modern, love & farewell, and masculine & feminine. The scope of this study covers Haute Couture and Prêt-à-porter women's wear collections and photos from 1998 to 2010 in Europe, the USA, and South America. By classifying the collected photos into each concept, three styles were derived: ethnic, romantic, and androgynous. These fashion styles were analyzed through the formative elements of dress (color, silhouette, pattern, material, and clothing construction) in order to determine how the costumes of other cultures influenced contemporary fashion. As a design inspiration, Frida Kahlo extends her influence to innerwear, accessories, and hairstyles, as well as women's wear. These inspirations have emerged steadily from the past and will continue far into the future as an expression of a fashion design concept.
Exercise augments the acute anabolic effects of intradialytic parenteral nutrition in chronic hemodialysis patients.
Decreased dietary protein intake and hemodialysis (HD)-associated protein catabolism are among several factors that predispose chronic hemodialysis (CHD) patients to uremic malnutrition and associated muscle wasting. Intradialytic parenteral nutrition (IDPN) acutely reverses the net negative whole body and forearm muscle protein balances observed during the HD procedure. Exercise has been shown to improve muscle protein homeostasis, especially if performed with adequately available intramuscular amino acids. We hypothesized that exercise performance would provide additive anabolic effects to the beneficial effects of IDPN. We studied six CHD patients at two separate HD sessions: 1) IDPN administration only and 2) IDPN + exercise. Patients were studied 2 h before, during, and 2 h after an HD session by use of a primed constant infusion of l-[1-(13)C]leucine and l-[ring-(2)H(5)] phenylalanine. Exercise combined with IDPN promoted additive twofold increases in forearm muscle essential amino acid uptake (455 +/- 105 vs. 229 +/- 38 nmol.100 ml(-1).min(-1), P < 0.05) and net muscle protein accretion (125 +/- 37 vs. 56 +/- 30 microg.100 ml(-1).min(-1), P < 0.05) during HD compared with IDPN alone. Measurements of whole body protein homeostasis and energy expenditure were not altered by exercise treatment. In conclusion, exercise in the presence of adequate nutritional supplementation has potential as a therapeutic intervention to blunt the loss of muscle mass in CHD patients.
An Information-Maximization Approach to Blind Separation and Blind Deconvolution
We derive a new self-organizing learning algorithm that maximizes the information transferred in a network of nonlinear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximization has extra properties not found in the linear case (Linsker 1989). The nonlinearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalization of principal components analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to 10 speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximization provides a unifying framework for problems in "blind" signal processing.
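A minimal numeric sketch of this idea, using the natural-gradient form of the infomax update on a synthetic two-source mixture. The sources, mixing matrix, whitening step, and learning schedule are all illustrative choices, not the paper's exact setup (which uses the plain stochastic gradient of information transfer):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
# Two independent, heavy-tailed (super-Gaussian) sources, as the tanh score assumes.
S = np.vstack([rng.laplace(size=n), rng.standard_t(df=3, size=n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # unknown mixing matrix
X = A @ S

# Whitening is not required by infomax, but it speeds up convergence.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

W, lr, batch = np.eye(2), 0.01, 200
for _ in range(50):  # epochs
    for i in range(0, n, batch):
        u = W @ Z[:, i:i + batch]
        m = u.shape[1]
        # Natural-gradient infomax / ML-ICA update with tanh score function:
        W += lr * (np.eye(2) - np.tanh(u) @ u.T / m) @ W

U = W @ Z  # recovered sources, up to permutation and scaling
C = np.abs(np.corrcoef(np.vstack([U, S]))[:2, 2:])  # |corr| of recovered vs. true
print(np.round(C, 2))  # each row should have one entry near 1
```

After training, each recovered component correlates strongly with exactly one of the original sources, illustrating the higher-order separation that linear PCA cannot achieve.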
Multitask Learning
Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need for supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
Design and testing of a low-cost robotic wheelchair prototype
Many people who are mobility impaired are, for a variety of reasons, incapable of using an ordinary wheelchair. In some instances, a power wheelchair also cannot be used, usually because of the difficulty the person has in controlling it (often due to additional disabilities). This paper describes two low-cost robotic wheelchair prototypes that assist the operator of the chair in avoiding obstacles, going to pre-designated places, and maneuvering through doorways and other narrow or crowded areas. These systems can be interfaced to a variety of input devices, and can give the operator as much or as little moment-by-moment control of the chair as they wish. This paper describes both systems, the evolution from one system to another, and the lessons learned.
Design and implementation of low-profile contactless battery charger using planar printed circuit board windings as energy transfer device
This paper presents the practical details involved in the design and implementation of a contactless battery charger that employs a pair of neighboring printed circuit board (PCB) windings as a contactless energy transfer device. A prototype contactless battery charger developed for application with cellular phones is used as an example to address the design considerations for the PCB windings and energy transfer circuit, and demonstrates the performance of the contactless charger adapted to a practical application system.
Resin composite--state of the art.
OBJECTIVES The objective is to review the current state of the art of dental composite materials. METHODS An outline of the most important aspects of dental composites was created, and a subsequent literature search for articles related to their formulation, properties and clinical considerations was conducted using PubMed followed by hand searching citations from relevant articles. RESULTS The current state of the art of dental composites includes a wide variety of materials with a broad range of mechanical properties, handling characteristics, and esthetic possibilities. This highly competitive market continues to evolve, with the major emphasis in the past being to produce materials with adequate strength, and high wear resistance and polishability retention. The more recent research and development efforts have addressed the issue of polymerization shrinkage and its accompanying stress, which may have a deleterious effect on the composite/tooth interfacial bond. Current efforts are focused on the delivery of materials with potentially therapeutic benefits and self-adhesive properties, the latter leading to truly simplified placement in the mouth. SIGNIFICANCE There is no one ideal material available to the clinician, but the commercial materials that comprise the current armamentarium are of high quality and when used appropriately, have proven to deliver excellent clinical outcomes of adequate longevity.
Headscarves: A comparison of public thought and public policy in Germany and the Netherlands
Abstract This article focuses on public debates and public policy on the Islamic headscarf in the Netherlands and Germany. In the Netherlands the Islamic headscarf meets with an accommodating policy reaction, while in Germany some eight federal states have introduced legislation to ban the headscarf. This difference is explained, so I argue, by national differences in citizenship traditions. While the Netherlands represents a multicultural model, Germany used to be the paradigmatic example of an ethno‐cultural model of citizenship. Yet, the reaction of the German left to the headscarf, while often non‐accommodating, is very differently inspired by German history than that of the right. A commonality is that in both countries the issue is framed as a conflict between public neutrality and religious freedom, not gender equality. An effect of the focus in the debate on neutrality is that it obscures the agency of Islamic women and the gender dynamics in Islamic communities.
Text Summarization Techniques: A Brief Survey
In recent years, there has been an explosion in the amount of text data from a variety of sources. This volume of text is an invaluable source of information and knowledge, which needs to be effectively summarized to be useful. In this review, the main approaches to automatic text summarization are described. We review the different processes for summarization and describe the effectiveness and shortcomings of the different methods.
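As a concrete instance of the simplest family of methods surveyed here (frequency-based extractive summarization), the sketch below scores sentences by the average document frequency of their content words; the tiny stopword list is a placeholder for a real one:

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "to", "in", "and", "is", "it", "that",
        "this", "for", "on", "as", "are", "be", "was"}

def summarize(text, k=2):
    """Extractive summary: pick the k sentences whose content words are, on
    average, the most frequent in the document, and keep them in order."""
    sents = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]
    freq = Counter(w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP)

    def score(sent):
        ws = [w for w in re.findall(r"[a-z']+", sent.lower()) if w not in STOP]
        return sum(freq[w] for w in ws) / max(len(ws), 1)

    top = sorted(sents, key=score, reverse=True)[:k]
    return " ".join(sorted(top, key=sents.index))  # restore original order

doc = ("Neural networks learn representations. "
       "Neural networks power modern summarization systems. "
       "The weather was pleasant yesterday. "
       "Summarization systems compress long documents.")
print(summarize(doc, k=2))
```

On the toy document, the off-topic weather sentence scores lowest and is dropped, while the two sentences sharing the document's frequent terms are kept in their original order.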
From high heels to weed attics: a syntactic investigation of chick lit and literature
Stylometric analysis of prose is typically limited to classification tasks such as authorship attribution. Since the models used are typically black boxes, they give little insight into the stylistic differences they detect. In this paper, we characterize two prose genres syntactically: chick lit (humorous novels on the challenges of being a modern-day urban female) and high literature. First, we develop a top-down computational method based on existing literary-linguistic theory. Using an off-the-shelf parser we obtain syntactic structures for a Dutch corpus of novels and measure the distribution of sentence types in chick-lit and literary novels. The results show that literature contains more complex (subordinating) sentences than chick lit. Secondly, a bottom-up analysis is made of specific morphological and syntactic features in both genres, based on the parser’s output. This shows that the two genres can be distinguished along certain features. Our results indicate that detailed insight into stylistic differences can be obtained by combining computational linguistic analysis with literary theory.
Cognitive neural prosthetics
The cognitive neural prosthetic (CNP) is a very versatile method for assisting paralyzed patients and patients with amputations. The CNP records the cognitive state of the subject, rather than signals strictly related to motor execution or sensation. We review a number of high-level cortical signals and their application for CNPs, including intention, motor imagery, decision making, forward estimation, executive function, attention, learning, and multi-effector movement planning. CNPs are defined by the cognitive function they extract, not the cortical region from which the signals are recorded. However, some cortical areas may be better than others for particular applications. Signals can also be extracted in parallel from multiple cortical areas using multiple implants, which in many circumstances can increase the range of applications of CNPs. The CNP approach relies on scientific understanding of the neural processes involved in cognition, and many of the decoding algorithms it uses also have parallels to underlying neural circuit functions. Glossary: cognitive neural prosthetics (CNPs) are instruments that consist of an array of electrodes, a decoding algorithm, and an external device controlled by the processed cognitive signal; decoding algorithms are computer algorithms that interpret neural signals for the purposes of understanding their function or for providing control signals to machines.
The relationship between job satisfaction, burnout, and turnover intention among physicians from urban state-owned medical institutions in Hubei, China: a cross-sectional study
BACKGROUND Throughout China, a growing number of physicians are leaving or intending to depart from their organizations owing to job dissatisfaction. Little information is available about the role of occupational burnout in this association. We set out to analyze the relationship between job satisfaction, burnout, and turnover intention, and further to determine whether occupational burnout can serve as a mediator among Chinese physicians from urban state-owned medical institutions. METHODS A cross-sectional survey was carried out in March 2010 in Hubei Province, central China. The questionnaires assessed sociodemographic characteristics, job satisfaction, burnout, and turnover intention. The job satisfaction and occupational burnout instruments were obtained by modifying the Chinese Physicians' Job Satisfaction Questionnaire (CPJSQ) and the Chinese Maslach Burnout Inventory (CMBI), respectively. Such statistical methods as one-way ANOVA, Pearson correlation, GLM-univariate and structural equation modeling were used. RESULTS Of the 1600 physicians surveyed, 1451 provided valid responses. The respondents had medium scores (3.18 +/-0.73) on turnover intention, in which there was significant difference among the groups from three urban areas with different development levels. Turnover intention, which significantly and negatively related to all job-satisfaction subscales, positively related to each subscale of burnout syndrome. Work environment satisfaction (b = -0.074, p < 0.01), job rewards satisfaction (b = -0.073, p < 0.01), organizational management satisfaction (b = -0.146, p < 0.01), and emotional exhaustion (b = 0.135, p < 0.01) were identified as significant direct predictors of the turnover intention of physicians, with 41.2% of the variance explained unitedly, under the control of sociodemographic variables, among which gender, age, and years of service were always significant. 
However, satisfaction with the job itself was no longer significant, and the estimated parameter for job rewards satisfaction became smaller after the burnout syndrome variables were included. As congregated latent concepts, job satisfaction had both significant direct effects (gamma21 = -0.32, p < 0.01) and indirect effects (gamma11 × beta21 = -0.13, p < 0.01) through occupational burnout (62% explained) as a mediator on turnover intention (47% explained). CONCLUSIONS Our study reveals that several, but not all, dimensions of both job satisfaction and burnout syndrome are relevant factors affecting physicians' turnover intention, and there may be partial mediation effects of occupational burnout, mainly through emotional exhaustion, within the impact of job satisfaction on turnover intention. This suggests that enhancements in job satisfaction can be expected to reduce physicians' intentions to quit through the intermediary role of burnout as well as through the direct path. It is hoped that these findings will offer some clues for health-sector managers to keep their physician resource motivated and stable.
Spatiotemporal dynamics of functional clusters of neurons in the mouse motor cortex during a voluntary movement.
Functional clustering of neurons is frequently observed in the motor cortex. However, it is unknown if, when, and how fine-scale (<100 μm) functional clusters form relative to voluntary forelimb movements. In addition, the implications of clustering remain unclear. To address these issues, we conducted two-photon calcium imaging of mouse layer 2/3 motor cortex during a self-initiated lever-pull task. In the imaging session after 8-9 days of training, head-restrained mice had to pull a lever for ∼600 ms to receive a water drop, and then had to wait for >3 s to pull it again. We found two types of task-related cells in the mice: cells whose peak activities occurred during lever pulls (pull cells) and cells whose peak activities occurred after the end of lever pulls. The activity of pull cells was strongly associated with lever-pull duration. In ∼40% of imaged fields, functional clusterings were temporally detected during the lever pulls. Spatially, there were ∼70-μm-scale clusters that consisted of more than four pull cells in ∼50% of the fields. Ensemble and individual activities of pull cells within the cluster more accurately predicted lever movement trajectories than activities of pull cells outside the cluster. This was likely because clustered pull cells were more often active in the individual trials than pull cells outside the cluster. This higher fidelity of activity was related to higher trial-to-trial correlations of activities of pairs within the cluster. We propose that strong recurrent network clusters may represent the execution of voluntary movements.
Cannabis use in palliative care - an examination of the evidence and the implications for nurses.
AIM AND OBJECTIVE Examine the pharmaceutical qualities of cannabis including a historical overview of cannabis use. Discuss the use of cannabis as a clinical intervention for people experiencing palliative care, including those with life-threatening chronic illness such as multiple sclerosis and motor neurone disease [amyotrophic lateral sclerosis] in the UK. BACKGROUND The non-medicinal use of cannabis has been well documented in the media. There is a growing scientific literature on the benefits of cannabis in symptom management in cancer care. Service users, nurses and carers need to be aware of the implications for care and treatment if cannabis is being used medicinally. DESIGN A comprehensive literature review. METHOD Literature searches were made of databases from 1996 using the term cannabis and the combination terms of cannabis and palliative care; symptom management; cancer; oncology; chronic illness; motor neurone disease/amyotrophic lateral sclerosis; and multiple sclerosis. Internet material provided for service users searching for information about the medicinal use of cannabis was also examined. RESULTS The literature on the use of cannabis in health care repeatedly refers to changes for users that may be equated with improvement in quality of life as an outcome of its use. This has led to increased use of cannabis by these service users. However, the cannabis used is usually obtained illegally and can have consequences for those who choose to use it for its therapeutic value and for nurses who are providing care. RELEVANCE TO CLINICAL PRACTICE Questions and dilemmas are raised concerning the role of the nurse when caring and supporting a person making therapeutic use of cannabis.
Improved MPPT Algorithms for Rapidly Changing Environmental Conditions
The first part of this paper gives an overview of the maximum power point tracking (MPPT) methods for photovoltaic (PV) inverters presently reported in the literature. The most well-known and popular methods, like perturb and observe (P&O), incremental conductance (INC), and constant voltage (CV), are presented. These methods, especially P&O, have been treated by many works that aim to overcome their shortcomings, either by optimizing the methods or by combining them. In the second part of the paper an improvement for the P&O and INC methods is proposed, which prevents these algorithms from getting confused during rapidly changing irradiation conditions and considerably increases the efficiency of the MPPT.
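The basic P&O loop discussed above can be sketched in a few lines. The concave power curve here is a toy stand-in for a real module's P-V characteristic, and the fixed step size is exactly what causes the steady-state oscillation (and the confusion under fast irradiance changes) that the improved algorithms target:

```python
def pv_power(v, v_mpp=30.0, p_max=200.0):
    """Toy concave P-V curve with its maximum power point at v_mpp volts."""
    return max(p_max - 0.5 * (v - v_mpp) ** 2, 0.0)

def perturb_and_observe(p_curve, v0=20.0, step=0.5, iters=200):
    """Classic P&O: keep perturbing the operating voltage in the same
    direction while power rises; reverse when power falls. The operating
    point ends up oscillating around the MPP within about one step."""
    v, direction = v0, +1.0
    p_prev = p_curve(v)
    for _ in range(iters):
        v += direction * step
        p = p_curve(v)
        if p < p_prev:
            direction = -direction  # power dropped, so reverse the perturbation
        p_prev = p
    return v

v_final = perturb_and_observe(pv_power)
print(f"settled near v = {v_final:.1f} V (true MPP at 30.0 V)")
```

Because the power comparison cannot distinguish a perturbation-induced change from an irradiance-induced one, a rapid irradiance ramp can drive this loop in the wrong direction, which is the failure mode the paper's improved P&O and INC variants address.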
The impact of data replication on job scheduling performance in the Data Grid
In the Data Grid environment, the primary goal of data replication is to shorten the data access time experienced by the job and consequently reduce the job turnaround time. After introducing a Data Grid architecture that supports efficient data access for the Grid job, dynamic data replication algorithms are put forward. Combined with different Grid scheduling heuristics, the performances of the data replication algorithms are evaluated in various simulations.
Social Interactions and Well-Being: The Surprising Power of Weak Ties.
Although we interact with a wide network of people on a daily basis, the social psychology literature has primarily focused on interactions with close friends and family. The present research tested whether subjective well-being is related not only to interactions with these strong ties but also to interactions with weak social ties (i.e., acquaintances). In Study 1, students experienced greater happiness and greater feelings of belonging on days when they interacted with more classmates than usual. Broadening the scope in Studies 2A and 2B to include all daily interactions (with both strong and weak ties), we again found that weak ties are related to social and emotional well-being. The current results highlight the power of weak ties, suggesting that even social interactions with the more peripheral members of our social networks contribute to our well-being.
Odometry-based online extrinsic sensor calibration
In recent years vehicles have been equipped with more and more sensors for environment perception. Among these sensors are cameras, RADAR, single-layer and multi-layer LiDAR. One key challenge for the fusion of these sensors is sensor calibration. In this paper we present a novel extrinsic calibration algorithm based on sensor odometry. Given the time-synchronized delta poses of two sensors our technique recursively estimates the relative pose between these sensors. The method is generic in that it can be used to estimate complete 6DOF poses, given the sensors provide a 6DOF odometry, as well as 3DOF poses (planar offset and yaw angle) for sensors providing a 3DOF odometry, like a single-beam LiDAR. We show that the proposed method is robust against motion degeneracy and present results on both simulated and real world data using an inertial navigation system (INS) and a stereo camera system.
The biequivalence of locally cartesian closed categories and Martin-Löf type theories
Seely's paper "Locally cartesian closed categories and type theory" contains a well-known result in categorical type theory: that the category of locally cartesian closed categories is equivalent to the category of Martin-L\"of type theories with Pi-types, Sigma-types and extensional identity types. However, Seely's proof relies on the problematic assumption that substitution in types can be interpreted by pullbacks. Here we prove a corrected version of Seely's theorem: that the B\'enabou-Hofmann interpretation of Martin-L\"of type theory in locally cartesian closed categories yields a biequivalence of 2-categories. To facilitate the technical development we employ categories with families as a substitute for syntactic Martin-L\"of type theories. As a second result we prove that if we remove Pi-types the resulting categories with families are biequivalent to left exact categories.
Asthma-predictive-index, bronchial-challenge, sputum eosinophils in acutely wheezing preschoolers.
BACKGROUND Most preschoolers with viral wheezing exacerbations are not atopic. AIM To test in a prospective controlled trial whether wheezing preschoolers presenting to the ED differ from the above in three domains defining asthma: atopic characteristics based on the stringent asthma predictive index (S-API), characteristics of bronchial hyper-responsiveness (BHR), and airway inflammation. METHODS The S-API was prospectively collected in 41 preschoolers (age 31.9 ± 17.4 months, range 1-6 years) presenting to the ED with acute wheezing and compared to healthy preschoolers (n = 109) from our community (community control group). Thirty of the 41 recruited preschoolers performed two sets of bronchial challenge tests (BCT; methacholine and adenosine) within 3 weeks and following 3 months of the acute event and were compared to 30 consecutive ambulatory preschoolers who performed BCT for diagnostic workup in our laboratory (ambulatory control group). On presentation, induced sputum (IS) was obtained from 22 of the 41 children. OUTCOMES Primary: S-API; secondary: BCT characteristics and percent eosinophils in IS. RESULTS Significantly more wheezing preschoolers were S-API positive compared with the community control group: 20/41 (48.7%) versus 15/109 (13.7%, P < 0.001). All methacholine BCTs (30/30, 100%) were positive, compared with 13/14 (92.8%) in the ambulatory control group (P = 0.32). However, 23/27 (85.2%) were adenosine-BCT positive versus 3/17 (17.5%) in the ambulatory control group (P < 0.001). The diagnostic IS success rate was 18/22 (81.8%). Unexpectedly, 9/18 (50.0%) showed eosinophilia in the IS. CONCLUSIONS Wheezing preschoolers presenting to the ED are a unique population with a significantly higher rate of positive S-API and adenosine-BCT compared with controls, and they frequently (50%) express eosinophilic airway inflammation.
Needle: Leveraging Program Analysis to Analyze and Extract Accelerators from Whole Programs
Technology constraints have increasingly led to the adoption of specialized coprocessors, i.e., hardware accelerators. The first challenge that computer architects encounter is identifying "what to specialize in the program". We demonstrate that this requires precise enumeration of program paths based on dynamic program behavior. We hypothesize that path-based [4] accelerator offloading leads to good coverage of dynamic instructions and improves energy efficiency. Unfortunately, hot paths across programs demonstrate diverse control flow behavior. Accelerators (typically based on dataflow execution) often lack energy-efficient, complexity-effective, and high-performance (e.g., branch prediction) support for control flow. We have developed NEEDLE, an LLVM-based compiler framework that leverages dynamic profile information to identify, merge, and offload acceleratable paths from whole applications. NEEDLE derives insight into what code coverage (and consequently energy reduction) an accelerator can achieve. We also develop a novel program abstraction for offload called Braid, which merges common code regions across different paths to improve coverage of the accelerator while trading off the increase in dataflow size. This enables coarse-grained offloading, reducing interaction with the host CPU core. To prepare the Braids and paths for acceleration, NEEDLE generates software frames. Software frames enable energy-efficient speculative execution on accelerators. They are accelerator-microarchitecture independent and support speculative execution, including memory operations. NEEDLE is automated and has been used to analyze 225K paths across 29 workloads. It filtered and ranked 154K paths for acceleration across unmodified SPEC, PARSEC and PERFECT workload suites. We target NEEDLE's offload regions toward a CGRA and demonstrate 34% performance and 20% energy improvement.
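NEEDLE itself operates on LLVM IR with real dynamic profiles; purely as a toy illustration of the first step (enumerating and ranking acyclic program paths), the sketch below ranks entry-to-exit paths of a small CFG by a min-edge-frequency heuristic, which is only an upper bound on true path frequency:

```python
def rank_paths(cfg, entry, exit_, edge_freq):
    """Enumerate all entry-to-exit paths of an acyclic CFG and rank them by a
    crude hotness estimate: the minimum profiled frequency of any edge on the
    path. Exact path profiling (e.g., Ball-Larus) would count paths directly."""
    paths = []

    def dfs(node, path):
        if node == exit_:
            paths.append(path)
            return
        for succ in cfg.get(node, []):
            dfs(succ, path + [succ])

    dfs(entry, [entry])
    hotness = lambda p: min(edge_freq[e] for e in zip(p, p[1:]))
    return sorted(paths, key=hotness, reverse=True)

# Hypothetical diamond-shaped CFG: A -> {B, C} -> D, with a hot B side.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
edge_freq = {("A", "B"): 90, ("B", "D"): 90, ("A", "C"): 10, ("C", "D"): 10}
ranked = rank_paths(cfg, "A", "D", edge_freq)
print(ranked)  # the hot path A-B-D comes first
```

Ranking paths this way makes the offloading intuition concrete: accelerating only the top-ranked path already covers the bulk of the dynamic instructions in this toy CFG, which is the coverage argument the paper makes at scale.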
A Haskell compiler for signal transforms
Building a reusable, auto-tuning code generator from scratch is a challenging problem, requiring many careful design choices. We describe HSpiral, a Haskell compiler for signal transforms that builds on the foundational work of Spiral. Our design leverages many Haskell language features to ensure that our framework is reusable, flexible, and efficient. As well as describing the design of our system, we show how to extend it to support new classes of transforms, including the number-theoretic transform and a variant of the split-radix algorithm that results in reduced operation counts. We also show how to incorporate rewrite rules into our system to reproduce results from previous literature on code generation for the fast Fourier transform. Although the Spiral project demonstrated significant advances in automatic code generation, it has not been widely used by other researchers. HSpiral is freely available under an MIT-style license, and we are actively working to turn it into a tool that furthers our own research goals and serves as a foundation for other research groups' work in developing new implementations of signal transform algorithms.
In a Different Place: Pilgrimage, Gender and Politics at a Greek Island Shrine
This textured account of a modern pilgrimage combines ethnographic detail, theory, and personal reflection. Visited by thousands of pilgrims yearly, the Church of the Madonna of the Annunciation on the Aegean island of Tinos is a site where different interests - sacred and secular, local and national, personal and official - come together. Exploring the shrine and its surrounding town, Jill Dubisch shares her insights into the intersection of social, religious and political life in Greece. Along the way she develops the idea of pilgrimage - journeying away from home in search of the miraculous - as a metaphor for anthropological fieldwork. Dubisch examines in detail the process of pilgrimage itself, its relationship to Orthodox belief and practice, the motivations and behaviour of pilgrims, the relationship between religion and Greek national identity, and the gendered nature of religious roles. Seeking to evoke rather than simply describe, her book presents readers with a sense of the emotion, colour, and power of pilgrimage at this Greek island shrine.
Swarm Control of UAVs for Cooperative Hunting with DDDAS
Swarm control is a problem of increasing importance with technological advancements. Recently, governments have begun employing UAVs for reconnaissance, including swarms of drones searching for evasive targets. An agent-based simulation for dynamic cooperative cleaning is augmented with additional behaviors and implemented into a Dynamic Data-Driven Application System (DDDAS) framework for dynamic swarm control.
Experimental gastric mucosal injury: laboratory models reveal mechanisms of pathogenesis and new therapeutic strategies.
Gastric ulcer is a multifaceted, pluricausal illness. Knowledge of the pathophysiology of gastric ulcer disease remains incomplete. Current pharmacological management of gastric ulceration is directed primarily at the reduction or neutralization of gastric acid secretion despite evidence that patients with this disease often exhibit normal gastric secretory activity. Attempts have been made to prevent or reduce gastric mucosal injury by cytoprotective agents without diminishing gastric acidity. We review several alternate explanations for the cause of gastric ulcers by examining various experimental models of gastric mucosal damage, including ethanol-, stress-, and nonsteroidal antiinflammatory drug-induced gastric lesions. We also discuss possible new strategies for the treatment of ulcer disease, particularly novel pharmacological targets arising from research conducted with these models. Growing realization that factors other than gastric secretion contribute significantly to the development of gastric ulcer disease prompts the conclusion that these same factors represent viable treatment alternatives.
Microbiology and the species problem
This paper examines the species problem in microbiology and its implications for the species problem more generally. Given the different meanings of ‘species’ in microbiology, the use of ‘species’ in biology is more multifarious and problematic than commonly recognized. So much so, that recent work in microbial systematics casts doubt on the existence of a prokaryote species category in nature. It also casts doubt on the existence of a general species category for all of life (one that includes both prokaryotes and eukaryotes). Prokaryote biology also undermines recent attempts to save the species category, such as the suggestion that species are metapopulation lineages and the idea that ‘species’ is a family resemblance concept.
A Reconfigurable Differential CMOS RF Energy Scavenger With 60% Peak Efficiency and -21 dBm Sensitivity
A differential RF-DC CMOS converter for RF energy scavenging based on a reconfigurable voltage rectifier topology is presented. The converter efficiency and sensitivity are optimized thanks to the proposed reconfigurable architecture. Prototypes, realized in a 130 nm CMOS process, provide a regulated output voltage of ~2 V when working at 868 MHz, with a -21 dBm sensitivity. The circuit efficiency peaks at 60%, remaining above 40% over an 18 dB input power range.
Hardware Implementation for the Echo Canceller System based Subband Technique using TMS320C6713 DSP Kit
Acoustic echo cancellation is very important in today's communication applications; in view of this importance, we have implemented such a system in hardware using the DSP TMS320C6713 Starter Kit (DSK). The acoustic echo canceller was implemented with an 8-subband technique using the Least Mean Square (LMS) and Normalized Least Mean Square (NLMS) algorithms. The system was evaluated by measuring performance in terms of the Echo Return Loss Enhancement (ERLE) and Mean Square Error (MSE) factors. Keywords—Acoustic echo canceller; Least Mean Square (LMS); Normalized Least Mean Square (NLMS); TMS320C6713; 8-subband adaptive filter
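The NLMS update at the heart of such a canceller can be sketched in a few lines. This is a minimal single-band illustration rather than the paper's 8-subband TMS320C6713 implementation; the filter length, step size, and synthetic echo path below are assumptions chosen for demonstration:

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, n_taps=32, mu=0.5, eps=1e-8):
    """NLMS adaptive filter: estimate the echo path so the filter's
    output cancels the echo component of the microphone signal."""
    w = np.zeros(n_taps)           # adaptive filter coefficients
    residual = np.zeros(len(mic))  # echo-cancelled output
    for n in range(n_taps, len(mic)):
        x = far_end[n - n_taps + 1 : n + 1][::-1]  # most-recent-first taps
        y = w @ x                                  # echo estimate
        e = mic[n] - y                             # residual error
        w += mu * e * x / (eps + x @ x)            # normalized LMS update
        residual[n] = e
    return residual, w

# Synthetic check: the mic signal is the far-end signal passed
# through a short FIR echo path (a stand-in for a real room).
rng = np.random.default_rng(0)
far = rng.standard_normal(4000)
path = np.array([0.6, -0.3, 0.1])
mic = np.convolve(far, path)[:len(far)]
residual, w = nlms_echo_canceller(far, mic, n_taps=8)
# ERLE-style figure of merit: echo power over residual power after convergence.
erle = np.mean(mic[2000:]**2) / np.mean(residual[2000:]**2)
```

With no near-end speech and no noise, the filter converges to the echo path and the residual power collapses, which is exactly what the ERLE metric in the abstract measures.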
Parallel Coordinates: A Tool for Visualizing Multi-dimensional Geometry
A methodology for visualizing analytic and synthetic geometry in R^N is presented. It is based on a system of parallel coordinates which induces a non-projective mapping between N-dimensional and 2-dimensional sets. Hypersurfaces are represented by their planar images, which have some geometrical properties analogous to the properties of the hypersurfaces that they represent. A point ↔ line duality when N = 2 generalizes to lines and hyperplanes, enabling the representation of polyhedra in R^N. The representation of a class of convex and non-convex hypersurfaces is discussed, together with an algorithm for constructing and displaying any interior point. The display shows some local properties of the hypersurface and provides information on the point's proximity to the boundary. Applications to Air Traffic Control, Robotics, Computer Vision, Computational Geometry, Statistics, Instrumentation and other areas are discussed.
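The core mapping the abstract describes, an N-dimensional point becoming a polyline across N parallel vertical axes, can be sketched as follows. This is a minimal illustration; the per-axis normalization and unit axis spacing are assumptions, not the paper's notation:

```python
def parallel_coords_polyline(point, bounds, spacing=1.0):
    """Map an N-dimensional point to its parallel-coordinates polyline:
    axis i is the vertical line x = i*spacing, and the i-th coordinate
    (normalized to [0, 1] by that axis's bounds) gives the height on it.
    The point is drawn as the polyline joining these vertices."""
    verts = []
    for i, (v, (lo, hi)) in enumerate(zip(point, bounds)):
        t = (v - lo) / (hi - lo)      # normalize the value onto its axis
        verts.append((i * spacing, t))
    return verts

# A point in R^4 with per-axis ranges; each coordinate happens to sit
# at the midpoint of its range, so the polyline is horizontal.
p = (2.0, 50.0, 0.5, -1.0)
b = [(0, 4), (0, 100), (0, 1), (-2, 0)]
polyline = parallel_coords_polyline(p, b)
```

A full implementation would also exploit the point ↔ line duality the abstract mentions; this sketch covers only the basic point-to-polyline embedding.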
Improvement of a patient's circadian rhythm sleep disorders by aripiprazole was associated with stabilization of his bipolar illness.
Splitting of the behavioural activity phase has been found in nocturnal rodents with suprachiasmatic nucleus (SCN) coupling disorder. A similar phenomenon was observed in the sleep phase in the diurnal human discussed here, suggesting that there are so-called evening and morning oscillators in the SCN of humans. The present case suffered from bipolar disorder refractory to various treatments, and various circadian rhythm sleep disorders, such as delayed sleep phase, polyphasic sleep, separation of the sleep bout resembling splitting and circabidian rhythm (48 h), were found during prolonged depressive episodes with hypersomnia. Separation of sleep into evening and morning components and delayed sleep-offset (24.69-h cycle) developed when lowering and stopping the dose of aripiprazole (APZ). However, resumption of APZ improved these symptoms in 2 weeks, accompanied by improvement in the patient's depressive state. Administration of APZ may improve various circadian rhythm sleep disorders, as well as improve and prevent manic-depressive episodes, via augmentation of coupling in the SCN network.
Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and x-ray photography.
We give a new method for direct reconstruction of three-dimensional objects from a few electron micrographs taken at angles which need not exceed a range of 60 degrees. The method works for totally asymmetric objects, and requires little computer time or storage. It is also applicable to X-ray photography, and may greatly reduce the exposure compared to current methods of body-section radiography.
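The iterative scheme ART rests on, successively projecting the current estimate onto the hyperplane of each ray-sum equation (Kaczmarz's method), can be sketched as follows. The 2×2 "image" and its ray sums are a toy example invented for illustration, not the paper's micrograph data:

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=500, relax=1.0):
    """ART (Kaczmarz) iteration: cycle through the equations a_i . x = b_i,
    each step projecting the estimate onto one equation's hyperplane.
    Each row of A models the pixels crossed by one ray; b holds ray sums."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Tiny 2x2 image, flattened to 4 unknowns, with row, column and one
# diagonal ray sum as the measured projections (5 equations, rank 4).
img = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([
    [1, 1, 0, 0],   # top row sum
    [0, 0, 1, 1],   # bottom row sum
    [1, 0, 1, 0],   # left column sum
    [0, 1, 0, 1],   # right column sum
    [1, 0, 0, 1],   # diagonal sum
], dtype=float)
b = A @ img
x = art_reconstruct(A, b)
```

Because the toy system is consistent and determines the image uniquely, the cyclic projections converge to the original pixel values; real reconstructions from a limited angular range are under-determined, which is where ART's behavior becomes interesting.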
Toward Black-Box Detection of Logic Flaws in Web Applications
Web applications play a very important role in many critical areas, including online banking, health care, and personal communication. This, combined with the limited security training of many web developers, makes web applications one of the most common targets for attackers. In the past, researchers have proposed a large number of white- and black-box techniques to test web applications for the presence of several classes of vulnerabilities. However, traditional approaches focus mostly on the detection of input validation flaws, such as SQL injection and cross-site scripting. Unfortunately, logic vulnerabilities specific to particular applications remain outside the scope of most of the existing tools and still need to be discovered by manual inspection. In this paper we propose a novel black-box technique to detect logic vulnerabilities in web applications. Our approach is based on the automatic identification of a number of behavioral patterns starting from a few network traces in which users interact with a certain application. Based on the extracted model, we then generate targeted test cases following a number of common attack scenarios. We applied our prototype to seven real-world e-commerce web applications, discovering ten very severe and previously unknown logic vulnerabilities.
Classifying and Characterizing Query Intent
Understanding the intent underlying user queries may help personalize search results and improve user satisfaction. In this paper, we develop a methodology for using ad clickthrough logs, query-specific information, and the content of search engine result pages to study characteristics of query intents, especially commercial intent. The findings of our study suggest that ad clickthrough features, query features, and the content of search engine result pages are together effective in detecting query intent. We also study the effect of query type and the number of displayed ads on the average clickthrough rate. As a practical application of our work, we show that modeling query intent can improve the accuracy of predicting ad clickthrough for previously unseen queries.
Indian Currency Denomination Recognition for Visually Impaired
Over 37 million people in the world are visually impaired (VI). With the continuous advancement in technology, it is now possible to build viable technological solutions to help such disadvantaged people in their day-to-day life. In this paper we propose a solution that helps visually impaired people identify Indian currency notes, which in turn simplifies monetary transactions for them. The proposed method includes image processing techniques, which are elaborated in this paper.
Using the h-index to rank influential information scientists
We apply a new bibliometric measure, the h-index (Hirsch, 2005), to the literature of information science. Faculty rankings based on raw citation counts are compared with those based on h-counts. There is a strong positive correlation between the two sets of rankings. We show how the h-index can be used to express the broad impact of a scholar’s research output over time in more nuanced fashion than straight citation counts.
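The h-count itself is simple to compute: sort citation counts in descending order and take the largest rank h whose count is at least h. A minimal sketch (the citation vectors below are invented for illustration):

```python
def h_index(citations):
    """h-index (Hirsch, 2005): the largest h such that the author has
    h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank   # this paper still has >= rank citations
        else:
            break      # counts only decrease from here
    return h

# Ten papers; one highly cited paper dominates the raw count,
# but the h-index reflects the breadth of impact instead.
h = h_index([50, 18, 7, 6, 5, 4, 2, 1, 0, 0])
```

This is why, as the abstract notes, h-count rankings correlate with raw-citation rankings yet weigh a sustained body of cited work differently from a single blockbuster paper.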
Accessibility Evaluation of Classroom Captions
Real-time captioning enables deaf and hard of hearing (DHH) people to follow classroom lectures and other aural speech by converting it into visual text with less than a five second delay. Keeping the delay short allows end-users to follow and participate in conversations. This article focuses on the fundamental problem that makes real-time captioning difficult: sequential keyboard typing is much slower than speaking. We first surveyed the audio characteristics of 240 one-hour-long captioned lectures on YouTube, such as speed and duration of speaking bursts. We then analyzed how these characteristics impact caption generation and readability, considering specifically our human-powered collaborative captioning approach. We note that most of these characteristics are also present in more general domains. For our caption comparison evaluation, we transcribed a classroom lecture in real-time using all three captioning approaches. We recruited 48 participants (24 DHH) to watch these classroom transcripts in an eye-tracking laboratory. We presented these captions in a randomized, balanced order. We show that both hearing and DHH participants preferred and followed collaborative captions better than those generated by automatic speech recognition (ASR) or professionals due to the more consistent flow of the resulting captions. These results show the potential to reliably capture speech even during sudden bursts of speed, as well as for generating “enhanced” captions, unlike other human-powered captioning approaches.
Optic Disk Detection in Fundus Image Based on Structured Learning
Automated optic disk (OD) detection plays an important role in developing computer-aided systems for eye diseases. In this paper, we propose an algorithm for OD detection based on structured learning. A classifier model is trained with structured learning and used to obtain the edge map of the OD. Thresholding is performed on the edge map to obtain a binary image of the OD. Finally, the circle Hough transform is carried out to approximate the boundary of the OD by a circle. The proposed algorithm has been evaluated on three public datasets and obtained promising results (an area overlap and Dice coefficient of 0.8605 and 0.9181, respectively, an accuracy of 0.9777, and true positive and false positive fractions of 0.9183 and 0.0102), showing that the proposed method is very competitive with state-of-the-art methods and is a reliable tool for the segmentation of the OD.
Relationship Banking : What Do We Know ?
This paper briefly reviews the contemporary literature on relationship banking. We start out with a discussion of the raison d'être of banks in the context of the financial intermediation literature. From there we discuss how relationship banking fits into the core economic services provided by banks and point at its costs and benefits. This leads to an examination of the interrelationship between the competitive environment and relationship banking as well as a discussion of the empirical evidence. Journal of Economic Literature Classification Numbers: G20, G21, L10. © 2000 Academic Press
Combined Utilisation of Rapid Assessment Procedures for Loiasis (RAPLOA) and Onchocerciasis (REA) in Rain forest Villages of Cameroon
BACKGROUND: Individuals with high microfilarial loads of Loa loa are at increased risk of serious adverse neurologic events (SAEs) following ivermectin treatment against onchocerciasis. RAPLOA (Rapid Assessment Procedure for Loiasis), a newly developed rapid assessment procedure that relates the prevalence of a key clinical manifestation of loiasis (history of eye worm) to the level of endemicity of the infection (prevalence of high intensity), is a very useful tool to identify areas at potential risk of L. loa post-ivermectin-treatment encephalopathy. In a perspective of treatment decision making in areas where loiasis and onchocerciasis are co-endemic, it would be advantageous (in both time and cost savings) for national onchocerciasis control programmes to use RAPLOA and the rapid epidemiologic assessment for onchocerciasis (REA) in combination in given surveys. Since each of the two rapid assessment tools has its own specificities, the workability of combining the two methods needed to be tested. METHODS: We worked in 10 communities of a forest area presumed co-endemic for loiasis and onchocerciasis in the North-West Province of Cameroon, where mass treatment with ivermectin had not been carried out. A four-step approach was used, comprising: (i) generating data on the prevalence and intensity of loiasis and onchocerciasis in an area where such information is scarce; (ii) testing the relationship between the L. loa microfilaraemia prevalence and the RAPLOA prevalence; (iii) testing the relationship between the O. volvulus microfiladermia prevalence and the REA prevalence; (iv) testing the workability of combining RAPLOA and REA in study teams in which a single individual can perform the interview for RAPLOA and the nodule palpation for REA. RESULTS: The microfilaraemia prevalence of loiasis in communities ranged from 3.6% to 14.3%. Six (0.61%) individuals had L. loa microfilarial loads above 8,000 mf/ml, but none of them attained 30,000 mf/ml, the threshold above which the risk of developing neurologic SAEs after ivermectin treatment is very high. None of the communities surveyed had a RAPLOA prevalence above 40%. All the communities had a microfiladermia prevalence above 60%. The microfiladermia results could be confirmed by the rapid epidemiologic method (nodule palpation), with all 10 communities having a REA prevalence above 20%. For the first time, this study has demonstrated that the two rapid assessment procedures for loiasis and onchocerciasis can be carried out simultaneously by a survey team in which a single individual administers the questionnaire for RAPLOA and performs the nodule palpation for REA. CONCLUSION: This study has: (i) revealed that the Momo valley of the North-West Province of Cameroon is hyperendemic for onchocerciasis but has a lower level of endemicity for L. loa; (ii) confirmed the previously established relationships between RAPLOA and the L. loa microfilaraemia prevalence on the one hand, and between REA and the O. volvulus microfiladermia prevalence on the other; and (iii) shown that RAPLOA and REA can be used simultaneously for the evaluation of loiasis and onchocerciasis endemicity in areas targeted by the African Programme for Onchocerciasis Control for community-directed treatment with ivermectin (CDTI).
Radiomics: Images Are More than Pictures, They Are Data
In the past decade, the field of medical image analysis has grown exponentially, with an increased number of pattern recognition tools and an increase in data set sizes. These advances have facilitated the development of processes for high-throughput extraction of quantitative features that result in the conversion of images into mineable data and the subsequent analysis of these data for decision support; this practice is termed radiomics. This is in contrast to the traditional practice of treating medical images as pictures intended solely for visual interpretation. Radiomic data contain first-, second-, and higher-order statistics. These data are combined with other patient data and are mined with sophisticated bioinformatics tools to develop models that may potentially improve diagnostic, prognostic, and predictive accuracy. Because radiomics analyses are intended to be conducted with standard of care images, it is conceivable that conversion of digital images to mineable data will eventually become routine practice. This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer.
The need of high resolution μ-X-ray CT in dendrochronology and in wood identification
X-ray computed tomography is a widely used method for nondestructive visualization of the interior of different samples, including wooden material. Unlike typical applications, very high resolution is needed to use such CT images in dendrochronology and to identify wood species. In dendrochronology, large samples (up to 50 cm) must be scanned, and the required resolution is, depending on the species, about 20 μm. In wood identification, usually very small samples are scanned, but wood anatomical characters less than 1 μm in width have to be visualized. This paper presents four examples of X-ray CT scanned images used for dendrochronology and wood identification.
What you look at is what you get: gaze-based user interfaces
Envisioning, designing, and implementing the user interface require a comprehensive understanding of interaction technologies. In this forum we scout trends and discuss new technologies with the potential to influence interaction design. --- Albrecht Schmidt, Editor
Experimental Assessment of Private Information Disclosure in LTE Mobile Networks
Open source software running on SDR (Software Defined Radio) devices now allows building a full-fledged mobile network at low cost. These novel tools open up exciting possibilities to analyse and verify by experiment the behaviour of existing and emerging mobile networks in new lab environments, for instance at universities. We use SDR equipment and open source software to analyse the feasibility of disclosing private information that is sent over the LTE access network. We verify by experiments that subscriber identity information can be obtained both passively, by listening on the radio link, and actively, by running low-cost, hard-to-detect rogue base stations that impersonate the commercial network. Moreover, we implement a downgrade attack (to non-LTE networks) with minimal changes to the open source software.
Using Machine Learning to Detect Cyberbullying
Cyber bullying is the use of technology as a medium to bully someone. Although it has been an issue for many years, the recognition of its impact on young people has recently increased. Social networking sites provide a fertile medium for bullies, and teens and young adults who use these sites are vulnerable to attacks. Through machine learning, we can detect language patterns used by bullies and their victims, and develop rules to automatically detect cyber bullying content. The data we used for our project was collected from the website Formspring.me, a question-and-answer formatted website that contains a high percentage of bullying content. The data was labeled using a web service, Amazon's Mechanical Turk. We used the labeled data, in conjunction with machine learning techniques provided by the Weka tool kit, to train a computer to recognize bullying content. Both a C4.5 decision tree learner and an instance-based learner were able to identify the true positives with 78.5% accuracy.
Nurse/physician communication through a sensemaking lens: shifting the paradigm to improve patient safety.
Physician-nurse communication has been identified as one of the main obstacles to progress in patient safety. Breakdowns in communication between physicians and nurses often result in errors, many of which are preventable. Recent research into nurse/physician communication has borrowed heavily from team literature, tending to study communication as one behavior in a larger cluster of behaviors. The multicluster approach to team research has not provided enough analysis of and attention to communication alone. Research into communication specifically is needed to understand its crucial role in teamwork and safety. A critique of the research literature on nurse/physician communication published since 1992 revealed 3 dominant themes: settings and context, consensus building, and conflict resolution. A fourth implicit theme, the temporal nature of communication, emerged as well. These themes were used to frame a discussion on sensemaking: an iterative process arising from dialogue when 2 or more people share their unique perspectives. As a theoretical model, sensemaking may offer an alternative lens through which to view the phenomenon of nurse/physician communication and advance our understanding of how nurse/physician communication can promote patient safety. Sensemaking may represent a paradigm shift with the potential to affect 2 spheres of influence: clinical practice and health care outcomes. Sensemaking may also hold promise as an intervention because through sensemaking consensus may be built and errors possibly prevented. Engaging in sensemaking may overcome communication barriers without realigning power bases, incorporate contextual influences without drawing attention away from communicators, and inform actions arising from communication.
CompNet: Complementary Segmentation Network for Brain MRI Extraction
Brain extraction is a fundamental step for most brain imaging studies. In this paper, we investigate the problem of skull stripping and propose complementary segmentation networks (CompNets) to accurately extract the brain from T1-weighted MRI scans, for both normal and pathological brain images. The proposed networks are designed in the framework of encoder-decoder networks and have two pathways to learn features from both the brain tissue and its complementary part located outside of the brain. The complementary pathway extracts the features in the non-brain region and leads to a robust solution to brain extraction from MRIs with pathologies, which do not exist in our training dataset. We demonstrate the effectiveness of our networks by evaluating them on the OASIS dataset, resulting in the state of the art performance under the two-fold cross-validation setting. Moreover, the robustness of our networks is verified by testing on images with introduced pathologies and by showing its invariance to unseen brain pathologies. In addition, our complementary network design is general and can be extended to address other image segmentation problems with better generalization.
Artificial neural networks - theory and applications
This comprehensive tutorial on artificial neural networks covers all the important neural network architectures as well as the most recent theory, e.g., pattern recognition, statistical theory, and other mathematical prerequisites. A broad range of applications is provided for each of the architectures. A related volume, Artificial Neural Networks for Intelligent Manufacturing (Cihan H. Dagli, 1994, Technology & Engineering, 469 pages), introduces the then newly emerging technology of artificial neural networks and demonstrates its use in intelligent manufacturing systems.
Interferon alfa for chronic hepatitis B infection: increased efficacy of prolonged treatment. The European Concerted Action on Viral Hepatitis (EUROHEP).
Interferon alfa (IFN-alpha) is the primary treatment for chronic hepatitis B. The standard duration of IFN-alpha therapy is considered 16 weeks; however, the optimal treatment length is still poorly defined. We evaluated the efficacy and acceptability of prolonged IFN-alpha treatment in patients with chronic hepatitis B. To investigate whether treatment prolongation could enhance the rate of hepatitis B e antigen (HBeAg) seroconversion, we conducted a prospective, controlled, multicenter trial in which all patients were treated with a standard regimen of 10 million units IFN-alpha 3 times per week over 16 weeks. Patients who were still HBeAg-positive after 16 weeks of therapy were randomized to prolongation of the identical regimen up to 32 weeks (prolonged therapy) or discontinuation of treatment (standard therapy). Among the 162 patients who entered the study, 27 (17%) were HBeAg-negative after the first 16 weeks of treatment, and 118 were randomized to standard or prolonged therapy. After randomization, a response (HBeAg seroconversion and sustained hepatitis B virus [HBV]-DNA negativity) was observed in 7 of the 57 (12%) patients assigned to standard therapy versus 17 of the 61 (28%) patients assigned to prolonged therapy (P =.04). A low level of viral replication after 16 weeks of treatment, as indicated by serum HBV-DNA values under 10 pg/mL, was found to be the only independent predictor of response (52% vs. 0%; P <.001) during prolonged therapy. The prolonged IFN-alpha schedule was well tolerated in the large majority of patients. In chronic hepatitis B, prolongation of IFN-alpha therapy up to 32 weeks is superior to a standard course of 16 weeks. Those patients who exhibit a low level of viral replication at the end of the standard regimen benefit most from prolonged treatment.
The first facial expression recognition and analysis challenge
Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability have come some way; for instance, there exist a number of commonly used facial expression databases. However, the lack of a common evaluation protocol and of sufficient details to reproduce reported individual results makes it difficult to compare systems to each other. This in turn hinders the progress of the field. A periodical challenge in Facial Expression Recognition and Analysis would allow this comparison in a fair manner. It would clarify how far the field has come, and would allow us to identify new goals, challenges and targets. In this paper we present the first challenge in automatic recognition of facial expressions, to be held during the IEEE conference on Face and Gesture Recognition 2011 in Santa Barbara, California. Two sub-challenges are defined: one on AU detection and another on discrete emotion detection. We outline the evaluation protocol, the data used, and the results of a baseline method for the two sub-challenges.
Sentinel lymph node biopsy in staging small (up to 15 mm) breast carcinomas. Results from a European multi-institutional study
Sentinel lymph node (SLN) biopsy has become the preferred method for the nodal staging of early breast cancer, but controversy exists regarding its universal use and consequences in small tumors. 2929 cases of breast carcinomas not larger than 15 mm and staged with SLN biopsy with or without axillary dissection were collected from the authors' institutions. The pathology of the SLNs included multilevel hematoxylin and eosin (HE) staining. Cytokeratin immunohistochemistry (IHC) was commonly used for cases negative with HE staining. Variables influencing SLN involvement and non-SLN involvement were studied with logistic regression. Factors that influenced SLN involvement included tumor size, multifocality, grade and age. Small tumors up to 4 mm (including in situ and microinvasive carcinomas) seem to have SLN involvement in less than 10% of cases. Non-SLN metastases were associated with tumor grade, the ratio of involved SLNs and the type of SLN involvement. Isolated tumor cells were not likely to be associated with further nodal load, whereas micrometastases had some subsets with a low risk of non-SLN involvement and subsets with a higher proportion of further nodal spread. In situ and microinvasive carcinomas have a very low risk of SLN involvement; therefore, these tumors might not need SLN biopsy for staging, and this may be the approach used for very small invasive carcinomas. If an SLN is involved, isolated tumor cells are rarely if ever associated with non-SLN metastases, and subsets of micrometastatic SLN involvement may be approached similarly. With macrometastases the risk of non-SLN involvement increases, and further axillary treatment should generally be indicated.
A Review of Technical Approaches to Realizing Near-Field Communication Mobile Payments
This article describes and compares four approaches to storing payment keys and executing payment applications on mobile phones via near-field communication at the point of sale. Even though the comparison hinges on security--specifically, how well the keys and payment application are protected against misuse--other criteria such as hardware requirements, availability, management complexity, and performance are also identified and discussed.
The responsible manager.
This contractor document was prepared for the U.S. Department of Energy (DOE), but has not undergone programmatic, policy, or publication review, and is provided for information only. The document provides preliminary information that may change based on new information or analysis, and represents a conservative treatment of parameters and assumptions to be used specifically for Total System Performance Assessment analyses. The document is a preliminary lower level contractor document and is not intended for publication or wide distribution.
Health care usage in Dutch systemic lupus erythematosus patients.
As a first step in the improvement of the organization of care for patients with systemic lupus erythematosus (SLE) we studied their health care usage and its determinants. A questionnaire was sent to 161 outpatients of the rheumatology clinic of a Dutch university hospital. The questionnaire comprised questions on health care usage, quality of life and sociodemographic characteristics. Disease characteristics were extracted from the medical record. Among the 102 responders (63% response rate) the proportions of patients reporting contacts with a rheumatologist because of SLE since onset of the disease and over the past 12 months were 100% and 83%, respectively. These proportions were 93% and 68% for all other medical specialists, 88% and 44% for the general practitioner, 78% and 44% for any health professional, 29% and 9% for care at home, 48% and 17% for hospital admissions and 29% and 2% for day-patient care. Younger age, major organ involvement, the use of immunosuppressants and worse physical functioning were found to be significantly associated with greater health care use. This study demonstrated that health care usage by SLE patients is substantial and involves a variety of health care services. Further research should be directed at patients' satisfaction and patients' needs regarding the optimal organization of integrated, multidisciplinary services that are accessible for SLE patients of all ages.
Economic Interests and Sectoral Relations: The Undevelopment of Capitalism in Fifteenth‐Century Tuscany
Many preconditions for a rapid transition to industrial capitalism existed in Tuscany in the late medieval/early modern period, including relatively efficient agricultural production; a well‐developed, commercial manufacturing sector; the absence of a powerful feudal nobility and feudal obligations; a large, precocious urban economy; and the development of a territorial state. No such transition occurred, however. Previous explanations for this are inadequate, because they discount the strength of the Tuscan economy or downplay the presence of these preconditions. To explain the Tuscan outcome, this article draws on sectoral theories (agriculture vs. manufacturing) from the neoclassical and Marxist literature. Since these theories often give the wrong prediction, because they are based on formal attributes of actors, the author combines them with a Weberian conceptualization of substantively specific economic interests.
Brain-computer interfaces based on the steady-state visual-evoked response.
The Air Force Research Laboratory has implemented and evaluated two brain-computer interfaces (BCIs) that translate the steady-state visual evoked response into a control signal for operating a physical device or computer program. In one approach, operators self-regulate the brain response; the other approach uses multiple evoked responses.
Historical Background of Tai-Lao Settlement and Movement to Thailand, Part 2
The historical background of the settlement of the Tai-Lao ethnic group and its movement to Thailand is one part of the research project “Holistic Study for the Adaptability in the Different Contest of Tai-Lao Ethnic in the Central Region Basin of Thailand” by Professor Ornsiri Panin. The study examines the traditional, historical settlement as well as the movement of the Tai-Lao ethnic group to Thailand. The historical context is the basis for understanding what followed the group's settlement in Thailand; its main populations, and the target groups of the research project, are the Lao Wieng, Lao Khrang, Lao Phuan, and Lao Song (Tai Dum).
It was found from the historical documents and field studies, with respect to geographical aspects, that the movement of the Tai-Lao ethnic group to the basins of central Thailand was not voluntary but resulted from Siam's wars of expansion. The communal settlement of the Tai Dum, Tai Yuan, Lao Wieng, Lao Khrang, and Lao Phuan was dictated by the noble heads, so the people could not select the areas of settlement by themselves. The new settlement in the basins of central Thailand differed completely from their origins in Laos and Viet Nam in terms of natural surroundings and ecological systems. As a result, the people needed to adjust themselves to new living circumstances. Nevertheless, close relationships and networks formed within the communities and among people of the same ethnic group, enabling them to preserve their cultures, traditions, and languages, and contributing to a major restructuring of the economy and society in the early Rattanakosin period.
Automatic Content-Aware Color and Tone Stylization
We introduce a new technique that automatically generates diverse, visually compelling stylizations for a photograph in an unsupervised manner. We achieve this by learning style ranking for a given input using a large photo collection and selecting a diverse subset of matching styles for final style transfer. We also propose an improved technique that transfers the global color and tone of the chosen exemplars to the input photograph while avoiding the common visual artifacts produced by the existing style transfer methods. Together, our style selection and transfer techniques produce compelling, artifact-free results on a wide range of input photographs, and a user study shows that our results are preferred over other techniques.
Systemic capillary leak syndrome associated with compartment syndrome.
Systemic capillary leak syndrome is characterized by recurrent hypovolemic shock attributable to increased systemic capillary leakage. A 26-year-old woman was admitted because of recurrent episodes of hypovolemic shock. Hemoconcentration, hypoalbuminemia, and monoclonal gammopathy were observed. We diagnosed systemic capillary leak syndrome. Three years later, she again had an attack of systemic capillary leak syndrome complicated with pretibial compartment syndrome. This case emphasizes the importance of muscle compartment pressure monitoring during volume resuscitation in patients with systemic capillary leak syndrome.
Kinetics of phase transformations in the peridynamic formulation of continuum mechanics
We study the kinetics of phase transformations in solids using the peridynamic formulation of continuum mechanics. The peridynamic theory is a nonlocal formulation that does not involve spatial derivatives, and is a powerful tool to study defects such as cracks and interfaces. We apply the peridynamic formulation to the motion of phase boundaries in one dimension. We show that unlike the classical continuum theory, the peridynamic formulation does not require any extraneous constitutive laws such as the kinetic relation (the relation between the velocity of the interface and the thermodynamic driving force acting across it) or the nucleation criterion (the criterion that determines whether a new phase arises from a single phase). Instead this information is obtained from inside the theory simply by specifying the inter-particle interaction. We derive a nucleation criterion by examining nucleation as a dynamic instability. We find the induced kinetic relation by analyzing the solutions of impact and release problems, and also directly by viewing phase boundaries as traveling waves. We also study the interaction of a phase boundary with an elastic non-transforming inclusion in two dimensions. We find that phase boundaries remain essentially planar with little bowing. Further, we find a new mechanism whereby acoustic waves ahead of the phase boundary nucleate new phase boundaries at the edges of the inclusion while the original phase boundary slows down or stops. Transformation proceeds as the freshly nucleated phase boundaries propagate leaving behind some untransformed martensite around the inclusion.
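To make the derivative-free character of the formulation concrete, here is a minimal one-dimensional, bond-based peridynamic bar with a linear microelastic bond force and explicit time stepping. All parameters and the constitutive choice are illustrative assumptions, not those used in the paper:

```python
import numpy as np

# Minimal 1-D peridynamic bar; every number below is an assumption
# chosen only for illustration.
N, dx = 100, 0.01           # number of nodes and grid spacing
horizon = 3 * dx            # nonlocal interaction radius (the "horizon")
c = 1.0e4                   # bond micromodulus (assumed)
rho = 1.0                   # mass density
x = np.arange(N) * dx       # reference configuration
u = np.zeros(N)             # displacements
v = np.zeros(N)             # velocities
u[:5] = 1e-3                # small disturbance at the left end

def pd_force(u):
    """Net force density at each node: a sum of pairwise bond forces over
    the horizon. No spatial derivatives appear -- only differences, which
    is what lets peridynamics accommodate discontinuities."""
    f = np.zeros(N)
    m = int(round(horizon / dx))
    for i in range(N):
        for j in range(max(0, i - m), min(N, i + m + 1)):
            if j == i:
                continue
            xi = x[j] - x[i]                 # signed reference bond
            stretch = (u[j] - u[i]) / xi     # 1-D bond stretch
            f[i] += c * stretch * np.sign(xi) * dx
    return f

dt = 1e-5
for _ in range(100):        # explicit time integration
    v += dt * pd_force(u) / rho
    u += dt * v
```

Because the pairwise bond forces are antisymmetric, the internal forces sum to zero and the bar's total momentum is conserved by construction.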
Towards a Deeper Understanding of Adversarial Losses
Recent work has proposed various adversarial losses for training generative adversarial networks. Yet, it remains unclear which types of functions are valid adversarial loss functions, and how these loss functions perform against one another. In this paper, we aim to gain a deeper understanding of adversarial losses by decoupling the effects of their component functions and regularization terms. We first derive some necessary and sufficient conditions on the component functions under which the adversarial loss is a divergence-like measure between the data and the model distributions. In order to systematically compare different adversarial losses, we then propose DANTest—a new, simple framework based on discriminative adversarial networks. With this framework, we evaluate an extensive set of adversarial losses by combining different component functions and regularization approaches. This study leads to some new insights into adversarial losses. For reproducibility, all source code is available at https://github.com/salu133445/dan.
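To illustrate what "decoupling the component functions" means, the following sketch writes three textbook discriminator losses so that the functions applied to the discriminator's outputs on real and fake samples are explicit. These are standard losses from the GAN literature, not the paper's DANTest framework:

```python
import numpy as np

def sig(z):
    """Logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-z))

# Each loss applies a different pair of component functions to the raw
# discriminator outputs (logits) on real and fake samples.
def minimax_loss(d_real, d_fake):
    """Classic GAN discriminator loss: -E[log D(x)] - E[log(1 - D(G(z)))]."""
    return -(np.mean(np.log(sig(d_real))) + np.mean(np.log(1.0 - sig(d_fake))))

def hinge_loss(d_real, d_fake):
    """Hinge loss, as used in e.g. spectral-norm GANs."""
    return np.mean(np.maximum(0.0, 1.0 - d_real)) + np.mean(np.maximum(0.0, 1.0 + d_fake))

def wasserstein_loss(d_real, d_fake):
    """Wasserstein critic loss: -(E[D(x)] - E[D(G(z))])."""
    return -(np.mean(d_real) - np.mean(d_fake))

# A near-perfect discriminator drives the minimax and hinge losses toward 0.
d_real, d_fake = np.array([5.0, 6.0]), np.array([-5.0, -6.0])
```

Swapping the component functions while holding everything else fixed is exactly the kind of controlled comparison the paper's framework systematizes.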
Risk scoring system to predict contrast induced nephropathy following percutaneous coronary intervention.
BACKGROUND Contrast induced nephropathy (CIN) is associated with significant morbidity and mortality after percutaneous coronary intervention (PCI). The aim of this study is to evaluate the collective probability of CIN in Indian population by developing a scoring system of several identified risk factors in patients undergoing PCI. METHODS This is a prospective single center study of 1200 consecutive patients who underwent PCI from 2008 to 2011. Patients were randomized in 3:1 ratio into development (n = 900) and validation (n = 300) groups. CIN was defined as an increase of ≥25% and/or ≥0.5 mg/dl in serum creatinine at 48 hours after PCI when compared to baseline value. Seven independent predictors of CIN were identified using logistic regression analysis - amount of contrast, diabetes with microangiopathy, hypotension, peripheral vascular disease, albuminuria, glomerular filtration rate (GFR) and anemia. A formula was then developed to identify the probability of CIN using the logistic regression equation. RESULTS The mean (±SD) age was 57.3 (±10.2) years. 83.6% were males. The total incidence of CIN was 9.7% in the development group. The total risk of renal replacement therapy in the study group is 1.1%. Mortality is 0.5%. The risk scoring model correlated well in the validation group (incidence of CIN was 8.7%, sensitivity 92.3%, specificity 82.1%, c statistic 0.95). CONCLUSION A simple risk scoring equation can be employed to predict the probability of CIN following PCI, applying it to each individual. More vigilant preventive measures can be applied to the high risk candidates.
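As a sketch of how such a risk equation converts a patient's risk factors into a CIN probability, here is the generic logistic-regression form. The intercept and coefficients below are invented for illustration only; the study fits its own values for the seven predictors:

```python
import math

# Hypothetical coefficients for illustration only -- NOT those fitted
# in the study.
INTERCEPT = -4.0
COEFFS = {
    "contrast_ml": 0.004,              # per millilitre of contrast
    "diabetic_microangiopathy": 1.2,
    "hypotension": 1.0,
    "peripheral_vascular_disease": 0.8,
    "albuminuria": 0.9,
    "low_gfr": 1.1,                    # GFR below a chosen threshold
    "anemia": 0.7,
}

def cin_probability(patient):
    """Logistic-regression risk equation:
    p = 1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = INTERCEPT + sum(b * patient.get(k, 0) for k, b in COEFFS.items())
    return 1.0 / (1.0 + math.exp(-z))

# A patient given 250 ml of contrast, with low GFR and anemia:
p = cin_probability({"contrast_ml": 250, "low_gfr": 1, "anemia": 1})  # ≈ 0.23
```

Applying the equation per patient, as the conclusion suggests, flags high-risk candidates for more vigilant preventive measures.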
Biomechanical Design of a Powered Ankle-Foot Prosthesis
Although the potential benefits of a powered ankle-foot prosthesis have been well documented, no one has successfully developed such a prosthesis and verified that it can improve amputee gait compared to a conventional passive-elastic prosthesis. One of the main hurdles that hinder such a development is the challenge of building an ankle-foot prosthesis that matches the size and weight of the intact ankle but still provides a sufficiently large instantaneous power output and torque to propel an amputee. In this paper, we present a novel, powered ankle-foot prosthesis that overcomes these design challenges. The prosthesis comprises a unidirectional spring configured in parallel with a force-controllable actuator with series elasticity. With this architecture, the ankle-foot prosthesis matches the size and weight of the human ankle and is shown to satisfy the restrictive design specifications dictated by normal human ankle walking biomechanics.
Fast and accurate analysis of scanning slotted waveguide arrays
An iterative procedure is presented for a fast and accurate analysis of scanning slotted waveguide arrays, including the effects of mutual coupling and its dependence on the scanning angle. The analysis also takes into account the input transitions from the feeding network, with each slotted waveguide excited either at one end or at the centre. The procedure is applied to the simple case of a linear array of four slots, for which a full-wave analysis could be carried out for comparison. The proposed method is shown to provide excellent agreement with the full-wave computation while requiring extremely short computation times.
New immature hominin fossil from European Lower Pleistocene shows the earliest evidence of a modern human dental development pattern.
Here we present data concerning the pattern of dental development derived from the microcomputed tomography (microCT) study of a recently discovered immature hominin mandible with a mixed dentition recovered from the TD6 level of the Gran Dolina Lower Pleistocene cave site in Sierra de Atapuerca, northern Spain. These data confirm our previous results that nearly 1 million years ago at least one European hominin species had a fully modern pattern of dental development, with a clear slowdown in the development of the molar field relative to the anterior dental field. Furthermore, using available information about enamel formation times and root extension rates in chimpanzees, early hominins, and modern humans, we have estimated that the formation time of the upper and lower first molars of individual 5 (H5) from TD6, which had just erupted at the time of the death of this individual, ranges between 5.3 and 6.6 y. Therefore, the eruption time of the first permanent molars (M1) in the TD6 hominins was within the range of variation of modern human populations. Because the time of M1 eruption in primates is a robust marker of life history, we suggest, as a working hypothesis, that these hominins had a prolonged childhood in the range of the variation of modern humans. If this hypothesis is true, it implies that the appearance in Homo of this important developmental biological feature and an associated increase in brain size preceded the development of the neocortical areas leading to the cognitive capabilities that are thought to be exclusive to Homo sapiens.
Automating extract class refactoring: an improved method and its evaluation
During software evolution the internal structure of the system undergoes continuous modification. These continuous changes push the source code away from its original design, often reducing its quality, including class cohesion. In this paper we propose a method for automating the Extract Class refactoring. The proposed approach analyzes (structural and semantic) relationships between the methods in a class to identify chains of strongly related methods. The identified method chains are used to define new classes with higher cohesion than the original class, while preserving the overall coupling between the new classes and the classes interacting with the original class. The proposed approach was first assessed in an artificial scenario in order to calibrate its parameters; the same data were used to compare the new approach with previous work. It was then empirically evaluated on real Blobs from existing open source systems in order to assess how good and useful software engineers consider the proposed refactoring solutions and how well the proposed refactorings approximate refactorings done by the original developers. We found that the new approach outperforms a previously proposed approach and that developers find the proposed solutions useful in guiding refactorings.
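A toy sketch of the chain-identification idea: measure pairwise similarity between methods (here a Jaccard overlap of the attributes each method accesses, a deliberately crude stand-in for the paper's combined structural and semantic measures) and group transitively similar methods into candidate classes:

```python
def method_similarity(m1, m2):
    """Jaccard overlap of the instance attributes two methods touch --
    an assumed, simplified proxy for structural/semantic similarity."""
    a, b = set(m1["attrs"]), set(m2["attrs"])
    return len(a & b) / len(a | b) if a | b else 0.0

def extract_chains(methods, threshold=0.3):
    """Greedy transitive grouping: a method joins the first chain that
    contains some method it is sufficiently similar to; otherwise it
    starts a new chain. Each chain is a candidate extracted class."""
    chains = []
    for m in methods:
        for chain in chains:
            if any(method_similarity(m, other) > threshold for other in chain):
                chain.append(m)
                break
        else:
            chains.append([m])
    return chains
```

On a Blob mixing, say, geometry methods (sharing `x`, `y`) with persistence methods (sharing `db`, `conn`), this grouping splits the two concerns into separate chains.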
A blueprint for democratic citizenship education in South African public schools : African teachers' perceptions of good citizenship
The notion that South African public schools have a distinctively civic mission is recognised in all national education policy documents published since the first democratic election in 1994. The teaching of democratic citizenship education in public schools is a newcomer to South Africa. The purpose of this article is to summarise scholars' views on the attributes of a good citizen and the role of the school in this regard, and to report the outcomes of a research project on African teachers' perceptions of the factors contributing to good citizenship. Ascertaining what scholars and African teachers think provides a reasonable starting point for addressing the issue of education for democratic citizenship in South African public schools.
Distinct autophagosomal-lysosomal fusion mechanism revealed by thapsigargin-induced autophagy arrest.
Autophagy, a catabolic pathway that delivers cellular components to lysosomes for degradation, can be activated by stressful conditions such as nutrient starvation and endoplasmic reticulum (ER) stress. We report that thapsigargin, an ER stressor widely used to induce autophagy, in fact blocks autophagy. Thapsigargin does not affect autophagosome formation but leads to accumulation of mature autophagosomes by blocking autophagosome fusion with the endocytic system. Strikingly, thapsigargin has no effect on endocytosis-mediated degradation of epidermal growth factor receptor. Molecularly, while both Rab7 and Vps16 are essential regulatory components for endocytic fusion with lysosomes, we found that Rab7 but not Vps16 is required for complete autophagy flux, and that thapsigargin blocks recruitment of Rab7 to autophagosomes. Therefore, autophagosomal-lysosomal fusion must be governed by a distinct molecular mechanism compared to general endocytic fusion.
GPS Error Correction With Pseudorange Evaluation Using Three-Dimensional Maps
The accurate position of a pedestrian is important and useful information for statistics, advertisement, and safety in different applications. Although the GPS chip in a smartphone is currently the most convenient device for obtaining positions, it still suffers from the effects of multipath and non-line-of-sight propagation in urban canyons. These reflections can greatly degrade the performance of a GPS receiver. This paper describes an approach to estimating a pedestrian's position with the aid of a 3-D map and a ray-tracing method. The proposed approach first distributes a number of position candidates around a reference position. The weighting of each position candidate is evaluated based on the similarity between the simulated pseudoranges and the observed pseudoranges. Simulated pseudoranges are calculated using a ray-tracing simulation and a 3-D map. Finally, the proposed method was verified through field experiments in an urban canyon in Tokyo. According to the results, the proposed approach successfully estimates the reflected and direct paths, so that the estimate appears very close to the ground truth, whereas the result of a commercial GPS receiver is far from the ground truth. The results show that the proposed method has a smaller error distance than the conventional method.
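The candidate-weighting scheme can be sketched as follows. The `simulate` callback stands in for the ray-tracing step over the 3-D map, and the Gaussian weighting function and every parameter value are assumptions made for illustration:

```python
import math
import random

def similarity(simulated, observed, sigma):
    """Gaussian weight for the residual between simulated and observed
    pseudoranges (metres); sigma is an assumed measurement-noise scale."""
    sq = sum((s - o) ** 2 for s, o in zip(simulated, observed))
    return math.exp(-sq / (2.0 * sigma ** 2 * len(simulated)))

def estimate_position(reference, observed, simulate, n=200, spread=20.0, sigma=5.0):
    """Scatter position candidates around the reference fix, weight each
    by pseudorange similarity, and return the weighted mean position.
    `simulate(pos)` stands in for ray-tracing against the 3-D city map."""
    rng = random.Random(0)  # fixed seed for reproducibility
    cands = [(reference[0] + rng.uniform(-spread, spread),
              reference[1] + rng.uniform(-spread, spread)) for _ in range(n)]
    weights = [similarity(simulate(c), observed, sigma) for c in cands]
    total = sum(weights) or 1.0
    return (sum(w * c[0] for w, c in zip(weights, cands)) / total,
            sum(w * c[1] for w, c in zip(weights, cands)) / total)
```

With a crude line-of-sight `simulate` (plain Euclidean ranges to the satellites), the weighted estimate lands closer to the true position than the raw reference fix; the paper's contribution is replacing that stub with a ray-traced model that also captures reflected paths.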
Software-Defined Networking: State of the Art and Research Challenges
Plug-and-play information technology (IT) infrastructure has been expanding very rapidly in recent years. With the advent of cloud computing, many ecosystems and business paradigms are changing and may be able to eliminate their IT infrastructure maintenance processes. Real-time performance and high-availability requirements have induced telecom networks to adopt the new concepts of the cloud model: software-defined networking (SDN) and network function virtualization (NFV). NFV introduces and deploys new network functions in an open and standardized IT environment, while SDN aims to transform the way networks function. SDN and NFV are complementary technologies; they do not depend on each other. However, both concepts can be merged and have the potential to mitigate the challenges of legacy networks. In this paper, our aim is to describe the benefits of using SDN in a multitude of environments such as data centers, data center networks, and Network-as-a-Service offerings. We also present the various challenges facing SDN, from scalability to reliability and security concerns, and discuss existing solutions to these challenges. Keywords—Software-Defined Networking, OpenFlow, Data Centers, Network as a Service, Network Function Virtualization.
Series Viscoelastic Actuators Can Match Human Force Perception
Series elastic actuators (SEAs) are frequently used for force control in haptic interaction, because they decouple actuator inertia from the end effector by a compliant element. This element is usually a metal spring or beam, where the static force-deformation relationship offers a cheap force sensor. For high-precision force control, however, the remaining small inertia of this elastic element and of the end effector still limit the sensing performance and rendering transparency. Here, we extend the concept to deformable end effectors manufactured of viscoelastic materials. These materials offer the advantage of extremely low mass at high maximum deformation and applicable load. However, force and deformation are no longer statically related, and history of force and deformation has to be accounted for. We describe an observer-based solution, which allows drift-free force measurement with high accuracy and precision. Although the description of the viscoelastic behavior involves higher-order derivatives, the proposed observer does not require any numerical differentiation. This new integrated concept of sensing and actuation, called series viscoelastic actuator (SVA), is applied to our high-precision haptic device OSVALD, which is targeted at perception experiments that require sensing and rendering of forces in the range of the human tactile threshold. User-device interaction force is controlled using state-of-the-art control strategies of SEAs. Force estimation and force control performance are evaluated experimentally and prove to be compatible with the intended applications, showing that SVAs open up new possibilities for the use of series compliance and damping in high-precision haptic interfaces.
Integrated device for combined optical neuromodulation and electrical recording for chronic in vivo applications.
Studying brain function and its local circuit dynamics requires neural interfaces that can record and stimulate the brain with high spatiotemporal resolution. Optogenetics, a technique that genetically targets specific neurons to express light-sensitive channel proteins, provides the capability to control central nervous system neuronal activity in mammals with millisecond time precision. This technique enables precise optical stimulation of neurons and simultaneous monitoring of neural response by electrophysiological means, both in the vicinity of and distant to the stimulation site. We previously demonstrated, in vitro, the dual capability (optical delivery and electrical recording) while testing a novel hybrid device (optrode-MEA), which incorporates a tapered coaxial optical electrode (optrode) and a 100 element microelectrode array (MEA). Here we report a fully chronic implant of a new version of this device in ChR2-expressing rats, and demonstrate its use in freely moving animals over periods up to 8 months. In its present configuration, we show the device delivering optical excitation to a single cortical site while mapping the neural response from the surrounding 30 channels of the 6 × 6 element MEA, thereby enabling recording of optically modulated single-unit and local field potential activity across several millimeters of the neocortical landscape.
Vascular effects, efficacy and safety of nintedanib in patients with advanced, refractory colorectal cancer: a prospective phase I subanalysis
Nintedanib is a potent, oral angiokinase inhibitor that targets VEGF, PDGF and FGF signalling, as well as RET and Flt3. The maximum tolerated dose of nintedanib was evaluated in a phase I study of treatment-refractory patients with advanced solid tumours. In this preplanned subanalysis, the effect of nintedanib on the tumour vasculature, along with its efficacy and safety, was assessed in 30 patients with colorectal cancer (CRC). Patients with advanced CRC who had failed conventional treatment, or for whom no therapy of proven efficacy existed, were treated with nintedanib at doses ranging from 50 to 450 mg once daily (n = 14) or 150 to 250 mg twice daily (n = 16) for 28 days. After a 1-week rest, further courses were permitted in the absence of progression or undue toxicity. The primary objective was the effect on the tumour vasculature, measured using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and expressed as the initial area under the DCE-MRI contrast agent concentration–time curve after 60 seconds (iAUC60) or the volume transfer constant between blood plasma and extravascular extracellular space (Ktrans). Patients received a median of 4.0 courses (range: 1–13). Among 21 evaluable patients, 14 (67%) had a ≥40% reduction from baseline in Ktrans and 13 (62%) had a ≥40% decrease from baseline in iAUC60, representing clinically relevant effects on tumour blood flow and permeability, respectively. A ≥40% reduction from baseline in Ktrans was positively associated with non-progressive tumour status (Fisher's exact test: p = 0.0032). One patient achieved a partial response at 250 mg twice daily and 24 (80%) achieved stable disease lasting ≥8 weeks. Time to tumour progression (TTP) at 4 months was 26% and median TTP was 72.5 days (95% confidence interval: 65–114). Common drug-related adverse events (AEs) included nausea (67%), vomiting (53%) and diarrhoea (40%); three patients experienced drug-related AEs of grade 3 or higher.
Four patients treated with nintedanib once-daily had an alanine aminotransferase and/or aspartate aminotransferase increase ≥ grade 3. No increases > grade 2 were seen in the twice-daily group. Nintedanib modulates tumour blood flow and permeability in patients with advanced, refractory CRC, while achieving antitumour activity and maintaining an acceptable safety profile.
Display supersampling
Supersampling is widely used by graphics hardware to render anti-aliased images. In conventional supersampling, multiple scene samples are computationally combined to produce a single screen pixel. We consider a novel imaging paradigm that we call display supersampling, where multiple display samples are physically combined via the superimposition of multiple image subframes. Conventional anti-aliasing and texture mapping techniques are shown inadequate for the task of rendering high-quality images on supersampled displays. Instead of requiring anti-aliasing filters, supersampled displays actually require alias generation filters to cancel the aliasing introduced by nonuniform sampling. We present fundamental theory and efficient algorithms for the real-time rendering of high-resolution anti-aliased images on supersampled displays. We show that significant image quality gains are achievable by taking advantage of display supersampling. We prove that alias-free resolution beyond the Nyquist limits of a single subframe may be achieved by designing a bank of alias-canceling rendering filters. In addition, we derive a practical noniterative filter bank approach to real-time rendering and discuss implementations on commodity graphics hardware.
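A deliberately idealized 1-D sketch of the subframe idea: the display shows several low-resolution subframes whose sample grids are offset by a fraction of a display pixel, and their physical superposition is modeled here as perfect interleaving. A real display sums overlapping pixel footprints instead, which is exactly why the alias-canceling filters of the paper are needed; this simplification only illustrates how offset subframes jointly form a denser sampling lattice:

```python
import numpy as np

def render_subframes(signal, n_sub=2):
    """Split a high-resolution target into n_sub low-resolution subframes,
    each sampling the target at positions offset by 1/n_sub of a display
    pixel (modeled as strided sampling of the target)."""
    return [signal[k::n_sub] for k in range(n_sub)]

def superimpose(subframes):
    """Idealized model of the display combining the shifted subframes:
    interleaving the offset sample grids recovers the full-resolution
    sampling lattice of the original signal."""
    n = sum(len(s) for s in subframes)
    out = np.empty(n)
    for k, s in enumerate(subframes):
        out[k::len(subframes)] = s
    return out
```

Under this ideal-offset assumption the superposition reconstructs the target exactly, i.e. resolution beyond the Nyquist limit of any single subframe; the nonuniform, overlapping sampling of real hardware is what the paper's filter-bank rendering must compensate for.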
Prospective crossover comparison of carvedilol and metoprolol in patients with chronic heart failure.
OBJECTIVES This study investigates the effects of a change of beta-adrenergic blocking agent treatment from metoprolol to carvedilol and vice versa in patients with heart failure (HF). BACKGROUND Beta-blockers improve ventricular function and prolong survival in patients with HF. It has recently been suggested that carvedilol has more pronounced effects on left ventricular ejection fraction (LVEF) compared with metoprolol. It is uncertain whether a change from one beta-blocker to the other is safe and leads to any change of left ventricular function. METHODS Forty-four patients with HF due to ischemic (n = 17) or idiopathic cardiomyopathy (n = 27) that had responded well to long-term treatment with either metoprolol (n = 20) or carvedilol (n = 24) were switched to an equivalent dose of the respective other beta-blocker. Before and six months after crossover of treatment, echocardiography, radionuclide ventriculography and dobutamine stress echocardiography were performed. RESULTS Six months after crossover of beta-blocker treatment, LVEF had further improved with both carvedilol and metoprolol (carvedilol: 32 +/- 3% to 36 +/- 4%; metoprolol: 27 +/- 4% to 30 +/- 5%; both p < 0.05 vs. baseline), without interindividual differences. There were no changes in either New York Heart Association functional class or any other hemodynamic parameters at rest. Dobutamine stress echocardiography revealed a more pronounced increase of heart rate after dobutamine infusion in metoprolol- compared with carvedilol-treated patients. After dobutamine infusion, LVEF increased in the carvedilol- but not in the metoprolol-treated group. CONCLUSIONS When switching treatment from one beta-blocker to the other, improvement of LVEF in patients with HF is maintained. Despite similar long-term effects on hemodynamics at rest, beta-adrenergic responsiveness is different in both treatments.
Present and Future Directions in Data Warehousing
Many large organizations have developed data warehouses to support decision making. The data in a warehouse are subject oriented, integrated, time variant, and nonvolatile. A data warehouse contains five types of data: current detail data, older detail data, lightly summarized data, highly summarized data, and metadata. The architecture of a data warehouse includes a backend process (the extraction of data from source systems), the warehouse, and the front-end use (the accessing of data from the warehouse). A data mart is a smaller version of a data warehouse that supports the narrower set of requirements of a single business unit. Data marts should be developed in an integrated manner in order to avoid repeating the "silos of information" problem. An operational data store is a database for transaction processing systems that uses the data warehouse approach to provide clean data. Data warehousing is constantly changing, with the associated opportunities for practice and research, such as the potential for knowledge management using the warehouse.
Denial of Service Defence for Resource Availability in Wireless Sensor Networks
Wireless sensor networks (WSNs) have, over the years, become one of the most promising networking solutions, with exciting new applications for the near future. Their deployment has been helped by small, inexpensive, and smart sensor nodes that are easily deployed, depending on the application and coverage area. Common applications include military operations, monitoring environmental conditions (such as volcano detection and agricultural management), distributed control systems, healthcare, and the detection of radioactive sources. Notwithstanding these promising attributes, security in WSNs is a big challenge and remains an ongoing research trend. Deployed sensor nodes are vulnerable to various security attacks because of their architecture, hostile deployment locations, and insecure routing protocols. Furthermore, the sensor nodes in WSNs are characterized by resource constraints, such as limited energy, low bandwidth, short communication range, and limited processing and storage capacity, which have made the sensor nodes an easy target. Therefore, in this paper, we present a review of denial-of-service attacks that affect resource availability in WSNs and their countermeasures by presenting a taxonomy. Future research directions and open research issues are also discussed.
A comparison of the original chronic respiratory questionnaire with a standardized version.
BACKGROUND AND OBJECTIVES The chronic respiratory questionnaire (CRQ), a widely used measure of health-related quality of life (HRQL) in patients with chronic airflow limitation, includes an individualized dyspnea domain (patients identify five important activities and report the degree of dyspnea on a 7-point scale). Because the individualized domain is unwieldy in multicenter clinical trials, we developed a standardized version and tested its discriminative and evaluative properties. METHODS We enrolled 51 patients who completed the standardized and individualized CRQ before starting a respiratory rehabilitation program and again 3 months later. We calculated both cross-sectional and longitudinal correlations between the two versions and a number of other HRQL instruments, and tested the relative ability of the individualized and standardized versions of the CRQ to detect improvement with rehabilitation. RESULTS The individualized questions suggested greater dysfunction (lower scores) than the standardized questions both at baseline (3.18 vs. 3.92; p < 0.001) and at follow-up (4.62 vs. 4.84; p = 0.051). The standardized dyspnea domain showed superior discriminative validity. Although both techniques detected important, statistically significant improvement with rehabilitation (individualized domain mean change, 1.44; 95% confidence interval [CI], 1.11 to 1.77 [p < 0.001]; standardized domain mean change, 0.92; 95% CI, 0.61 to 1.24 [p < 0.01]), the difference in effect was substantial and statistically significant (mean difference, 0.52; 95% CI, 0.22 to 0.82; p = 0.001). The two versions showed comparable longitudinal validity. CONCLUSIONS A standardized version of the CRQ dyspnea domain improves cross-sectional validity and maintains longitudinal validity but reduces responsiveness. By increasing sample size, investigators can use the more efficient standardized version of the CRQ without compromising validity.
A Monte Carlo Model of Light Propagation in Tissue
The Monte Carlo method is rapidly becoming the model of choice for simulating light transport in tissue. This paper provides all the details necessary for implementation of a Monte Carlo program. Variance reduction schemes that improve the efficiency of the Monte Carlo method are discussed. Analytic expressions facilitating convolution calculations for finite flat and Gaussian beams are included. Useful validation benchmarks are presented.
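As a minimal illustration of the photon-packet technique this abstract refers to (not the paper's own code; the optical coefficients below are made-up values), the sketch samples exponential free paths, deposits the absorbed fraction of the packet weight at each interaction site, draws scattering angles from the Henyey-Greenstein phase function, and terminates low-weight packets with unbiased Russian roulette. Position and full direction bookkeeping are omitted to keep the weight logic visible.

```python
import math
import random

def sample_hg_cos(g, rng):
    """Sample cos(theta) from the Henyey-Greenstein phase function."""
    if g == 0.0:
        return 2.0 * rng.random() - 1.0  # isotropic scattering
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - frac * frac) / (2.0 * g)

def trace_photon(mu_a, mu_s, g, rng, w_min=1e-4, m=10):
    """Trace one photon packet through an unbounded homogeneous medium
    and return the total weight absorbed along its path."""
    mu_t = mu_a + mu_s                       # total interaction coefficient
    w, absorbed = 1.0, 0.0
    while True:
        s = -math.log(rng.random()) / mu_t   # exponential free path length
        dw = w * mu_a / mu_t                 # weight absorbed at this site
        absorbed += dw
        w -= dw
        cos_t = sample_hg_cos(g, rng)        # scattering angle (position
                                             # update omitted in this sketch)
        if w < w_min:                        # Russian roulette: kill or boost
            if rng.random() < 1.0 / m:
                w *= m                       # survivor carries m-fold weight
            else:
                return absorbed

rng = random.Random(1)
n = 500
mean_absorbed = sum(trace_photon(0.1, 10.0, 0.9, rng) for _ in range(n)) / n
print(mean_absorbed)  # ~1.0: in an unbounded medium all weight is absorbed
```

Russian roulette is one of the variance-reduction schemes the paper discusses: killing a packet with probability 1 - 1/m while multiplying survivors' weight by m leaves the expected absorbed energy unchanged.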