Unsupervised Learning-Based Fast Beamforming Design for Downlink MIMO
In the downlink transmission scenario, power allocation and beamforming design at the transmitter are essential when using multiple antenna arrays. This paper considers a multiple input–multiple output broadcast channel to maximize the weighted sum-rate under the total power constraint. The classical weighted minimum mean-square error (WMMSE) algorithm can obtain suboptimal solutions but involves high computational complexity. To reduce this complexity, we propose a fast beamforming design method using unsupervised learning, which trains the deep neural network (DNN) offline and provides real-time service online using only simple neural network operations. The training process is based on an end-to-end method without labeled samples, avoiding the complicated process of obtaining labels. Moreover, we use the “APoZ”-based pruning algorithm to compress the network, which further reduces the computational complexity and volume of the DNN, making it more suitable for devices with low computation capacity. Finally, the experimental results demonstrate that the proposed method improves computational speed significantly with performance close to the WMMSE algorithm.
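A minimal PyTorch sketch (not the authors' code) of the unsupervised idea described above: a network maps channel realizations to beamformers and is trained to maximize the weighted sum-rate directly, so no WMMSE labels are needed. The network shape, a MISO-per-user simplification of the MIMO broadcast setting, and the noise level are illustrative assumptions.

```python
import torch
import torch.nn as nn

K, N, P, sigma2 = 4, 8, 1.0, 0.1          # users, tx antennas, power budget, noise power

class BeamNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * K * N, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 2 * K * N),     # real+imag parts of all beamformers w_k
        )

    def forward(self, h):                  # h: (batch, K, N, 2) channel coefficients
        w = self.net(h.flatten(1)).view(-1, K, N, 2)
        norm = w.pow(2).sum(dim=(1, 2, 3), keepdim=True).sqrt()
        return w * (P ** 0.5) / norm       # enforce the total power constraint

def weighted_sum_rate(h, w, alpha):
    hr, hi = h[..., 0], h[..., 1]
    wr, wi = w[..., 0], w[..., 1]
    # Re/Im of h_k^H w_j for every user k and beam j, using real/imag parts explicitly.
    re = torch.einsum('bkn,bjn->bkj', hr, wr) + torch.einsum('bkn,bjn->bkj', hi, wi)
    im = torch.einsum('bkn,bjn->bkj', hr, wi) - torch.einsum('bkn,bjn->bkj', hi, wr)
    gain = re ** 2 + im ** 2
    signal = gain.diagonal(dim1=1, dim2=2)
    interference = gain.sum(dim=2) - signal
    rate = torch.log2(1 + signal / (interference + sigma2))
    return (alpha * rate).sum(dim=1).mean()

model = BeamNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
alpha = torch.ones(K)                      # user weights
for step in range(1000):                   # offline training on random channel draws
    h = torch.randn(64, K, N, 2) / (2 ** 0.5)
    loss = -weighted_sum_rate(h, model(h), alpha)   # unsupervised: maximize the rate itself
    opt.zero_grad(); loss.backward(); opt.step()
```

Online deployment then reduces to a single forward pass per channel realization, which is the source of the speed-up over iterative WMMSE.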
Population Cost Prediction on Public Healthcare Datasets
The increasing availability of digital health records should ideally improve accountability in healthcare. In this context, the study of predictive modeling of healthcare costs forms a foundation for accountable care, at both the population and individual patient level. In this research we use machine learning algorithms for accurate prediction of healthcare costs on publicly available claims and survey data. Specifically, we investigate the use of regression trees, M5 model trees, and random forests to predict the healthcare costs of individual patients given their prior medical (and cost) history. Overall, three observations showcase the utility of our research: (a) prior healthcare cost alone can be a good indicator of future healthcare cost, (b) the M5 model tree technique led to very accurate future healthcare cost prediction, and (c) although state-of-the-art machine learning algorithms are also limited by skewed cost distributions in healthcare, for a large fraction (75%) of the population we were able to predict with higher accuracy using these algorithms. In particular, using M5 model trees we were able to predict costs to within $125 for 75% of the population, improving on prior techniques. Since models for predicting healthcare costs are often used to ascertain overall population health, our work is useful for evaluating future costs for large segments of disease populations with reasonably low error, as demonstrated in our results on real-world publicly available datasets.
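A minimal scikit-learn sketch of the kind of pipeline described above: predicting next-period cost from prior cost plus a few claim features with a tree ensemble. M5 model trees are not in scikit-learn, so a random forest stands in; the synthetic data, feature names, and hyperparameters are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 5000
prior_cost = rng.lognormal(mean=7.0, sigma=1.2, size=n)        # skewed, like real cost data
age = rng.integers(18, 90, size=n)
chronic_count = rng.poisson(1.5, size=n)
# Toy relationship: prior cost dominates, as observation (a) in the abstract suggests.
future_cost = 0.8 * prior_cost + 300 * chronic_count + 5 * age + rng.lognormal(6, 1, n)

X = np.column_stack([prior_cost, age, chronic_count])
X_tr, X_te, y_tr, y_te = train_test_split(X, future_cost, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
model.fit(X_tr, y_tr)
print("MAE on held-out patients:", mean_absolute_error(y_te, model.predict(X_te)))
```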
IS THE RESOURCE-BASED "VIEW" A USEFUL PERSPECTIVE FOR STRATEGIC MANAGEMENT RESEARCH?
Here I examine each of the major issues raised by Priem and Butler (this issue) about my 1991 article and subsequent resource-based research. While it turns out that Priem and Butler's direct criticisms of the 1991 article are unfounded, they do remind resource-based researchers of some important requirements of this kind of research. I also discuss some important issues not raised by Priem and Butler—the resolutions of which will be necessary if a more complete resource-based theory of strategic advantage is to be developed.
Multi-modal Factorized Bilinear Pooling with Co-attention Learning for Visual Question Answering
Visual question answering (VQA) is challenging because it requires a simultaneous understanding of both the visual content of images and the textual content of questions. The approaches used to represent the images and questions in a fine-grained manner and to fuse these multimodal features play key roles in performance. Bilinear pooling based models have been shown to outperform traditional linear models for VQA, but their high-dimensional representations and high computational complexity may seriously limit their applicability in practice. For multimodal feature fusion, here we develop a Multi-modal Factorized Bilinear (MFB) pooling approach to efficiently and effectively combine multi-modal features, which results in superior performance for VQA compared with other bilinear pooling approaches. For fine-grained image and question representation, we develop a ‘co-attention’ mechanism using an end-to-end deep network architecture to jointly learn both the image and question attentions. Combining the proposed MFB approach with co-attention learning in a new network architecture provides a unified model for VQA. Our experimental results demonstrate that the single MFB with co-attention model achieves new state-of-the-art performance on the real-world VQA dataset. Code available at https://github.com/yuzcccc/mfb.
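A minimal PyTorch sketch of MFB pooling as summarized above: project each modality to k·o dimensions, take an element-wise product, sum-pool over the factor dimension k, then apply power and L2 normalization. The feature dimensions, factor k, and dropout rate are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MFB(nn.Module):
    def __init__(self, img_dim=2048, q_dim=1024, out_dim=1000, k=5, dropout=0.1):
        super().__init__()
        self.k, self.o = k, out_dim
        self.proj_img = nn.Linear(img_dim, k * out_dim)   # U projection
        self.proj_q = nn.Linear(q_dim, k * out_dim)       # V projection
        self.drop = nn.Dropout(dropout)

    def forward(self, img_feat, q_feat):                  # (B, img_dim), (B, q_dim)
        joint = self.drop(self.proj_img(img_feat) * self.proj_q(q_feat))   # element-wise product
        joint = joint.view(-1, self.o, self.k).sum(dim=2)                  # sum-pool over k factors
        joint = torch.sign(joint) * torch.sqrt(torch.abs(joint) + 1e-8)    # power normalization
        return F.normalize(joint, dim=1)                                   # L2 normalization

fused = MFB()(torch.randn(8, 2048), torch.randn(8, 1024))  # (8, 1000) fused representation
```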
Long-term efficacy and safety of adefovir dipivoxil for the treatment of hepatitis B e antigen-positive chronic hepatitis B.
UNLABELLED Treatment of 171 patients with hepatitis B e antigen (HBeAg)-positive chronic hepatitis B (CHB) with adefovir dipivoxil (ADV) 10 mg over 48 weeks resulted in significant histological, virological, serological, and biochemical improvement compared with placebo. The long-term efficacy and safety of ADV in a subset of these patients was investigated for up to 5 years. Sixty-five patients given ADV 10 mg in year 1 elected to continue in a long-term safety and efficacy study (LTSES). At enrollment, the 65 LTSES patients were a median 34 years old, 83% male, 74% Asian, 23% Caucasian, median baseline serum hepatitis B virus (HBV) DNA 8.45 log(10) copies/mL, and median baseline alanine aminotransferase (ALT) 2.0 x upper limit of normal. At 5 years on study, the median changes from baseline in serum HBV DNA and ALT for the 41 patients still on ADV were 4.05 log(10) copies/mL and -50 U/L, respectively. HBeAg loss and seroconversion were observed in 58% and 48% of patients by end of study, respectively. Fifteen patients had baseline and end of follow-up liver biopsies; improvements in necroinflammation and fibrosis were seen in 67% and 60% of these patients, respectively. Adefovir resistance mutations A181V or N236T developed in 13 LTSES patients; the first observation was at study week 195. There were no serious adverse events related to ADV. CONCLUSION Treatment with ADV beyond 48 weeks was well tolerated and produced long-term virological, biochemical, serological, and histological improvement.
Lymphedema and lipedema - an overview of conservative treatment.
Lymphedema and lipedema are chronic progressive disorders for which no causal therapy exists so far. Many general practitioners will rarely see these disorders, with the consequence that diagnosis is often delayed. The pathophysiological basis is edematization of the tissues. Lymphedema involves an impairment of lymph drainage with resultant fluid build-up. Lipedema arises from an orthostatic predisposition to edema in pathologically increased subcutaneous tissue. Treatment includes complex physical decongestion by manual lymph drainage and absolutely uncompromising compression therapy, whether by bandaging in the intensive phase to reduce edema or with a flat-knit compression stocking to maintain volume.
Channel Estimation and Equalization for 5G Wireless Communication Systems
In this thesis, channel estimation techniques are studied and investigated for a novel multicarrier modulation scheme, Universal Filtered Multi-Carrier (UFMC). UFMC (a.k.a. UF-OFDM) is considered a candidate for the 5th generation of wireless communication systems; it aims at replacing OFDM and enhancing system robustness and performance under relaxed synchronization conditions, e.g., time-frequency misalignment. Thus, it may more efficiently support Machine Type Communication (MTC) and the Internet of Things (IoT), which are considered challenging applications for the next generation of wireless communication systems. Many methods of channel estimation, time-frequency synchronization and equalization exist for classical CP-OFDM systems. Pilot-aided methods known from CP-OFDM are adopted and applied to UFMC systems. The performance of UFMC is then compared with CP-OFDM.
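A minimal NumPy sketch of the generic pilot-aided least-squares estimation that is adopted from CP-OFDM: estimate the channel at pilot subcarriers as Y/X and interpolate across the remaining subcarriers, followed by one-tap equalization. The subcarrier count, pilot spacing, and channel model are illustrative assumptions, not the thesis setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sc, pilot_step = 64, 8
pilot_idx = np.arange(0, n_sc, pilot_step)

# Toy frequency-selective channel and constant QPSK pilots.
h_taps = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(8)
H_true = np.fft.fft(h_taps, n_sc)
X_pilot = (1 + 1j) / np.sqrt(2) * np.ones(len(pilot_idx))
noise = 0.05 * (rng.standard_normal(len(pilot_idx)) + 1j * rng.standard_normal(len(pilot_idx)))
Y_pilot = H_true[pilot_idx] * X_pilot + noise

# Least-squares estimate at the pilots, then linear interpolation of real/imag parts.
H_ls = Y_pilot / X_pilot
H_hat = np.interp(np.arange(n_sc), pilot_idx, H_ls.real) \
        + 1j * np.interp(np.arange(n_sc), pilot_idx, H_ls.imag)

# One-tap equalization of received data symbols Y_data would then be Y_data / H_hat.
print("mean squared estimation error:", np.mean(np.abs(H_hat - H_true) ** 2))
```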
Medical resource use, costs, and quality of life in patients with acute decompensated heart failure: findings from ASCEND-HF.
BACKGROUND The Acute Study of Clinical Effectiveness of Nesiritide in Decompensated Heart Failure (ASCEND-HF) randomly assigned 7,141 participants to nesiritide or placebo. Dyspnea improvement was more often reported in the nesiritide group, but there were no differences in 30-day all-cause mortality or heart failure readmission rates. We compared medical resource use, costs, and health utilities between the treatment groups. METHODS AND RESULTS There were no significant differences in inpatient days, procedures, and emergency department visits reported for the first 30 days or for readmissions to day 180. EQ-5D health utilities and visual analog scale ratings were similar at 24 hours, discharge, and 30 days. Billing data and regression models were used to generate inpatient costs. Mean length of stay from randomization to discharge was 8.5 days in the nesiritide group and 8.6 days in the placebo group (P = .33). Cumulative mean costs at 30 days were $16,922 (SD $16,191) for nesiritide and $16,063 (SD $15,572) for placebo (P = .03). At 180 days, cumulative costs were $25,590 (SD $30,344) for nesiritide and $25,339 (SD $29,613) for placebo (P = .58). CONCLUSIONS The addition of nesiritide contributed to higher short-term costs and did not significantly influence medical resource use or health utilities compared with standard care alone.
Beyond Trade-Off: Accelerate FCN-Based Face Detector with Higher Accuracy
Fully convolutional neural networks (FCN) have been dominating the face detection task for a few years with their congenital capability of sliding-window searching with shared kernels, which eliminates redundant calculation, and most recent state-of-the-art methods such as Faster-RCNN, SSD, YOLO and FPN use an FCN as their backbone. So here comes one question: can we find a universal strategy to further accelerate FCN with higher accuracy, and thus accelerate all recent FCN-based methods? To analyze this, we decompose the face searching space into two orthogonal directions, 'scale' and 'spatial'. Only a few coordinates in the space spanned by the two base vectors indicate foreground, so if the FCN could ignore most of the other points, the searching space and the false alarm rate should be significantly reduced. Based on this philosophy, a novel method named scale estimation and spatial attention proposal (S2AP) is proposed to pay attention to specific scales in the image pyramid and valid locations in each scale layer. Furthermore, we adopt a masked-convolution operation based on the attention result to accelerate the FCN calculation. Experiments show that the FCN-based method RPN can be accelerated by about 4× with the help of S2AP and masked-FCN, while at the same time achieving state-of-the-art results on the FDDB, AFW and MALF face detection benchmarks.
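A minimal PyTorch sketch of the masked-convolution idea: a sparse spatial attention map restricts where detector responses are kept. For clarity this sketch simply zeroes the masked-out outputs, which is functionally equivalent but does not by itself realize the speed-up; the paper's acceleration comes from skipping the masked computation entirely. The tensor shapes and the random mask stand in for a real S2AP output.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(256, 256, kernel_size=3, padding=1)
feat = torch.randn(1, 256, 64, 64)                # backbone feature map
attn = (torch.rand(1, 1, 64, 64) > 0.9).float()   # sparse "valid location" mask (stand-in for S2AP)

out = conv(feat) * attn                           # keep responses only at attended positions
print("kept locations:", int(attn.sum().item()), "of", attn.numel())
```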
The forensic investigation of android private browsing sessions using orweb
The continued increase in the usage of Small Scale Digital Devices (SSDDs) to browse the web has made mobile devices a rich potential source of digital evidence. Issues may arise when suspects attempt to hide their browsing habits using applications like Orweb, which intends to anonymize network traffic and to ensure that no browsing history is saved on the device. In this work, the researchers conducted experiments to examine whether digital evidence could be reconstructed when the Orweb browser is used as a tool to hide web browsing activities on an Android smartphone. Examinations were performed on both a non-rooted and a rooted Samsung Galaxy S2 smartphone running Android 2.3.3. The results show that without rooting the device, no private web browsing traces through Orweb were found. However, after rooting the device, the researchers were able to locate the Orweb browser history, and important corroborative digital evidence was found.
A measurement system for wrist movements in biomedical applications
The design and proof of concept implementation of a biomedical measurement device specifically targeting human wrist movements is presented. The key aspects of development are the integrated measurement of wrist kinematics and lower arm muscle activities, wireless operation and the possibility of realtime data streaming. The designed system addresses these requirements using single chip 9 degrees-of-freedom inertial sensors for kinematic measurements, an active myoelectric electrode frontend design to record muscle activities and a Bluetooth communication interface for device control and data streaming. In addition to design considerations and proof of concept implementation, kinematic test measurement data is presented to validate system usability in a future wrist movement classification task.
Data Management for Journalism
We describe the power and potential of data journalism, where news stories are reported and published with data and dynamic visualizations. We discuss the challenges facing data journalism today and how recent data management tools such as Google Fusion Tables have helped in the newsroom. We then describe some of the challenges that need to be addressed in order for data journalism to reach its full potential.
Spatially-partitioned environmental representation and planning architecture for on-road autonomous driving
Conventional layered planning architecture temporally partitions the spatiotemporal motion planning by the path and speed, which is not suitable for lane change and overtaking scenarios with moving obstacles. In this paper, we propose to spatially partition the motion planning by longitudinal and lateral motions along the rough reference path in the Frenét Frame, which makes it possible to create linearized safety constraints for each layer in a variety of on-road driving scenarios. A generic environmental representation methodology is proposed with three topological elements and corresponding longitudinal constraints to compose all driving scenarios mentioned in this paper according to the overlap between the potential path of the autonomous vehicle and predicted path of other road users. Planners combining A∗ search and quadratic programming (QP) are designed to plan both rough long-term longitudinal motions and short-term trajectories to exploit the advantages of both search-based and optimization-based methods. Limits of vehicle kinematics and dynamics are considered in the planners to handle extreme cases. Simulation results show that the proposed framework can plan collision-free motions with high driving quality under complicated scenarios and emergency situations.
The Social Dimension in the Curriculum for the Education of Health Professionals
Professional education and health are social processes and values in which the social dimension lies at the core of both. The objectives of this paper were to determine the role of social aspects in the curriculum for health professional education in Cuba, to analyze the conception of the social dimension and its contribution when integrated with the contents of the disciplines, and its
High reflectivity subwavelength metal grating for VCSEL applications
We report theoretical simulation of a novel silver subwavelength grating with reflectivity > 99.5%, substantially higher than a uniform thin film, and a wide 99%-reflectivity bandwidth of 190 nm, promising for VCSELs and surface-normal optoelectronic devices.
Jointly Parse and Fragment Ungrammatical Sentences
This paper is about detecting incorrect arcs in a dependency parse for sentences that contain grammar mistakes. Pruning these arcs results in well-formed parse fragments that can still be useful for downstream applications. We propose two automatic methods that jointly parse the ungrammatical sentence and prune the incorrect arcs: a parser retrained on a parallel corpus of ungrammatical sentences with their corrections, and a sequence-to-sequence method. Experimental results show that the proposed strategies are promising for detecting incorrect syntactic dependencies as well as incorrect semantic dependencies.
Stochastic Block Model and Community Detection in Sparse Graphs: A Spectral Algorithm with Optimal Rate of Recovery
In this paper, we present and analyze a simple and robust spectral algorithm for the stochastic block model with k blocks, for any fixed k. Our algorithm works with graphs having constant edge density, under an optimal condition on the gap between the density inside a block and the density between blocks. As a by-product, we settle an open question posed by Abbe et al. concerning censored block models.
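A minimal NumPy/scikit-learn sketch of the generic spectral recipe for the stochastic block model: embed the nodes with the leading eigenvectors of the adjacency matrix and cluster the embedding with k-means. This illustrates the general approach only; the paper's specific algorithm, its regularization of sparse graphs, and its guarantees are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, k = 600, 3
labels = rng.integers(0, k, size=n)                   # hidden block assignment
p_in, p_out = 0.08, 0.01                              # within- vs between-block edge probability
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                           # simple undirected graph, no self-loops

# For very sparse graphs, trimming/regularization of high-degree vertices is typically
# needed before this step; here the plain adjacency matrix is used for simplicity.
vals, vecs = np.linalg.eigh(A)
embedding = vecs[:, np.argsort(np.abs(vals))[-k:]]    # top-k eigenvectors by |eigenvalue|
pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)
print("recovered cluster sizes:", np.bincount(pred))
```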
Electronic Synoptic Operative Reporting for Thyroid Surgery using an Electronic Data Management System: Potential for Prospective Multicenter Data Collection
Electronic synoptic operative reports ensure systematic documentation of all critical components and findings during complex surgical procedures. Thyroid surgery lends itself to synoptic reporting, because there are a predefined number of essential intraoperative events and findings that every endocrine surgeon invariably records. An electronic web-based form (e-form) was designed and implemented to record operative data in a synoptic structure for thyroid surgery. The e-form was implemented as a pilot study from January 2008 to October 2009 for use by three attending surgeons. During this period, 514 e-forms were completed with 100% compliance, recording data from 384 total thyroidectomies and 130 thyroid lobectomies. All users found the e-form easy to use and comprehensive, and it took less than 5 min to complete. The main advantages of a web-based e-form for synoptic recording of thyroid surgery are that it is user-friendly and easy to complete, yet comprehensive. Because it is based on a system available across institutions, it can be used as a minimum dataset and could be considered a national and international standard for wider use, especially if endorsed by the American or International Association for Endocrine Surgeons.
Transcranial magnetic stimulation of left prefrontal cortex impairs working memory
OBJECTIVES Several lines of evidence suggest that the prefrontal cortex is involved in working memory. Our goal was to determine whether transient functional disruption of the dorsolateral prefrontal cortex (DLPFC) would impair performance in a sequential-letter working memory task. METHODS Subjects were shown sequences of letters and asked to state whether the letter just displayed was the same as the one presented 3-back. Single-pulse transcranial magnetic stimulation (TMS) was applied over the DLPFC between letter presentations. RESULTS TMS applied over the left DLPFC resulted in increased errors relative to no TMS controls. TMS over the right DLPFC did not alter working memory performance. CONCLUSION Our results indicate that the left prefrontal cortex has a crucial role in at least one type of working memory.
Management of Large Erupting Complex Odontoma in Maxilla
We present the unusual case of a large complex odontoma erupting in the maxilla. Odontomas are benign developmental tumours of odontogenic origin. They are characterized by slow growth and nonaggressive behaviour. Complex odontomas, which erupt, are rare. They are usually asymptomatic and are identified on routine radiograph but may present with erosion into the oral cavity with subsequent cellulitis and facial asymmetry. This present paper describes the presentation and management of an erupting complex odontoma, occupying the maxillary sinus with extension to the infraorbital rim. We also discuss various surgical approaches used to access this anatomic area.
Combining Silhouettes, Surface, and Volume Rendering for Surgery Education and Planning
We introduce a flexible combination of volume, surface, and line rendering. We employ object-based edge detection because this allows a flexible parametrization of the generated lines. Our techniques were developed mainly for medical applications using segmented patient-individual volume datasets. In addition, we present an evaluation of the generated visualizations with 8 medical professionals and 25 laypersons. Integration of lines in conventional rendering turned out to be appropriate.
Performance indicators for an objective measure of public transport service quality
The measurement of transit performance represents a very useful tool for ensuring continuous increase of the quality of the delivered transit services, and for allocating resources among competing transit agencies. Transit service quality can be evaluated by subjective measures based on passengers’ perceptions, and objective measures represented by disaggregate performance measures expressed as numerical values, which must be compared with fixed standards or past performances. The proposed research work deals with service quality evaluation based on objective measures; specifically, an extensive overview and an interpretative review of the objective indicators investigated by researchers to date are proposed. The final aim of the work is to give as comprehensive a review as possible of the objective indicators, and to provide some suggestions for selecting the most appropriate indicators for evaluating a given aspect of transit service.
Randomized mammographic screening for breast cancer in Stockholm
In March 1981 a randomized single-view mammographic screening for breast cancer was started in the south of Stockholm. The screened population in the first round numbered 40,318 women, and 20,000 women served as a well-defined control group. The age groups represented were 40–64 years, and 80.7% of the invited women participated in the study. The first round disclosed 128 breast cancers (113 invasive and 15 noninvasive), or 4.0 per 1,000 women. Mean tumour size was 14.1 mm and axillary lymph node metastases were found in 21.8%. Fifty-five per cent of the tumours were small (⩽10 mm) or non-invasive, and 71% were stage I. Participation rates are high in all Swedish trials. The present results differ only slightly from other screening programs; the percentages of patients with axillary metastases and stage II tumours are similar in the Stockholm, Malmö and Kopparberg/Östergötland studies. Comparisons of cancer prevalence in the various Swedish screening trials show that, in comparable age groups, there are some differences, even when the differences in the natural cancer incidence are taken into account. A decreased mortality was found recently in a Swedish trial in ages above 50 years but not below. In the Stockholm study more than one-third of the participants were aged 40–49 years.
A Deeper Look into Dependency-Based Word Embeddings
• Unlabeled: Context constructed without dependency labels
• Simplified: Functionally similar dependency labels are collapsed
• Basic: Standard dependency parse
• Enhanced and Enhanced++: Dependency trees augmented (e.g., new edges between modifiers and conjuncts with parents’ labels)
• Universal Dependencies (UD): Cross-lingual
• Stanford Dependencies (SD): English-tailored
• Prior work [1] has shown that embeddings trained using dependency contexts distinguish related words better than similar words.
• What effects do decisions made with embeddings have on the characteristics of the word embeddings?
• Do Universal Dependency (UD) embeddings capture different characteristics than English-tailored Stanford Dependency (SD) embeddings?
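A minimal Python sketch of how dependency-based contexts are typically extracted, following the general Levy-and-Goldberg-style recipe rather than this poster's exact pipeline: each arc contributes a labeled context to the head and an inverse-labeled context to the modifier, and the "Unlabeled" variant simply drops the relation label. The example arcs are hypothetical.

```python
def dependency_contexts(arcs, labeled=True):
    """arcs: iterable of (head, relation, modifier) triples for one sentence."""
    pairs = []
    for head, rel, mod in arcs:
        if labeled:
            pairs.append((head, f"{mod}/{rel}"))       # head sees the modifier via the relation
            pairs.append((mod, f"{head}/{rel}-1"))     # modifier sees the head via the inverse relation
        else:
            pairs.append((head, mod))                  # "Unlabeled" context construction
            pairs.append((mod, head))
    return pairs

arcs = [("discovers", "nsubj", "scientist"), ("discovers", "obj", "star"),
        ("scientist", "amod", "australian"), ("star", "det", "a")]
print(dependency_contexts(arcs)[:4])
```

These (word, context) pairs then replace linear bag-of-words contexts when training skip-gram style embeddings.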
Automatic Building of Synthetic Voices from Audio Books
Current state-of-the-art text-to-speech systems produce intelligible speech but lack the prosody of natural utterances. Building better models of prosody involves development of prosodically rich speech databases. However, development of such speech databases requires a large amount of effort and time. An alternative is to exploit story-style monologues (long speech files) in audio books. These monologues already encapsulate rich prosody including varied intonation contours, pitch accents and phrasing patterns. Thus, audio books act as excellent candidates for building prosodic models and natural sounding synthetic voices. The processing of such audio books poses several challenges including segmentation of long speech files, detection of mispronunciations, and extraction and evaluation of representations of prosody. In this thesis, we address the issues of segmentation of long speech files, capturing prosodic phrasing patterns of a speaker, and conversion of speaker characteristics. Techniques developed to address these issues include text-driven and speech-driven methods for segmentation of long speech files; an unsupervised algorithm for learning speaker-specific phrasing patterns; and a voice conversion method by modeling target speaker characteristics. The major conclusions of this thesis are:
• Audio books can be used for building synthetic voices. Segmentation of such long speech files can be accomplished without the need for a speech recognition system.
• The prosodic phrasing patterns are specific to a speaker. These can be learnt and incorporated to improve the quality of synthetic voices.
• Conversion of speaker characteristics can be achieved by modeling speaker-specific features of a target speaker.
Finally, the techniques developed in this thesis enable prosody research by leveraging a large number of audio books available in the public domain.
Giving Effectively: Inquiring Ukrainian private philanthropy foundations' self-represented 'effectiveness'
Estimating the Socio-Economic Impact of Product Reviews: Mining Text and Reviewer Characteristics
With the rapid growth of the Internet, the ability of users to create and publish content has created active electronic communities that provide a wealth of product information. However, the high volume of reviews that are typically published for a single product makes it harder for individuals as well as manufacturers to locate the best reviews and understand the true underlying quality of a product. In this paper, we re-examine the impact of reviews on economic outcomes like product sales and see how different factors affect social outcomes like the extent of their perceived usefulness. Our approach explores multiple aspects of review text, such as lexical, grammatical, semantic, and stylistic levels to identify important text-based features. In addition, we also examine multiple reviewer-level features such as average usefulness of past reviews and the self-disclosed identity measures of reviewers that are displayed next to a review. Our econometric analysis reveals that the extent of subjectivity, informativeness, readability, and linguistic correctness in reviews matters in influencing sales and perceived usefulness. Reviews that have a mixture of objective and highly subjective sentences have a negative effect on product sales, compared to reviews that tend to include only subjective or only objective information. However, such reviews are considered more informative (or helpful) by the users. By using Random Forest based classifiers, we show that we can accurately predict the impact of reviews on sales and their perceived usefulness. Reviews for products that have received widely fluctuating reviews also have reviews of widely fluctuating helpfulness. In particular, we find that highly detailed and readable reviews can have low helpfulness votes in cases when users tend to vote negatively not because they disapprove of the review quality but rather to convey their disapproval of the review polarity. We examine the relative importance of the three broad feature categories: ‘reviewer-related’ features, ‘review subjectivity’ features, and ‘review readability’ features, and find that using any of the three feature sets results in a statistically equivalent performance as in the case of using all available features. This paper is the first study that integrates econometric, text mining, and predictive modeling techniques toward a more complete analysis of the information captured by user-generated online reviews in order to estimate their socio-economic impact. Our results can have implications for judicious design of opinion forums.
A workshop-oriented approach for defining electronic process guides: a case study
We introduce electronic process guides and discuss their role in software engineering projects. We then present existing methods for constructing electronic process guides by defining a set of common processes for a company. Different approaches from software engineering and management science are presented. We then go on to propose a new way of dealing with process description in software engineering: using process workshops as a tool to reach consensus on work practice. The main reason for this is to get realistic descriptions with accurate detail as well as company commitment in an efficient manner. We describe our workshop-oriented method to define processes, which we have used in small software companies, and show examples of results.
Copositive optimization - Recent developments and applications
Due to its versatility, copositive optimization receives increasing interest in the Operational Research community, and is a rapidly expanding and fertile field of research. It is a special case of conic optimization, which consists of minimizing a linear function over a cone subject to linear constraints. The diversity of copositive formulations in different domains of optimization is impressive, since problem classes both in the continuous and discrete world, as well as both deterministic and stochastic models are covered. Copositivity appears in local and global optimality conditions for quadratic optimization, but can also yield tighter bounds for NP-hard combinatorial optimization problems. Here some of the recent success stories are told, along with principles, algorithms and applications.
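For reference, a standard statement (common textbook material, not quoted from this survey) of the copositive cone and the generic conic program the abstract refers to:

```latex
\[
\mathcal{C}^{n} \;=\; \bigl\{\, A \in \mathcal{S}^{n} \;:\; x^{\top} A\, x \ge 0
\ \ \text{for all } x \in \mathbb{R}^{n}_{+} \,\bigr\},
\]
\[
\min_{X \in \mathcal{S}^{n}} \ \langle C, X \rangle
\quad \text{s.t.} \quad \langle A_i, X \rangle = b_i \ (i = 1,\dots,m), \qquad
X \in \mathcal{C}^{n},
\]
% where $\mathcal{S}^n$ denotes the symmetric $n \times n$ matrices. Replacing
% $\mathcal{C}^n$ by its dual cone of completely positive matrices gives the
% dual viewpoint that underlies many of the reformulations the survey discusses.
```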
Extended Y chromosome investigation suggests postglacial migrations of modern humans into East Asia via the northern route.
Genetic diversity data, from Y chromosome and mitochondrial DNA as well as recent genome-wide autosomal single nucleotide polymorphisms, suggested that mainland Southeast Asia was the major geographic source of East Asian populations. However, these studies also detected Central-South Asia (CSA)- and/or West Eurasia (WE)-related genetic components in East Asia, implying either recent population admixture or ancient migrations via the proposed northern route. To trace the time period and geographic source of these CSA- and WE-related genetic components, we sampled 3,826 males (116 populations from China and 1 population from North Korea) and performed high-resolution genotyping according to the well-resolved Y chromosome phylogeny. Our data, in combination with the published East Asian Y-haplogroup data, show that there are four dominant haplogroups (accounting for 92.87% of the East Asian Y chromosomes), O-M175, D-M174, C-M130 (not including C5-M356), and N-M231, in both southern and northern East Asian populations, which is consistent with the proposed southern route of modern human origin in East Asia. However, there are other haplogroups (6.79% in total) (E-SRY4064, C5-M356, G-M201, H-M69, I-M170, J-P209, L-M20, Q-M242, R-M207, and T-M70) detected primarily in northern East Asian populations and were identified as Central-South Asian and/or West Eurasian origin based on the phylogeographic analysis. In particular, evidence of geographic distribution and Y chromosome short tandem repeat (Y-STR) diversity indicates that haplogroup Q-M242 (the ancestral haplogroup of the native American-specific haplogroup Q1a3a-M3) and R-M207 probably migrated into East Asia via the northern route. The age estimation of Y-STR variation within haplogroups suggests the existence of postglacial (∼18 Ka) migrations via the northern route as well as recent (∼3 Ka) population admixture. We propose that although the Paleolithic migrations via the southern route played a major role in modern human settlement in East Asia, there are ancient contributions, though limited, from WE, which partly explain the genetic divergence between current southern and northern East Asian populations.
Management of massive and nonmassive pulmonary embolism
Massive pulmonary embolism (PE) is characterized by systemic hypotension (defined as a systolic arterial pressure < 90 mm Hg or a drop in systolic arterial pressure of at least 40 mm Hg for at least 15 min which is not caused by new onset arrhythmias) or shock (manifested by evidence of tissue hypoperfusion and hypoxia, including an altered level of consciousness, oliguria, or cool, clammy extremities). Massive pulmonary embolism has a high mortality rate despite advances in diagnosis and therapy. A subgroup of patients with nonmassive PE who are hemodynamically stable but with right ventricular (RV) dysfunction or hypokinesis confirmed by echocardiography is classified as submassive PE. Their prognosis is different from that of others with non-massive PE and normal RV function. This article attempts to review the evidence-based risk stratification, diagnosis, initial stabilization, and management of massive and nonmassive pulmonary embolism.
Diffusion magnetic resonance imaging study of schizophrenia in the context of abnormal neurodevelopment using multiple site data in a Chinese Han population
Schizophrenia has increasingly been considered a neurodevelopmental disorder, and the advancement of neuroimaging techniques and associated computational methods has enabled quantitative re-examination of this important theory on the pathogenesis of the disease. Inspired by previous findings from neonatal brains, we proposed that an increase in diffusion magnetic resonance imaging (dMRI) mean diffusivity (MD) should be observed in the cerebral cortex of schizophrenia patients compared with healthy controls, corresponding to lower tissue complexity and potentially a failure to reach cortical maturation. We tested this hypothesis using dMRI data from a Chinese Han population comprising patients from four different hospital sites. Utilizing data-driven methods based on the state-of-the-art tensor-based registration algorithm, significantly increased MD measurements were consistently observed in the cortex of schizophrenia patients across all four sites, despite differences in psychopathology, exposure to antipsychotic medication and scanners used for image acquisition. Specifically, we found increased MD in the limbic system of the schizophrenic brain, mainly involving the bilateral insular and prefrontal cortices. In light of the existing literature, we speculate that this may represent a neuroanatomical signature of the disorder, reflecting microstructural deficits due to developmental abnormalities. Our findings not only provide strong support to the abnormal neurodevelopment theory of schizophrenia, but also highlight an important neuroimaging endophenotype for monitoring the developmental trajectory of high-risk subjects of the disease, thereby facilitating early detection and prevention.
Model Checking and Abstraction
We describe a method for using abstraction to reduce the complexity of temporal logic model checking. The basis of this method is a way of constructing an abstract model of a program without ever examining the corresponding unabstracted model. We show how this abstract model can be used to verify properties of the original program. We have implemented a system based on these techniques, and we demonstrate their practicality using a number of examples, including a pipelined ALU circuit with over 10^1300 states.
Stack-based scheduling of realtime processes
The Priority Ceiling Protocol (PCP) of Sha, Rajkumar and Lehoczky is a policy for locking binary semaphores that bounds priority inversion (i.e., the blocking of a job while a lower priority job executes), and thereby improves schedulability under fixed priority preemptive scheduling. We show how to extend the PCP to handle: multiunit resources, which subsume binary semaphores and reader-writer locks; dynamic priority schemes, such as earliest-deadline-first (EDF), that use static “preemption levels”; sharing of runtime stack space between jobs. These extensions can be applied independently, or together. The Stack Resource Policy (SRP) is a variant of the PCP that incorporates the three extensions mentioned above, plus the conservative assumption that each job may require the use of a shared stack. This avoids unnecessary context switches and allows the SRP to be implemented very simply using a stack. We prove a schedulability result for EDF scheduling with the SRP that is tighter than the one proved previously for EDF with a dynamic version of the PCP. The Minimal SRP (MSRP) is a slightly more complex variant of the SRP, which has similar properties, but imposes less blocking. The MSRP is optimal for stack sharing systems, in the sense that it is the least restrictive policy that strictly bounds priority inversion and prevents deadlock for rate monotonic (RM) and earliest-deadline-first (EDF) scheduling.
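A minimal Python sketch of the SRP preemption test implied above: a ready job may begin execution (and thus preempt) only if its priority is the highest among ready jobs and its preemption level strictly exceeds the current system ceiling, i.e. the maximum ceiling of all resources currently in use. The data structures and numeric values are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int          # e.g. derived from the absolute deadline under EDF
    preemption_level: int  # static level, assigned inversely to the relative deadline

def system_ceiling(in_use_resource_ceilings):
    # Ceiling of the system: highest ceiling among resources currently in use (0 if none).
    return max(in_use_resource_ceilings, default=0)

def may_start(job, ready_jobs, in_use_resource_ceilings):
    highest_priority = all(job.priority >= other.priority for other in ready_jobs)
    return highest_priority and job.preemption_level > system_ceiling(in_use_resource_ceilings)

j1 = Job("J1", priority=10, preemption_level=3)
j2 = Job("J2", priority=5, preemption_level=2)
print(may_start(j1, [j1, j2], in_use_resource_ceilings=[2]))   # True: ceiling 2 < level 3
print(may_start(j1, [j1, j2], in_use_resource_ceilings=[3]))   # False: blocked before it starts
```

Because a job is blocked, if at all, before it begins executing, jobs never interleave on the shared stack, which is what allows the single-stack implementation.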
MHTN: Modal-adversarial Hybrid Transfer Network for Cross-modal Retrieval
Cross-modal retrieval has drawn wide interest for retrieval across different modalities (such as text, image, video, audio, and 3-D model). However, existing methods based on a deep neural network often face the challenge of insufficient cross-modal training data, which limits the training effectiveness and easily leads to overfitting. Transfer learning is usually adopted for relieving the problem of insufficient training data, but it mainly focuses on knowledge transfer only from large-scale datasets as a single-modal source domain (such as ImageNet) to a single-modal target domain. In fact, such large-scale single-modal datasets also contain rich modal-independent semantic knowledge that can be shared across different modalities. Besides, large-scale cross-modal datasets are very labor-consuming to collect and label, so it is significant to fully exploit the knowledge in single-modal datasets for boosting cross-modal retrieval. To achieve the above goal, this paper proposes a modal-adversarial hybrid transfer network (MHTN), which aims to realize knowledge transfer from a single-modal source domain to a cross-modal target domain and learn cross-modal common representation. It is an end-to-end architecture with two subnetworks. First, a modal-sharing knowledge transfer subnetwork is proposed to jointly transfer knowledge from a single modality in the source domain to all modalities in the target domain with a star network structure, which distills modal-independent supplementary knowledge for promoting cross-modal common representation learning. Second, a modal-adversarial semantic learning subnetwork is proposed to construct an adversarial training mechanism between the common representation generator and modality discriminator, making the common representation discriminative for semantics but indiscriminative for modalities to enhance cross-modal semantic consistency during the transfer process. Comprehensive experiments on four widely used datasets show the effectiveness of MHTN.
A Markov-based image forgery detection approach by analyzing CFA artifacts
In the image acquisition device, the light is filtered through a Color Filter Array (CFA), where each pixel captures only one color (Red, Green, or Blue), while the remaining color values are interpolated. This process is known as interpolation, and the artifacts introduced are called CFA or interpolation artifacts. The structure of these artifacts in the image is disturbed when a forgery is introduced into the image. In this paper, a high-order statistical approach is proposed to detect inconsistencies in the artifacts of different parts of the image to expose any forgery present. The Markov Transition Probability Matrix (MTPM) is employed to develop various features that detect the presence or absence of CFA artifacts in a particular region of the image. The Markov random process is applied because it provides enhanced efficiency and reduced computational complexity for the forgery detection model. The algorithm is tested on 2 × 2 pixel blocks of the image, which provides fine-grained results. No prior information about the location of the forged region of the image is required. The algorithm is tested on various images taken from various social networking websites. The proposed forgery detection technique outperforms the existing state-of-the-art techniques for different forgery scenarios, achieving an average accuracy of 90.58%.
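A minimal NumPy sketch of building Markov transition probability features from a difference (prediction-error) array, in the spirit of the MTPM features described above. The difference direction, clipping threshold, channel choice, and region size are illustrative assumptions; the paper derives several such matrices and applies them block-wise.

```python
import numpy as np

def mtpm_features(channel, T=3):
    # Horizontal difference array, clipped to [-T, T] as is usual for Markov features.
    diff = np.clip(channel[:, :-1].astype(int) - channel[:, 1:].astype(int), -T, T)
    cur = diff[:, :-1].ravel() + T                 # shift values into index range 0..2T
    nxt = diff[:, 1:].ravel() + T
    size = 2 * T + 1
    counts = np.zeros((size, size))
    np.add.at(counts, (cur, nxt), 1)               # count transitions between neighboring differences
    row_sums = counts.sum(axis=1, keepdims=True)
    return (counts / np.maximum(row_sums, 1)).ravel()   # (2T+1)^2 transition-probability features

region = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)   # e.g. a green-channel region
features = mtpm_features(region)                   # feed to a classifier of CFA presence/absence
```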
NCRF++: An Open-source Neural Sequence Labeling Toolkit
This paper describes NCRF++, a toolkit for neural sequence labeling. NCRF++ is designed for quick implementation of different neural sequence labeling models with a CRF inference layer. It allows users to build custom model structures through a configuration file, with flexible neural feature design and utilization. Built on PyTorch, the core operations are calculated in batch, making the toolkit efficient with GPU acceleration. It also includes implementations of most state-of-the-art neural sequence labeling models, such as LSTM-CRF, facilitating reproduction of and refinement on those methods.
The ROUGE-AR: A Proposed Extension to the ROUGE Evaluation Metric for Abstractive Text Summarization
Abstractive text summarization refers to summary generation that is based on semantic understanding and is thus not strictly limited to the words found in the source. Despite the success of deep learning on this task, however, text summarization has no reliably effective metric for evaluating performance. In this paper, we describe the standard evaluative measure for abstractive text summarization, the ROUGE metric. We then propose our extension to the standard ROUGE measure, ROUGE-AR. Drawing from methodologies pertaining to latent semantic analysis (LSA) and part-of-speech tagging, the ROUGE-AR metric reweights the final ROUGE output by incorporating both anaphor resolution and other intrinsic methods that are largely absent from non-human text summary evaluation.
Tactical cooperative planning for autonomous highway driving using Monte-Carlo Tree Search
Human drivers use nonverbal communication and anticipation of other drivers' actions to master conflicts occurring in everyday driving situations. Without a high penetration of vehicle-to-vehicle communication, an autonomous vehicle has to be able to understand the intentions of others and share its own intentions with the surrounding traffic participants. This paper proposes a cooperative combinatorial motion planning algorithm without the need for inter-vehicle communication, based on Monte Carlo Tree Search (MCTS). We motivate why MCTS is particularly suited for the autonomous driving domain. Furthermore, adaptations of the MCTS algorithm are presented, for example simultaneous decisions, the usage of the Intelligent Driver Model as a microscopic traffic simulation, and a cooperative cost function. We further show simulation results of merging scenarios in highway-like situations to underline the cooperative nature of the approach.
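A minimal, generic UCT-style MCTS sketch showing the four phases the planner builds on: selection, expansion, simulation, and backpropagation. The toy state below is only a placeholder; the paper's states encode joint maneuvers of several vehicles, its rollouts use the Intelligent Driver Model, and its reward is a cooperative cost function.

```python
import math, random

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.visits, self.value = [], 0, 0.0

    def ucb_child(self, c=1.4):
        # UCB1: exploit high average value, explore rarely visited children.
        return max(self.children, key=lambda ch: ch.value / (ch.visits + 1e-9)
                   + c * math.sqrt(math.log(self.visits + 1) / (ch.visits + 1e-9)))

def mcts(root_state, iterations=500):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        while node.children and not node.state.is_terminal():        # 1. selection
            node = node.ucb_child()
        if not node.state.is_terminal():                             # 2. expansion
            for a in node.state.actions():
                node.children.append(Node(node.state.step(a), node, a))
            node = random.choice(node.children)
        state = node.state                                           # 3. simulation (random rollout)
        while not state.is_terminal():
            state = state.step(random.choice(state.actions()))
        reward = state.reward()
        while node is not None:                                      # 4. backpropagation
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).action       # most-visited first action

class ToyState:                     # placeholder domain: reach a target value within a horizon
    def __init__(self, x=0, t=0): self.x, self.t = x, t
    def actions(self): return [-1, 0, 1]
    def step(self, a): return ToyState(self.x + a, self.t + 1)
    def is_terminal(self): return self.t == 6
    def reward(self): return -abs(self.x - 3)

print("chosen action:", mcts(ToyState()))
```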
Progression of chronic renal failure in a historical group of patients with nephropathic cystinosis
In a historical group of 205 patients with infantile or adolescent cystinosis treated without cysteamine, the rate of deterioration of renal function was analysed retrospectively. Patient survival curves and renal survival data are presented. Longitudinal data of serum creatinine values (n=3280) in 157 patients were plotted for each patient, smoothed by the method of the running medians and grouped into 12 serum creatinine classes. In every patient the age at the last smoothed serum creatinine value observed in each serum creatinine class was determined. These virtual age values were then summarized per serum creatinine class, expressed as median and centiles and plotted, thus describing the “natural” course of the disease. In 9 pairs of affected siblings the rate of progression showed a median difference of about 12 months. Our data describe the “natural” course of nephropathic cystinosis and can be used as a prognostic aid for recently detected patients. The data can also be applied for the assessment of the influence of new therapeutic strategies on the rate of progression of renal failure in cystinotic patients.
Gas-Phase Reactions of Copper Oxide Cluster Cations with Ammonia: Selective Catalytic Oxidation to Nitrogen and Water Molecules.
Reactions of copper oxide cluster cations, CunOm+ (n = 3-7; m ≤ 5), with ammonia, NH3, are studied at near thermal energies using a guided ion beam tandem mass spectrometer. The single-collision reactions of specific clusters such as Cu4O2+, Cu5O3+, Cu6O3+, Cu7O3+, and Cu7O4+ give rise to the release of H2O after NH3 adsorption efficiently and result in the formation of CunOm-1NH+. These CunOm+ clusters commonly have Cu average oxidation numbers of 1.0-1.4. On the other hand, the formation of CunOm-1H2+, i.e., the release of HNO, is dominantly observed for Cu7O5+ with a higher Cu oxidation number. Density functional theory calculations are performed for the reaction Cu5O3+ + NH3 → Cu5O2NH+ + H2O as a typical example of H2O release. The calculations show that this reaction occurs almost thermoneutrally, consistent with the experimental observation. Further, our experimental studies indicate that the multiple-collision reactions of Cu5O3+ and Cu7O4+ with NH3 lead to the production of Cu5+ and Cu7O+, respectively. This suggests that the desirable NH3 oxidation to N2 and H2O proceeds on these clusters.
Prediction of human drug-drug interactions from time-dependent inactivation of CYP3A4 in primary hepatocytes using a population-based simulator.
Time-dependent inactivation (TDI) of human cytochromes P450 3A4 (CYP3A4) is a major cause of clinical drug-drug interactions (DDIs). Human liver microsomes (HLM) are commonly used as an enzyme source for evaluating the inhibition of CYP3A4 by new chemical entities. The inhibition data can then be extrapolated to assess the risk of human DDIs. Using this approach, under- and overpredictions of in vivo DDIs have been observed. In the present study, human hepatocytes were used as an alternative to HLM. Hepatocytes incorporate the effects of other mechanisms of drug metabolism and disposition (i.e., phase II enzymes and transporters) that may modulate the effects of TDI on clinical DDIs. The in vitro potency (K(I) and k(inact)) of five known CYP3A4 TDI drugs (clarithromycin, diltiazem, erythromycin, verapamil, and troleandomycin) was determined in HLM (pooled, n = 20) and hepatocytes from two donors (D1 and D2), and the results were extrapolated to predict in vivo DDIs using a Simcyp population trial-based simulator. Compared with observed DDIs, the predictions derived from HLM appeared to be overestimated. The predictions based on TDI measured in hepatocytes were better correlated with the DDIs (n = 37) observed in vivo (R(2) = 0.601 for D1 and 0.740 for D2) than those from HLM (R(2) = 0.451). In addition, with the use of hepatocytes a greater proportion of the predictions were within a 2-fold range of the clinical DDIs compared with using HLM. These results suggest that DDI predictions from CYP3A4 TDI kinetics in hepatocytes could provide an alternative approach to balance HLM-based predictions that can sometimes substantially overestimate DDIs and possibly lead to erroneous conclusions about clinical risks.
24-Month Data from the BRAVISSIMO: A Large-Scale Prospective Registry on Iliac Stenting for TASC A & B and TASC C & D Lesions.
BACKGROUND To evaluate the 24-month outcome of stenting in Trans-Atlantic Inter-Society Consensus (TASC) A & B and TASC C & D iliac lesions in a controlled setting. METHODS The BRAVISSIMO study is a prospective, nonrandomized, multicenter, multinational, monitored registry including 325 patients with aortoiliac lesions. The end point is the primary patency at 24 months, defined as a target lesion without a hemodynamically significant stenosis on duplex ultrasound (>50%, systolic velocity ratio >2.0). A separate analysis for TASC A & B versus TASC C & D population is performed. RESULTS Between July 2009 and September 2010, 190 patients with TASC A or B and 135 patients with TASC C or D aortoiliac lesions were included. The demographic data were comparable for TASC A & B cohort and TASC C & D cohort. Technical success was 100%. Significantly more balloon-expandable stents were deployed in TASC A & B lesions, and considerably more self-expanding stents were placed in TASC C & D (P = 0.01). The 24-month primary patency rate after 24 months for the total population was 87.9% (88.0% for TASC A, 88.5% for TASC B, 91.9% for TASC C, and 84.8% for TASC D). No statistically significant difference was shown when comparing these groups. The 24-month primary patency rates were 92.1% for patients treated with the self-expanding stent, 85.2% for patients treated with the balloon-expandable stent, and 75.3% for patients treated with a combination of both stents (P = 0.06). Univariate and multivariable regression analyses using Cox proportional hazards model identified only kissing stent configuration (P = 0.0012) and obesity (P = 0.0109) as independent predictors of restenosis (primary patency failure). Interestingly, as all TASC groups enjoyed high levels of patency, neither TASC category nor lesion length was predictive of restenosis. CONCLUSION The 24-month data from this large, prospective, multicenter study confirm that endovascular therapy may be considered the preferred first-line treatment option of iliac lesions, irrespectively of TASC lesion category.
Audio Sample Rate Conversion in FPGAs: An efficient implementation of audio algorithms in programmable logic.
Today, even low-cost FPGAs provide far more computing power than DSPs. Current FPGAs have dedicated multipliers and even DSP multiply/accumulate (MAC) blocks that enable signals to be processed with clock speeds in excess of 550 MHz. Until now, however, these capabilities were rarely needed in audio signal processing. A serial implementation of an audio algorithm working in the kilohertz range uses exactly the same resources required for processing signals in the three-digit megahertz range. Consequently, programmable logic components such as PLDs or FPGAs are rarely used for processing low-frequency signals. After all, the parallel processing of mathematical operations in hardware is of no benefit when compared to an implementation based on classical DSPs; the sampling rates are so low that most serial DSP implementations are more than adequate. In fact, audio applications are characterized by such a high number of multiplications that they previously could
Physiological and pathological roles for microRNAs in the immune system
Mammalian microRNAs (miRNAs) have recently been identified as important regulators of gene expression, and they function by repressing specific target genes at the post-transcriptional level. Now, studies of miRNAs are resolving some unsolved issues in immunology. Recent studies have shown that miRNAs have unique expression profiles in cells of the innate and adaptive immune systems and have pivotal roles in the regulation of both cell development and function. Furthermore, when miRNAs are aberrantly expressed they can contribute to pathological conditions involving the immune system, such as cancer and autoimmunity; they have also been shown to be useful as diagnostic and prognostic indicators of disease type and severity. This Review discusses recent advances in our understanding of both the intended functions of miRNAs in managing immune cell biology and their pathological roles when their expression is dysregulated.
Why Are Autism Spectrum Conditions More Prevalent in Males?
Autism Spectrum Conditions (ASC) are much more common in males, a bias that may offer clues to the etiology of this condition. Although the cause of this bias remains a mystery, we argue that it occurs because ASC is an extreme manifestation of the male brain. The extreme male brain (EMB) theory, first proposed in 1997, is an extension of the Empathizing-Systemizing (E-S) theory of typical sex differences that proposes that females on average have a stronger drive to empathize while males on average have a stronger drive to systemize. In this first major update since 2005, we describe some of the evidence relating to the EMB theory of ASC and consider how typical sex differences in brain structure may be relevant to ASC. One possible biological mechanism to account for the male bias is the effect of fetal testosterone (fT). We also consider alternative biological theories, the X and Y chromosome theories, and the reduced autosomal penetrance theory. None of these theories has yet been fully confirmed or refuted, though the weight of evidence in favor of the fT theory is growing from converging sources (longitudinal amniocentesis studies from pregnancy to age 10 years old, current hormone studies, and genetic association studies of SNPs in the sex steroid pathways). Ultimately, as these theories are not mutually exclusive and ASC is multi-factorial, they may help explain the male prevalence of ASC.
3D skeleton based action recognition by video-domain translation-scale invariant mapping and multi-scale dilated CNN
In this paper, we present an image classification approach to action recognition with 3D skeleton videos. First, we propose a video domain translation-scale invariant image mapping, which transforms the 3D skeleton videos to color images, namely skeleton images. Second, a multi-scale dilated convolutional neural network (CNN) is designed for the classification of the skeleton images. Our multi-scale dilated CNN model could effectively improve the frequency adaptiveness and exploit the discriminative temporal-spatial cues for the skeleton images. Even though the skeleton images are very different from natural images, we show that the fine-tuning strategy still works well. Furthermore, we propose different kinds of data augmentation strategies to improve the generalization and robustness of our method. Experimental results on popular benchmark datasets such as NTU RGB + D, UTD-MHAD, MSRC-12 and G3D demonstrate the superiority of our approach, which outperforms the state-of-the-art methods by a large margin.
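A minimal NumPy sketch of the general skeleton-to-image mapping idea summarized above: a skeleton sequence (frames × joints × 3 coordinates) is linearly rescaled to the 0-255 range and stored as an RGB image whose rows and columns index joints and frames. The array layout, dimensions, and the per-video min-max normalization are assumptions; the paper's specific translation-scale invariant transform chooses the normalization differently.

```python
import numpy as np

def skeleton_to_image(seq):
    """seq: (T frames, J joints, 3 coords) float array -> uint8 image of shape (J, T, 3)."""
    lo = seq.min(axis=(0, 1), keepdims=True)            # per-coordinate minimum over the video
    hi = seq.max(axis=(0, 1), keepdims=True)
    img = 255.0 * (seq - lo) / np.maximum(hi - lo, 1e-8)  # rescale x, y, z into [0, 255]
    return np.transpose(img, (1, 0, 2)).astype(np.uint8)  # joints as rows, frames as columns

seq = np.random.randn(80, 25, 3)                         # e.g. an NTU RGB+D style sequence
image = skeleton_to_image(seq)                           # ready to feed a CNN classifier
```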
Once-per-step control of ankle-foot prosthesis push-off work reduces effort associated with balance during walking
Individuals with below-knee amputation have more difficulty balancing during walking, yet few studies have explored balance enhancement through active prosthesis control. We previously used a dynamical model to show that prosthetic ankle push-off work affects both sagittal and frontal plane dynamics, and that appropriate step-by-step control of push-off work can improve stability. We hypothesized that this approach could be applied to a robotic prosthesis to partially fulfill the active balance requirements of human walking, thereby reducing balance-related activity and associated effort for the person using the device. We conducted experiments on human participants (N = 10) with simulated amputation. Prosthetic ankle push-off work was varied on each step in ways expected to either stabilize, destabilize or have no effect on balance. Average ankle push-off work, known to affect effort, was kept constant across conditions. Stabilizing controllers commanded more push-off work on steps when the mediolateral velocity of the center of mass was lower than usual at the moment of contralateral heel strike. Destabilizing controllers enforced the opposite relationship, while a neutral controller maintained constant push-off work regardless of body state. A random disturbance to landing foot angle and a cognitive distraction task were applied, further challenging participants’ balance. We measured metabolic rate, foot placement kinematics, center of pressure kinematics, distraction task performance, and user preference in each condition. We expected the stabilizing controller to reduce active control of balance and balance-related effort for the user, improving user preference. The best stabilizing controller lowered metabolic rate by 5.5% (p = 0.003) and 8.5% (p = 0.02), and step width variability by 10.0% (p = 0.009) and 10.7% (p = 0.03) compared to conditions with no control and destabilizing control, respectively. Participants tended to prefer stabilizing controllers. These effects were not due to differences in average push-off work, which was unchanged across conditions, or to average gait mechanics, which were also unchanged. Instead, benefits were derived from step-by-step adjustments to prosthesis behavior in response to variations in mediolateral velocity at heel strike. Once-per-step control of prosthetic ankle push-off work can reduce both active control of foot placement and balance-related metabolic energy use during walking.
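A minimal sketch of the once-per-step control law described above: the commanded push-off work for the upcoming step is adjusted from the mediolateral center-of-mass velocity measured at contralateral heel strike, with more work commanded when that velocity is lower than usual. The gain, nominal values, units, and limits are illustrative assumptions, not the study's parameters.

```python
def pushoff_work_command(v_ml_at_heel_strike, v_ml_nominal=0.15, w_nominal=0.20,
                         gain=0.5, w_min=0.05, w_max=0.40):
    """Return the push-off work command for the next step (placeholder units)."""
    w = w_nominal + gain * (v_ml_nominal - v_ml_at_heel_strike)   # stabilizing relationship
    return min(max(w, w_min), w_max)                              # respect hardware limits

# A destabilizing controller would flip the sign of `gain`; a neutral controller uses gain = 0.
print(pushoff_work_command(0.10))   # slower-than-usual ML velocity -> more push-off work
```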
Comparison Between Analog Radio-Over-Fiber and Sigma Delta Modulated Radio-Over-Fiber
With the continuously increasing demand for cost-effective broadband wireless access, radio-over-fiber (RoF) is gaining more and more momentum. Various techniques already exist, using analog (ARoF) or digitized (DRoF) radio signals over fiber, each with their own advantages and disadvantages. By transmitting a sigma-delta modulated signal over fiber (SDoF), an immunity to impairments similar to that of DRoF can be obtained while maintaining the low complexity of ARoF. This letter describes a detailed experimental comparison between ARoF and SDoF that quantifies the improvement in linearity and error vector magnitude (EVM) of SDoF over ARoF. The experiments were carried out using a 16-QAM constellation with a baud rate from 20 to 125 MBd modulated on a central carrier frequency of 1 GHz. The sigma-delta modulator runs at 8 or 13.5 Gbps. A high-speed vertical-cavity surface-emitting laser (VCSEL) operating at 850 nm is used to transmit the signal over 200 m of multimode fiber. The receiver amplifies the electrical signal and subsequently filters it to recover the original RF signal. Compared with ARoF, improvements exceeding 40 dB were measured on the third-order intermodulation products when SDoF was employed, and the EVM improves by 2.4 to 7.1 dB.
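As an aside, the core idea behind SDoF — trading amplitude resolution for a high-rate binary stream whose quantization noise is shaped out of band — can be sketched with a first-order low-pass sigma-delta modulator. The Python snippet below is only a toy illustration with assumed parameters (oversampling ratio, tone amplitude); the modulator used in the letter runs at 8 or 13.5 Gbps and is considerably more elaborate.

import numpy as np

def sigma_delta_1bit(x):
    """First-order sigma-delta: encode x in [-1, 1] as a +/-1 bit stream."""
    bits = np.zeros_like(x)
    integrator = 0.0
    feedback = 0.0
    for n, sample in enumerate(x):
        integrator += sample - feedback          # accumulate the error signal
        feedback = 1.0 if integrator >= 0 else -1.0   # 1-bit quantizer
        bits[n] = feedback
    return bits

# Oversampled tone in normalized units (oversampling ratio of 64, assumed).
fs, f0 = 64.0, 1.0
t = np.arange(4096) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t)
bits = sigma_delta_1bit(x)
# Low-pass filtering `bits` recovers an approximation of x, since the
# quantization noise has been pushed toward high frequencies.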
Use of lipid-modulating drugs in complicated course of coronary heart disease
We present the results of a study of lipid-modulating drugs (pravastatin, atorvastatin, simvastatin, and the fibrate gemfibrozil) in complicated coronary heart disease (acute coronary syndrome without ST elevation, chronic heart failure). In acute coronary syndrome, statins produced a positive effect on some of the studied parameters, while in heart failure only the safety of short-term statin therapy was demonstrated.
Granular solids, liquids, and gases
Victor Hugo suggested the possibility that patterns created by the movement of grains of sand are in no small part responsible for the shape and feel of the natural world in which we live. No one can seriously doubt that granular materials, of which sand is but one example, are ubiquitous in our daily lives. They play an important role in many of our industries, such as mining, agriculture, and construction. They clearly are also important for geological processes where landslides, erosion, and, on a related but much larger scale, plate tectonics determine much of the morphology of Earth. Practically everything that we eat started out in a granular form, and all the clutter on our desks is often so close to the angle of repose that a chance perturbation will create an avalanche onto the floor. Moreover, Hugo hinted at the extreme sensitivity of the macroscopic world to the precise motion or packing of the individual grains. We may nevertheless think that he has overstepped the bounds of common sense when he related the creation of worlds to the movement of simple grains of sand. By the end of this article, we hope to have shown such an enormous richness and complexity to granular motion that Hugo’s metaphor might no longer appear farfetched and could have a literal meaning: what happens to a pile of sand on a table top is relevant to processes taking place on an astrophysical scale. Granular materials are simple: they are large conglomerations of discrete macroscopic particles. If they are noncohesive, then the forces between them are only repulsive so that the shape of the material is determined by external boundaries and gravity. If the grains are dry, any interstitial fluid, such as air, can often be neglected in determining many, but not all, of the flow and static properties of the system. Yet despite this seeming simplicity, a granular material behaves differently from any of the other familiar forms of matter—solids, liquids, or gases—and should therefore be considered an additional state of matter in its own right. In this article, we shall examine in turn the unusual behavior that granular material displays when it is considered to be a solid, liquid, or gas. For example, a sand pile at rest with a slope lower than the angle of repose, as in Fig. 1(a), behaves like a solid: the material remains at rest even though gravitational forces create macroscopic stresses on its surface. If the pile is tilted several degrees above the angle of repose, grains start to flow, as seen in Fig. 1(b). However, this flow is clearly not that of an ordinary fluid because it only exists in a boundary layer at the pile’s surface with no movement in the bulk at all. (Slurries, where grains are mixed with a liquid, have a phenomenology equally complex as the dry powders we shall describe in this article.) There are two particularly important aspects that contribute to the unique properties of granular materials: ordinary temperature plays no role, and the interactions between grains are dissipative because of static friction and the inelasticity of collisions. We might at first be tempted to view any granular flow as that of a dense gas since gases, too, consist of discrete particles with negligible cohesive forces between them. In contrast to ordinary gases, however, the energy scale kBT is insignificant here. The relevant energy scale is the potential energy mgd of a grain of mass m raised by its own diameter d in the Earth’s gravity g . 
For typical sand, this energy is at least 10^12 times kBT at room temperature. Because kBT is irrelevant, ordinary thermodynamic arguments become useless. For example, many studies have shown (Williams, 1976; Rosato et al., 1987; Fan et al., 1990; Jullien et al., 1992; Duran et al., 1993; Knight et al., 1993; Savage, 1993; Zik et al., 1994; Hill and Kakalios, 1994; Metcalfe et al., 1995) that vibrations or rotations of a granular material will induce particles of different sizes to separate into different regions of the container. Since there are no attractive forces between
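A quick back-of-the-envelope check of this energy-scale argument, using assumed values for a typical quartz sand grain (1 mm diameter, density 2.6 g/cm^3), reproduces a ratio of order 10^12:

import math

d = 1e-3                       # grain diameter: 1 mm (assumed)
rho = 2.6e3                    # grain density in kg/m^3 (quartz, assumed)
g = 9.81                       # gravitational acceleration, m/s^2
k_B = 1.380649e-23             # Boltzmann constant, J/K
T = 300.0                      # room temperature, K

m = rho * math.pi / 6 * d**3   # mass of a spherical grain
E_grain = m * g * d            # energy to raise the grain by its own diameter
E_thermal = k_B * T

print(f"mgd   = {E_grain:.2e} J")
print(f"k_B T = {E_thermal:.2e} J")
print(f"ratio = {E_grain / E_thermal:.1e}")   # ~3e12, i.e. at least 10^12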
Moving beyond poverty: neighborhood structure, social processes, and health.
We investigate the impact of neighborhood structural characteristics, social organization, and culture on self-rated health in a large, cross-sectional sample of urban adults. Findings indicate that neighborhood affluence is a more powerful predictor of health status than poverty, above and beyond individual demographic background, socioeconomic status, health behaviors, and insurance coverage. Moreover, neighborhood affluence and residential stability interact in their association with health. When the prevalence of affluence is low, residential stability is negatively associated with health. Neighborhood affluence also accounts for a substantial proportion of the racial gap in health status. Finally, collective efficacy is a significant positive predictor of health but does not mediate the effects of structural factors.
Security Without Identification: Transaction Systems to Make Big Brother Obsolete
The large-scale automated transaction systems of the near future can be designed to protect the privacy and maintain the security of both individuals and organizations.
Sustainability: climate change
The effects of climate change caused by past and present emissions will impact the way we work and live in Scotland over the coming decades.
Neural Machine Translation with Gumbel-Greedy Decoding
Previous neural machine translation models used heuristic search algorithms (e.g., beam search) in order to avoid solving the maximum a posteriori problem over translation sentences at test time. In this paper, we propose Gumbel-Greedy Decoding, which trains a generative network to predict translations under a trained model. We solve this problem using the Gumbel-Softmax reparameterization, which makes our generative network differentiable and trainable through standard stochastic gradient methods. We empirically demonstrate that our proposed model is effective for generating sequences of discrete words.
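For reference, the Gumbel-Softmax reparameterization at the heart of this approach can be sketched as follows. The snippet only shows the sampling mechanics with NumPy on a toy vocabulary; in the actual model the same computation is written in an autodiff framework so gradients flow through the relaxed samples, and the decoder details of Gumbel-Greedy Decoding are omitted.

import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Relaxed sample from softmax(logits) obtained by adding Gumbel noise."""
    rng = np.random.default_rng() if rng is None else rng
    gumbel = -np.log(-np.log(rng.uniform(1e-20, 1.0, size=np.shape(logits))))
    y = (np.asarray(logits) + gumbel) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))   # stable softmax
    return y / y.sum(axis=-1, keepdims=True)

# Toy vocabulary of 5 tokens; smaller tau pushes samples closer to one-hot.
logits = np.array([1.2, 0.3, -0.5, 2.0, 0.1])
print(gumbel_softmax_sample(logits, tau=0.5))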
Deep Network Guided Proof Search
Deep learning techniques lie at the heart of several significant AI advances in recent years including object recognition and detection, image captioning, machine translation, speech recognition and synthesis, and playing the game of Go. Automated first-order theorem provers can aid in the formalization and verification of mathematical theorems and play a crucial role in program analysis, theory reasoning, security, interpolation, and system verification. Here we suggest deep learning based guidance in the proof search of the theorem prover E. We train and compare several deep neural network models on the traces of existing ATP proofs of Mizar statements and use them to select processed clauses during proof search. We give experimental evidence that with a hybrid, two-phase approach, deep learning based guidance can significantly reduce the average number of proof search steps while increasing the number of theorems proved. Using a few proof guidance strategies that leverage deep neural networks, we have found first-order proofs of 7.36% of the first-order logic translations of the Mizar Mathematical Library theorems that did not previously have ATP generated proofs. This increases the ratio of statements in the corpus with ATP generated proofs from 56% to 59%.
Public Journalism Challenges to Curriculum and Instruction.
Advocates of public journalism widely agree that this conception of journalism differs markedly from conventional journalism. Not only do the underlying goals of public journalism differ from conventional journalism, but the way it is practiced contrasts as well. If this is indeed the case, then one would expect public journalism scholars to be debating, among other issues, the type of formal education and training required for students to become competent public journalists. Surprisingly, this has not been the case. Only a few books and articles have been published. To date, no sustained debate on the important topic of public journalism and journalism education has occurred. This is all the more surprising and unfortunate considering that courses in public journalism are increasingly being offered at colleges and universities across the United States (Gibbs, 1997; Whitehouse and Clapp, 2000). Theory and practice Widely associated with the theoretical work of New York University Professor Jay Rosen and the writings of former Wichita (Kansas) Eagle Editor Davis Merritt, the emergence of public journalism in the late 1980s and early 1990s may perhaps best be explained as a reaction to perceived flaws in the practice of conventional journalism (Merritt, 1994, 1995a, 1995b, 1996, 1998; Rosen & Merritt, 1994; Rosen, 1991, 1993, 1994, 1996, 1997, 1998, 1999a, 1999b; Rosen & Merritt, 1998). Central to a public conception of journalism is the argument that the primary political responsibility of journalists is to help increase civic commitment to, and citizen participation in, democratic processes. In Rosen's (1993) words, to be "public" in their orientation, journalists must "play an active role in supporting civic involvement, improving discourse and debate, and creating a climate in which the affairs of the community" can be aired and deliberated (p. 3). This requires, in turn, that journalists abandon their current preoccupation with "government as the actor to which [they] need to be attentive [and] people as the acted on, who [they] might occasionally ask to comment but who otherwise have no role" to play (Merritt, 1998, p. 77). Journalists should, as Rosen (1994, p. 376) argues, "focus on citizens as actors within rather than spectators to" democratic processes (Carey, 1987) by helping them articulate what has been referred to variously as the "citizen's agenda," the "public agenda," and the "people's agenda." According to Rosen's corpus of theory explanations, Lambeth, Meyer, & Thorson's (1998) Assessing Public Journalism anthology (Rosen, 1998, p. 46; 1999a, 1999b), public journalism consists of three dimensions simultaneously. Public journalism is: (a) "an argument about the proper task of the press," which is a topic that has been covered widely in the scholarly literature (Glasser, 1999a; Haas, 1999; Rosen, 1999a), (b) "a set of practices - experiments ... that are slowly spreading through American journalism," which are topics that have gained proportionally little discussion and which I seek to address in this paper, and (c) "a movement of people and institutions," supported by various organizations, notably the American Press Institute, the Kettering Foundation, the Knight Foundation, the Pew Center for Civic Journalism, the Poynter Institute for Media Studies, and the Project on Public Life and the Press. Public journalism can also be defined by example. Since 1988, when the first public journalism campaign - as such - was launched by the Columbus (Ga.) 
Ledger-Enquirer (Rosen, 1991), more than 300 public journalism campaigns have been conducted across the United States (Austin, 1997). While these campaigns have included work done across news media -- newspapers, television, radio, and the Internet, either separately or collaboratively (Denton & Thorson, 1998; Thorson & Lambeth, 1995; Thorson, Ognianova, Coyle, & Lambeth, 1998) - the majority of campaigns have been confined to small and medium-sized newspapers (Merritt & Rosen, 1998). …
A generic camera calibration method for fish-eye lenses
Fish-eye lenses are convenient in computer vision applications where a very wide angle of view is needed. However, their use for measurement purposes is limited by the lack of an accurate, generic, and easy-to-use calibration procedure. We hence propose a generic camera model for cameras equipped with fish-eye lenses and a method for calibrating such cameras. Calibration is possible using only one view of a planar calibration object, but more views should be used for better results. The proposed calibration method was evaluated with real images, and the obtained results are promising. The calibration software is made publicly available on the author's Web page.
Grover’s quantum searching algorithm is optimal
I show that for any number of oracle lookups up to about (π/4)√N, Grover’s quantum searching algorithm gives the maximal possible probability of finding the desired element. I explain why this is also true for quantum algorithms which use measurements during the computation. I also show that, unfortunately, quantum searching cannot be parallelized better than by assigning different parts of the search space to independent quantum computers. 1 Quantum searching. Imagine we have N cases of which only one fulfills our conditions, e.g. we have a function which gives 1 for only one out of N possible input values and gives 0 otherwise. Often an analysis of the algorithm for calculating the function will allow us to quickly find the input value for which the output is 1. Here we consider the case where we do not know better than to repeatedly calculate the function without looking at the algorithm, e.g. because the function is calculated in a black-box subroutine into which we are not allowed to look. In computer science this is called an oracle. Here I consider only oracles which give 1 for exactly one input. Quantum searching for the case with several inputs which give 1, and even with an unknown number of such inputs, is treated in [4]. Obviously, on a classical computer we have to query the oracle on average N/2 times before we find the answer. Grover [1] has given a quantum algorithm which can solve the problem in about (π/4)√N steps. Bennett et al. [3] have shown that asymptotically no quantum algorithm can solve the problem in less than a number of steps proportional to √N. Boyer et al. [4] have improved this result to show that, e.g., for a 50% success probability no quantum algorithm can do better than only a few percent faster than Grover’s algorithm. I improve
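The (π/4)√N figure can be made concrete with the standard analysis for a single marked element: after k oracle calls the success probability is sin^2((2k+1)θ) with sin θ = 1/√N. The short sketch below evaluates this at the near-optimal k; it restates the textbook calculation rather than the optimality proof given in the paper.

import math

def grover_success_probability(N, k):
    """Probability of measuring the marked item after k Grover iterations
    (single marked element, standard analysis)."""
    theta = math.asin(1.0 / math.sqrt(N))
    return math.sin((2 * k + 1) * theta) ** 2

N = 1_000_000
k_opt = math.floor(math.pi / 4 * math.sqrt(N))    # about (pi/4) * sqrt(N) lookups
print(k_opt, grover_success_probability(N, k_opt))   # ~785 iterations, probability ~1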
Hierarchical Quantized Representations for Script Generation
Scripts define knowledge about how everyday scenarios (such as going to a restaurant) are expected to unfold. One of the challenges to learning scripts is the hierarchical nature of the knowledge. For example, a suspect arrested might plead innocent or guilty, and a very different track of events is then expected to happen. To capture this type of information, we propose an autoencoder model with a latent space defined by a hierarchy of categorical variables. We utilize a recently proposed vector quantization based approach, which allows continuous embeddings to be associated with each latent variable value. This permits the decoder to softly decide what portions of the latent hierarchy to condition on by attending over the value embeddings for a given setting. Our model effectively encodes and generates scripts, outperforming a recent language modeling-based method on several standard tasks and achieving substantially lower perplexity scores than that previous method.
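The vector-quantization step that associates continuous embeddings with discrete latent values can be sketched generically as a nearest-codebook lookup, as below. This is only the standard VQ operation with made-up dimensions; the latent hierarchy and the attention over value embeddings described in the abstract are not reproduced.

import numpy as np

def vector_quantize(z, codebook):
    """Map each continuous vector in z (n, d) to its nearest codebook entry.

    Returns the discrete indices and the quantized embeddings.
    """
    # Squared Euclidean distance between every z and every codebook vector.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))      # 16 codes of dimension 8 (toy sizes)
z = rng.normal(size=(4, 8))              # 4 encoder outputs
indices, z_q = vector_quantize(z, codebook)
print(indices)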
RCN issues red alert to the nursing profession.
Professor Alan Glasper discusses the Royal College of Nursing's red-light warning to the profession following the publication of the long-awaited Francis report.
Modelling mixed-integer optimisation problems in constraint logic programming
Constraint logic programming (CLP) has become a promising new technology for solving complex combinatorial problems. In this paper, we investigate how (constraint) logic programming can support the modelling part of solving mixed-integer optimisation problems. First, we show that the basic functionality of algebraic modelling languages can be realised very easily in a pure logic programming system like Prolog and that, even without using constraints, various additional features are available. Then we focus on the constraint solving facilities offered by CLP systems. In particular, we explain how the constraint solver of the constraint logic programming language CLP(PB) can be used in modelling 0-1 problems.
Location Privacy Violation via GPS-Agnostic Smart Phone Car Tracking
Smart phones nowadays are equipped with global positioning system (GPS) chips to enable navigation and location-based services. A malicious app with access to GPS data can easily track the person who carries the smart phone. People may disable the GPS module and turn it on only when necessary to protect their location privacy. However, in this paper, we demonstrate that an attacker is still able to track a person by using the embedded magnetometer sensor in the victim's smart phone, even when the GPS module is disabled all the time. Moreover, this attack neither requests location-related user permissions at installation, nor does its operation rely on wireless signals like WiFi positioning or suffer from signal propagation loss. Only the turning angles of a car, measured by the magnetometer sensor of the driver's smart phone, are utilized. Without loss of generality, we focus on car tracking, since cars are popular transportation tools in developed countries, where smart phones are commonly used. Inspired by the intuition that a car may exhibit different turning angles at different road intersections, we find that an attacker can match car turning angles to a map to infer the actual path that the driver takes. We address technical challenges in car turn angle extraction, map database construction, and path matching algorithm design to make this attack practical and efficient. We also perform an evaluation using real-world driving paths to verify the relationship between the number of turns and the time cost of the matching algorithm. The results show that it is possible for an attacker to precisely pinpoint the actual path when the driving path includes 11 turns or more. Further simulations demonstrate the attack with larger selected local areas.
Coupling Interactions and Performance: Predicting Team Performance from Thin Slices of Conflict
Do teams show stable conflict interaction patterns that predict their performance hours, weeks, or even months in advance? Two studies demonstrate that two of the same patterns of emotional interaction dynamics that distinguish functional from dysfunctional marriages also distinguish high- from low-performance design teams in the field, up to 6 months in advance, with up to 91% accuracy, and based on just 15 minutes of interaction data: Group Affective Balance, the balance of positive to negative affect during an interaction, and Hostile Affect, the expression of a set of specific negative behaviors, were both found to be predictors of team performance. The research also contributes a novel method to obtain a representative sample of a team's conflict interaction. Implications for our understanding of design work in teams and for the design of groupware and feedback intervention systems are discussed.
Design semantics of connections in a smart home environment
Does Bidirectional Traffic Do More Harm Than Good in LoRaWAN Based LPWA Networks?
The need for low power, long range and low cost connectivity to meet the requirements of IoT applications has led to the emergence of Low Power Wide Area (LPWA) networking technologies. The promise of these technologies to wirelessly connect massive numbers of geographically dispersed devices at a low cost continues to attract a great deal of attention in the academic and commercial communities. Several rollouts are already underway even though the performance of these technologies is yet to be fully understood. In light of these developments, tools to carry out 'what-if analyses' and predeployment studies are needed to understand the implications of choices that are made at design time. While there are several promising technologies in the LPWA space, this paper specifically focuses on the LoRa/LoRaWAN technology. In particular, we present LoRaWANSim, a simulator which extends the LoRaSim tool to add support for the LoRaWAN MAC protocol, which employs bidirectional communication. This is a salient feature not available in any other LoRa simulator. Subsequently, we provide vital insights into the performance of LoRaWAN based networks through extensive simulations. In particular, we show that the achievable network capacity reported in earlier studies is quite optimistic. The introduction of downlink traffic can have a significant impact on the uplink throughput. The number of transmit attempts recommended in the LoRaWAN specification may not always be the best choice. We also highlight the energy consumption versus reliability trade-offs associated with the choice of number of retransmission attempts.
A Neural Network Model for Low-Resource Universal Dependency Parsing
Accurate dependency parsing requires large treebanks, which are only available for a few languages. We propose a method that takes advantage of shared structure across languages to build a mature parser using less training data. We propose a model for learning a shared “universal” parser that operates over an interlingual continuous representation of language, along with language-specific mapping components. Compared with supervised learning, our methods give a consistent 8-10% improvement across several treebanks in low-resource simulations.
The effect of temperature and solution pH on the nucleation of tetragonal lysozyme crystals.
Part of the challenge of macromolecular crystal growth for structure determination is obtaining crystals with a volume suitable for x-ray analysis. In this respect an understanding of the effect of solution conditions on macromolecule nucleation rates is advantageous. This study investigated the effects of supersaturation, temperature, and pH on the nucleation rate of tetragonal lysozyme crystals. Batch crystallization plates were prepared at given solution concentrations and incubated at set temperatures over 1 week. The number of crystals per well with their size and axial ratios were recorded and correlated with solution conditions. Crystal numbers were found to increase with increasing supersaturation and temperature. The most significant variable, however, was pH; crystal numbers changed by two orders of magnitude over the pH range 4.0-5.2. Crystal size also varied with solution conditions, with the largest crystals obtained at pH 5.2. Having optimized the crystallization conditions, we prepared a batch of crystals under the same initial conditions, and 50 of these crystals were analyzed by x-ray diffraction techniques. The results indicate that even under the same crystallization conditions, a marked variation in crystal properties exists.
Evaluating Ontology-Mapping Tools: Requirements and Experience
The appearance of a large number of ontology tools may leave a user looking for an appropriate tool overwhelmed and uncertain on which tool to choose. Thus evaluation and comparison of these tools is important to help users determine which tool is best suited for their tasks. However, there is no “one size fits all” comparison framework for ontology tools: different classes of tools require very different comparison frameworks. For example, ontology-development tools can easily be compared to one another since they all serve the same task: define concepts, instances, and relations in a domain. Tools for ontology merging, mapping, and alignment however are so different from one another that direct comparison may not be possible. They differ in the type of input they require (e.g., instance data or no instance data), the type of output they produce (e.g., one merged ontology, pairs of related terms, articulation rules), modes of interaction and so on. This diversity makes comparing the performance of mapping tools to one another largely meaningless. We present criteria that partition the set of such tools in smaller groups allowing users to choose the set of tools that best fits their tasks. We discuss what resources we as a community need to develop in order to make performance comparisons within each group of merging and mapping tools useful and effective. These resources will most likely come as results of evaluation experiments of stand-alone tools. As an example of such an experiment, we discuss our experiences and results in evaluating PROMPT, an interactive ontology-merging tool. Our experiment produced some of the resources that we can use in more general evaluation. However, it has also shown that comparing the performance of different tools can be difficult since human experts do not agree on how ontologies should be merged, and we do not yet have a good enough metric for comparing ontologies. 1 Ontology-Mapping Tools Versus Ontology-Development Tools Consider two types of ontology tools: (1) tools for developing ontologies and (2) tools for mapping, aligning, or merging ontologies. By ontology-development tools (which we will call development tools in the paper) we mean ontology editors that allow users to define new concepts, relations, and instances. These tools usually have capabilities for importing and extending existing ontologies. Development tools may include graphical browsers, search capabilities, and constraint checking. Protégé-2000 [17], OntoEdit [19], OilEd [2], WebODE [1], and Ontolingua [7] are some examples of development tools. Tools for mapping, aligning, and merging ontologies (which we will call mapping tools) are the tools that help users find similarities and differences between source ontologies. Mapping tools either identify potential correspondences automatically or provide the environment for the users to find and define these correspondences, or both. Mapping tools are often extensions of development tools. Mapping tool and algorithm examples include PROMPT[16], ONION [13], Chimaera [11], FCA-Merge [18], GLUE [5], and OBSERVER [12]. Even though theories on how to evaluate either type of tools are not well articulated at this point, there are already several frameworks for evaluating ontologydevelopment tools. For example, Duineveld and colleagues [6] in their comparison experiment used different development tools to represent the same domain ontology. 
Members of the Ontology-environments SIG in the OntoWeb initiative designed an extensive set of criteria for evaluating ontology-development tools and applied these criteria to compare a number of projects. Some of the aspects that these frameworks compare include: – interoperability with other tools and the ability to import and export ontologies in different representation languages; – expressiveness of the knowledge model; – scalability and extensibility; – availability and capabilities of inference services; – usability of the tools. Let us turn to the second class of ontology tools: tools for mapping, aligning, or merging ontologies. It is tempting to reuse many of the criteria from evaluation of development tools. For example, expressiveness of the underlying language is important and so is scalability and extensibility. We need to know if a mapping tool can work with ontologies from different languages. However, if we look at the mapping tools more closely, we see that their comparison and evaluation must be very different from the comparison and evaluation of development tools. All the ontology-development tools have very similar inputs and the desired outputs: we have a domain, possibly a set of ontologies to reuse, and a set of requirements for the ontology, and we need to use a tool to produce an ontology of that domain satisfying the requirements. Unlike the ontology-development tools, the 1 http://delicias.dia.fi.upm.es/ontoweb/sig-tools/ ontology-mapping tools vary with respect to the precise task that they perform, the inputs on which they operate and the outputs that they produce. First, the tasks for which the mapping tools are designed, differ greatly. On the one hand, all the tools are designed to find similarities and differences between source ontologies in one way or another. In fact, researchers have suggested a uniform framework for describing and analyzing this information regardless of what the final task is [3, 10]. On the other hand, from the user’s point of view the tools differ greatly in what tasks this analysis of similarities and differences supports. For example, Chimaera and PROMPT allow users to merge source ontologies into a new ontology that includes concepts from both sources. The output of ONION is a set of articulation rules between two ontologies; these rules define what the similarities and differences are. The articulation rules can later be used for querying and other tasks. The task of GLUE, AnchorPROMPT [14] and FCA-Merge is to provide a set of pairs of related concepts with some certainty factor associated with each pair. Second, different mapping tools rely on different inputs: Some tools deal only with class hierarchies of the sources and are agnostic in their merging algorithms about slots or instances (e.g., Chimaera). Other tools use not only classes but also slots and value restrictions in their analysis (e.g., PROMPT). Other tools rely in their algorithms on the existence of instances in each of the source ontologies (e.g., GLUE). Yet another set of tools require not only that instances are present, but also that source ontologies share a set of instances (e.g., FCA-Merge). Some tools work independently and produce suggestions to the user at the end, allowing the user to analyze the suggestions (e.g., GLUE, FCAMerge). Some tools expect that the source ontologies follow a specific knowledgerepresentation paradigm (e.g., Description Logic for OBSERVER). 
Other tools rely heavily on interaction with the user and base their analysis not only on the source ontologies themselves but also on the merging or alignment steps that the user performs (e.g., PROMPT, Chimaera). Third, since the tasks that the mapping tools support differ greatly, the interaction between a user and a tool is very different from one tool to another. Some tools provide a graphical interface which allows users to compare the source ontologies visually, and accept or reject the results of the tool analysis (e.g., PROMPT, Chimaera, ONION), the goal of other tools is to run the algorithms which find correlations between the source ontologies and output the results to the user in a text file or on the terminal–the users must then use the results outside the tool itself. The goal of this paper is to start a discussion on a framework for evaluating ontology-mapping tools that would account for this great variety in underlying assumptions and requirements. We argue that many of the tools cannot be compared directly with one another because they are so different in the tasks that they support. We identify the criteria for determining the groups of tools that can be compared directly, define what resources we need to develop to make such comparison possible and discuss our experiences in evaluating our merging tool, PROMPT, as well as the results of this evaluation. 2 Requirements for Evaluating Mapping Tools Before we discuss the evaluation requirements for mapping tools, we must answer the following question which will certainly affect the requirements: what is the goal of such potential evaluation? It is tempting to say “find the best tool.” However, as we have just discussed, given the diversity in the tasks that the tools support, their modes of interaction, the input data they rely on, it is impossible to compare the tools to one another and to find one or even several measures to identify the “best” tool. Therefore, we suggest that the questions driving such evaluation must be user-oriented. A user may ask either what is the best tool for his task or whether a particular tool is good enough for his task. Depending on what the user’s source ontologies are, how much manual work he is willing to put in, how important the precision of the results is, one or another tool will be more appropriate. Therefore, the first set of evaluation criteria are pragmatic criteria. These criteria include but are not limited to the following: Input requirements What elements from the source ontologies does the tool use? Which of these elements does the tool require? This information may include: concept names, class hierarchy, slot definitions, facet values, slot values, instances. Does the tool require that source ontologies use a particular knowledge-representation paradigm? Level of user interaction Does the tool perform the comparison in a “batch mode,” presenting the results at the end, or is it an interactive tool where intermediate results are analyzed by the user, and the tool uses the feedback for further analysis? Type o
Bathing in reeking wounds: The liberal arts, beauty, and war
A historic dialectic exists between the beautiful and the bestial. The bestial destroys the beautiful, but in a bloody miracle, the beautiful emerges from the womb of the bestial, the ‘terrible beauty’ of which the poet W. B. Yeats wrote. The liberal arts, so often thought to dwell in a remote ivory tower, embody this dialectic. Wars and disasters have spurred their evolution. Even more important, the liberal arts are at once the dialectic's most energetic and sensitive explorers. Shakespeare’s gory tragedy about war and warriors, Macbeth, is a springboard for such explorations, dramatizing a dialectic between war and love, destruction and redemption, savagery and poetry. We bathe in reeking wounds. Because of their diversity, liberal artisans, practitioners of the liberal arts, are now uniquely prepared to engage with this dialectic. They can also inoculate us against the diseases of the allure of war, blood lust, and propaganda.
Efficient algorithms for supersingular isogeny Diffie-Hellman
We propose a new suite of algorithms that significantly improve the performance of supersingular isogeny Diffie-Hellman (SIDH) key exchange. Subsequently, we present a full-fledged implementation of SIDH that is geared towards the 128-bit quantum and 192-bit classical security levels. Our library is the first constant-time SIDH implementation and is more than 2.5 times faster than the previous best (non-constant-time) SIDH software. The high speeds in this paper are driven by compact, inversion-free point and isogeny arithmetic and fast SIDH-tailored field arithmetic: on an Intel Haswell processor, generating ephemeral public keys takes 51 million cycles for Alice and 59 million cycles for Bob while computing the shared secret takes 47 million and 57 million cycles, respectively. The size of public keys is only 751 bytes, which is significantly smaller than most of the popular post-quantum key exchange alternatives. Ultimately, the size and speed of our software illustrates the strong potential of SIDH as a post-quantum key exchange candidate and we hope that these results encourage a wider cryptanalytic effort.
ML-CNN: A novel deep learning based disease named entity recognition architecture
In this paper, we present a deep learning based disease named entity recognition architecture. First, the word-level embedding, character-level embedding and lexicon feature embedding are concatenated as the input. Then multiple convolutional layers are stacked over the input to extract useful features automatically. Finally, a multiple-label strategy, introduced here for the first time, is applied to the output layer to capture the correlation information between neighboring labels. Experimental results on both the NCBI and CDR corpora show that ML-CNN achieves state-of-the-art performance.
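A minimal PyTorch sketch of the input and convolutional stages described above is given below. The embedding dimensions, number of filters, and label count are invented for illustration, and the multiple-label output strategy that gives ML-CNN its name is only hinted at by the per-token logits.

import torch
import torch.nn as nn

class MLCNNSketch(nn.Module):
    """Concatenate word, character and lexicon embeddings per token, then
    stack 1-D convolutions over the sequence (illustrative dimensions)."""
    def __init__(self, word_dim=100, char_dim=25, lex_dim=10, n_labels=3):
        super().__init__()
        in_dim = word_dim + char_dim + lex_dim
        self.convs = nn.Sequential(
            nn.Conv1d(in_dim, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.out = nn.Linear(128, n_labels)

    def forward(self, word_emb, char_emb, lex_emb):
        x = torch.cat([word_emb, char_emb, lex_emb], dim=-1)   # (B, T, D)
        x = self.convs(x.transpose(1, 2)).transpose(1, 2)      # conv over T
        return self.out(x)                                     # per-token logits

model = MLCNNSketch()
w, c, l = torch.randn(2, 30, 100), torch.randn(2, 30, 25), torch.randn(2, 30, 10)
print(model(w, c, l).shape)   # torch.Size([2, 30, 3])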
A Unifying Framework for Version Control in a CAD Environment
Version control is one of the most important functions which need to be supported in integrated computer-aided design (CAD) systems. In this paper we address a broad spectrum of semantic and operational issues in version control for a public/private distributed architecture of CAD systems. The research issues we address include the semantics of version creation and manipulation, version naming and name binding, and version change notification. We develop solutions to these issues under a unifying framework, and discuss implementation and application interface issues.
Coexistence of sleep and feeding disturbances in young children.
OBJECTIVE Behavioral insomnia and feeding difficulties are 2 prevalent conditions in healthy young children. Despite similarities in nature, etiology, prevalence, and age distribution, the association between these 2 common disorders in young children has not been examined thus far. PATIENTS AND METHODS Children aged 6 to 36 months with either behavioral insomnia or feeding disorders were recruited. Children aged 6 to 36 months who attended the well-care clinics were recruited and served as controls. Sleep and feeding were evaluated by using a parental questionnaire. RESULTS Six hundred eighty-one children were recruited. Fifty-eight had behavioral insomnia, 76 had feeding disorders, and 547 were controls. The mean age was 17.0 ± 7.6 months. Parents of children with feeding disorders considered their child's sleep problematic significantly more frequently compared with controls (37% vs 16%, P = .0001 [effect size (ES): 0.66]). They reported shorter nocturnal sleep duration and delayed sleep time compared with controls (536 ± 87 vs 578 ± 88 minutes, P = .0001, and 9:13 ± 0.55 PM vs 8:26 ± 1.31 PM, P = .003). Parents of children with behavioral insomnia described their child's feeding as "a problem" more frequently compared with controls (26% vs 9%, P = .001 [ES: 0.69]). They reported being more concerned about their child's growth (2.85 ± 1.1 vs 2.5 ± 1.0, P = .03) and reported higher scores of food refusal compared with controls (3.38 ± 0.54 vs 3.23 ± 0.44, P = .04). CONCLUSIONS Problematic sleep and feeding behaviors tend to coexist in early childhood. Increased awareness of clinicians to this coexistence may allow early intervention and improve outcome.
Dichloroacetate enhances performance and reduces blood lactate during maximal cycle exercise in chronic obstructive pulmonary disease.
RATIONALE Impaired skeletal muscle function contributes to exercise limitation in patients with chronic obstructive pulmonary disease (COPD). This is characterized by reduced mitochondrial adenosine triphosphate generation, and greater reliance on nonmitochondrial energy production. Dichloroacetate (DCA) infusion activates the muscle pyruvate dehydrogenase complex (PDC) at rest, reducing inertia in mitochondrial energy delivery at the onset of exercise and diminishing anaerobic energy production. OBJECTIVES This study aimed to determine whether DCA infusion enhanced mitochondrial energy delivery during symptom-limited maximal exercise, thereby reducing exercise-induced lactate and ammonia accumulation and, consequently, improving exercise performance in patients with COPD. METHODS A randomized, double-blind crossover design was used. Eighteen subjects with COPD performed maximal cycle exercise after an intravenous infusion of DCA (50 mg/kg body mass) or saline (control). Exercise work output was determined, and blood lactate and ammonia concentrations were measured at rest, at 1 and 2 minutes of exercise, at peak exercise, and 2 minutes postexercise. MEASUREMENTS AND MAIN RESULTS DCA infusion reduced peak blood lactate concentration by 20% (mean [SE] difference, 0.48 [0.11] mmol/L, P < 0.001) and peak blood ammonia concentration by 15% (mean [SE] difference, 14.2 [2.9] μmol/L, P < 0.001) compared with control. After DCA, peak exercise workload improved significantly by a mean (SE) of 8 (1) W (P < 0.001) and peak oxygen consumption by 1.2 (0.5) ml/kg/minute (P = 0.03) compared with control. CONCLUSIONS We have shown that a pharmacologic intervention known to activate muscle PDC can reduce blood lactate and ammonia accumulation during exercise and improve maximal exercise performance in subjects with COPD. Skeletal muscle PDC activation may be a target for pharmacologic intervention in the management of exercise intolerance in COPD.
Broadband Transition Design From Microstrip to CPW
This letter presents a broadband transition between microstrip and CPW located on opposite layers of the substrate. The transition is based on two pairs of microstrip-to-slotline transitions. In order to widen the bandwidth of the transition, a short-ended parallel microstrip stub is added. A demonstrator transition has been designed, fabricated and measured. Results show that a frequency range of 2.05 to 9.96 GHz (referenced to a 10 dB return loss) is obtained.
Comparing gold nano-particle enhanced radiotherapy with protons, megavoltage photons and kilovoltage photons: a Monte Carlo simulation.
Gold nanoparticles (GNPs) have shown potential to be used as a radiosensitizer for radiation therapy. Despite extensive research activity to study GNP radiosensitization using photon beams, only a few studies have been carried out using proton beams. In this work Monte Carlo simulations were used to assess the dose enhancement of GNPs for proton therapy. The enhancement effect was compared between a clinical proton spectrum, a clinical 6 MV photon spectrum, and a kilovoltage photon source similar to those used in many radiobiology lab settings. We showed that the mechanism by which GNPs can lead to dose enhancements in radiation therapy differs when comparing photon and proton radiation. The GNP dose enhancement using protons can be up to 14 and is independent of proton energy, while the dose enhancement is highly dependent on the photon energy used. For the same amount of energy absorbed in the GNP, interactions with protons, kVp photons and MV photons produce similar doses within several nanometers of the GNP surface, and differences are below 15% for the first 10 nm. However, secondary electrons produced by kilovoltage photons have the longest range in water as compared to protons and MV photons, e.g. they cause a dose enhancement 20 times higher than the one caused by protons 10 μm away from the GNP surface. We conclude that GNPs have the potential to enhance radiation therapy depending on the type of radiation source. Proton therapy can be enhanced significantly only if the GNPs are in close proximity to the biological target.
Frequency-domain Compressive Channel Estimation for Frequency-Selective Hybrid mmWave MIMO Systems
Channel estimation is useful in millimeter wave (mmWave) MIMO communication systems. Channel state information allows optimized designs of precoders and combiners under different metrics such as mutual information or signal-to-interference-plus-noise ratio (SINR). At mmWave, MIMO precoders and combiners are usually hybrid, since this architecture provides a means to trade off power consumption and achievable rate. Channel estimation is challenging when using these architectures, however, since there is no direct access to the outputs of the different antenna elements in the array. The MIMO channel can only be observed through the analog combining network, which acts as a compression stage for the received signal. Most prior work on channel estimation for hybrid architectures assumes a frequency-flat mmWave channel model. In this paper, we consider a frequency-selective mmWave channel and propose compressed-sensing-based strategies to estimate the channel in the frequency domain. We evaluate different algorithms and compute their complexity to expose the complexity-overhead-performance trade-offs as compared to those of previous approaches.
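As a generic illustration of the compressed-sensing viewpoint, the sketch below recovers a sparse vector from compressed measurements with orthogonal matching pursuit (OMP). The measurement matrix and dimensions are toy placeholders and do not model the hybrid analog combining network or the frequency-selective channel structure evaluated in the paper.

import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: recover a sparse x from y ~= Phi @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1], dtype=Phi.dtype)
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        support.append(j)
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

# Toy example: a 3-sparse vector observed through a random 40x128 matrix.
rng = np.random.default_rng(1)
Phi = rng.normal(size=(40, 128)) / np.sqrt(40)
x_true = np.zeros(128)
x_true[[5, 42, 97]] = [1.0, -0.7, 0.4]
y = Phi @ x_true + 0.01 * rng.normal(size=40)
print(np.flatnonzero(np.abs(omp(Phi, y, 3)) > 0.1))   # expect [5, 42, 97]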
An Efficient Algorithm for Calculating the Exact Hausdorff Distance
The Hausdorff distance (HD) between two point sets is a commonly used dissimilarity measure for comparing point sets and image segmentations. Especially when very large point sets are compared using the HD, for example when evaluating magnetic resonance volume segmentations, or when the underlying applications are based on time critical tasks, like motion detection, then the computational complexity of HD algorithms becomes an important issue. In this paper we propose a novel efficient algorithm for computing the exact Hausdorff distance. In a runtime analysis, the proposed algorithm is demonstrated to have nearly-linear complexity. Furthermore, it has efficient performance for large point set sizes as well as for large grid size; performs equally for sparse and dense point sets; and finally it is general without restrictions on the characteristics of the point set. The proposed algorithm is tested against the HD algorithm of the widely used national library of medicine insight segmentation and registration toolkit (ITK) using magnetic resonance volumes with extremely large size. The proposed algorithm outperforms the ITK HD algorithm both in speed and memory required. In an experiment using trajectories from a road network, the proposed algorithm significantly outperforms an HD algorithm based on R-Trees.
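For context, a straightforward exact directed Hausdorff distance with the classical early-break optimization looks as follows; it is useful as a correctness baseline, but it is quadratic in the worst case and is not the nearly-linear algorithm proposed in the paper.

import numpy as np

def directed_hausdorff(A, B):
    """Directed Hausdorff distance h(A, B) = max_a min_b ||a - b||.

    The inner scan over B stops as soon as a point closer than the current
    running maximum is found, since such a point cannot raise the maximum.
    """
    cmax = 0.0
    for a in A:
        cmin = np.inf
        for b in B:
            d = np.linalg.norm(a - b)
            if d < cmax:          # early break: this a cannot change the result
                cmin = d
                break
            cmin = min(cmin, d)
        cmax = max(cmax, cmin)
    return cmax

def hausdorff(A, B):
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])
print(hausdorff(A, B))   # 1.0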
NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks
“How much energy is consumed for an inference made by a convolutional neural network (CNN)?” With the increased popularity of CNNs deployed on a wide spectrum of platforms (from mobile devices to workstations), the answer to this question has drawn significant attention. From lengthening the battery life of mobile devices to reducing the energy bill of a datacenter, it is important to understand the energy efficiency of CNNs during serving, before actually training the model. In this work, we propose NeuralPower: a layer-wise predictive framework based on sparse polynomial regression for predicting the serving energy consumption of a CNN deployed on any GPU platform. Given the architecture of a CNN, NeuralPower provides an accurate prediction and breakdown of power and runtime across all layers in the whole network, helping machine learners quickly identify power, runtime, or energy bottlenecks. We also propose the “energy-precision ratio” (EPR) metric to guide machine learners in selecting an energy-efficient CNN architecture that better trades off energy consumption and prediction accuracy. The experimental results show that the prediction accuracy of the proposed NeuralPower outperforms the best published model to date, yielding an improvement in accuracy of up to 68.5%. We also assess the accuracy of predictions at the network level, by predicting the runtime, power, and energy of state-of-the-art CNN architectures, achieving an average accuracy of 88.24% in runtime, 88.34% in power, and 97.21% in energy. We comprehensively corroborate the effectiveness of NeuralPower as a powerful framework for machine learners by testing it on different GPU platforms and deep learning software tools.
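The modeling idea — sparse polynomial regression per layer — can be sketched with scikit-learn as polynomial feature expansion followed by an L1-penalized fit. The layer features and the synthetic runtime target below are invented for illustration and differ from the feature sets and measurements used by NeuralPower.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical per-layer features (e.g. batch size, input/output channels,
# kernel size, spatial size); the real NeuralPower feature sets differ.
rng = np.random.default_rng(0)
X = rng.uniform(1, 64, size=(200, 5))
# Synthetic "runtime" depending on a couple of feature products plus noise.
y = 0.05 * X[:, 1] * X[:, 2] + 0.3 * X[:, 4] + rng.normal(0.0, 1.0, size=200)

# Sparse polynomial regression: expand to polynomial terms, then let the
# L1 penalty keep only a handful of them.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      Lasso(alpha=1.0, max_iter=100_000))
model.fit(X, y)
lasso = model.named_steps["lasso"]
print("non-zero terms:", np.count_nonzero(lasso.coef_), "of", lasso.coef_.size)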
Kinematic assessment of paediatric forefoot varus.
Forefoot varus is a static deformity not easy to assess clinically. If left uncorrected, it is thought to affect both the posture of the patient and the kinematics of their lower limbs, and even the spine. Three-dimensional gait assessment could help to confirm forefoot varus diagnosis and provide objective evidence of the functional adaptive mechanisms postulated in the literature. The recently available Oxford Foot Model was used, simultaneously with a conventional lower limb model, to compare the kinematics of 10 forefoot varus children (aged 8-13) and 11 healthy controls (aged 7-13) during gait. Data acquisition was performed using a six-camera motion capture system, with a total of 27 reflective markers. A patient-by-patient comparison with the controls suggested several compensation patterns, although statistically significant differences were found only for the mean values of hip adduction/abduction during load response and midstance and hip flexion/extension during pre-swing. A multivariate statistical technique was used to determine which of the measured variables better separated both groups. The best discriminant model presented here includes hip adduction/abduction during load response, hindfoot/tibia inversion/eversion during pre-swing, hindfoot/tibia dorsiflexion/plantar flexion during load response and arch height during midstance, providing a rate of correct classification of 81%. The results could not fully confirm the kinematic relationships suggested in the literature. The small degree of forefoot varus deformity present in the patient group could have prevented other variables from becoming discriminant. A larger patient sample would help determine the possible different compensatory patterns to different degrees of forefoot varus.
Learning from streaming data with concept drift and imbalance: an overview
The primary focus of machine learning has traditionally been on learning from data assumed to be sufficient and representative of the underlying fixed, yet unknown, distribution. Such restrictions on the problem domain paved the way for development of elegant algorithms with theoretically provable performance guarantees. As is often the case, however, real-world problems rarely fit neatly into such restricted models. For instance class distributions are often skewed, resulting in the “class imbalance” problem. Data drawn from non-stationary distributions is also common in real-world applications, resulting in the “concept drift” or “non-stationary learning” problem which is often associated with streaming data scenarios. Recently, these problems have independently experienced increased research attention, however, the combined problem of addressing all of the above mentioned issues has enjoyed relatively little research. If the ultimate goal of intelligent machine learning algorithms is to be able to address a wide spectrum of real-world scenarios, then the need for a general framework for learning from, and adapting to, a non-stationary environment that may introduce imbalanced data can be hardly overstated. In this paper, we first present an overview of each of these challenging areas, followed by a comprehensive review of recent research for developing such a general framework.
Medial reward and lateral non-reward orbitofrontal cortex circuits change in opposite directions in depression.
The first brain-wide voxel-level resting state functional connectivity neuroimaging analysis of depression is reported, with 421 patients with major depressive disorder and 488 control subjects. Resting state functional connectivity between different voxels reflects correlations of activity between those voxels and is a fundamental tool in helping to understand the brain regions with altered connectivity and function in depression. One major circuit with altered functional connectivity involved the medial orbitofrontal cortex Brodmann area 13, which is implicated in reward, and which had reduced functional connectivity in depression with memory systems in the parahippocampal gyrus and medial temporal lobe, especially involving the perirhinal cortex Brodmann area 36 and entorhinal cortex Brodmann area 28. The Hamilton Depression Rating Scale scores were correlated with weakened functional connectivity of the medial orbitofrontal cortex Brodmann area 13. Thus in depression there is decreased reward-related and memory system functional connectivity, and this is related to the depressed symptoms. The lateral orbitofrontal cortex Brodmann area 47/12, involved in non-reward and punishing events, did not have this reduced functional connectivity with memory systems. Second, the lateral orbitofrontal cortex Brodmann area 47/12 had increased functional connectivity with the precuneus, the angular gyrus, and the temporal visual cortex Brodmann area 21. This enhanced functional connectivity of the non-reward/punishment system (Brodmann area 47/12) with the precuneus (involved in the sense of self and agency), and the angular gyrus (involved in language) is thus related to the explicit affectively negative sense of the self, and of self-esteem, in depression. A comparison of the functional connectivity in 185 depressed patients not receiving medication and 182 patients receiving medication showed that the functional connectivity of the lateral orbitofrontal cortex Brodmann area 47/12 with these three brain areas was lower in the medicated than the unmedicated patients. This is consistent with the hypothesis that the increased functional connectivity of the lateral orbitofrontal cortex Brodmann area 47/12 is related to depression. Relating the changes in cortical connectivity to our understanding of the functions of different parts of the orbitofrontal cortex in emotion helps to provide new insight into the brain changes related to depression.
BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning
Multi-task learning allows the sharing of useful information between multiple related tasks. In natural language processing several recent approaches have successfully leveraged unsupervised pre-training on large amounts of data to perform well on various tasks, such as those in the GLUE benchmark (Wang et al., 2018a). These results are based on fine-tuning on each task separately. We explore the multi-task learning setting for the recent BERT (Devlin et al., 2018) model on the GLUE benchmark, and how to best add task-specific parameters to a pre-trained BERT network, with a high degree of parameter sharing between tasks. We introduce new adaptation modules, PALs or ‘projected attention layers’, which use a low-dimensional multi-head attention mechanism, based on the idea that it is important to include layers with inductive biases useful for the input domain. By using PALs in parallel with BERT layers, we match the performance of fine-tuned BERT on the GLUE benchmark with ≈7 times fewer parameters, and obtain state-of-the-art results on the Recognizing Textual Entailment dataset.
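A rough PyTorch sketch of a PAL-style adapter is shown below: the hidden state is projected to a low dimension, passed through multi-head self-attention, projected back up, and added in parallel to the shared BERT layer's output. The sizes and wiring are illustrative assumptions, not the exact configuration from the paper.

import torch
import torch.nn as nn

class ProjectedAttentionLayer(nn.Module):
    """Sketch of a 'projected attention layer' (PAL)-style adapter."""
    def __init__(self, hidden_size=768, proj_size=204, num_heads=12):
        super().__init__()
        self.down = nn.Linear(hidden_size, proj_size)
        self.attn = nn.MultiheadAttention(proj_size, num_heads, batch_first=True)
        self.up = nn.Linear(proj_size, hidden_size)

    def forward(self, hidden_states):
        z = self.down(hidden_states)
        z, _ = self.attn(z, z, z)            # low-dimensional self-attention
        return self.up(z)                    # to be added to the layer output

# Example: add the PAL output in parallel to a shared layer's output.
pal = ProjectedAttentionLayer()
x = torch.randn(2, 16, 768)                  # (batch, seq_len, hidden)
shared_layer_output = x                      # stand-in for the BERT layer output
combined = shared_layer_output + pal(x)
print(combined.shape)                        # torch.Size([2, 16, 768])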
Polygenic Influence on Educational Attainment: New evidence from The National Longitudinal Study of Adolescent to Adult Health.
Recent studies have begun to uncover the genetic architecture of educational attainment. We build on this work using genome-wide data from siblings in the National Longitudinal Study of Adolescent to Adult Health (Add Health). We measure the genetic predisposition of siblings to educational attainment using polygenic scores. We then test how polygenic scores are related to social environments and educational outcomes. In Add Health, genetic predisposition to educational attainment is patterned across the social environment: participants with higher polygenic scores were more likely to grow up in socially advantaged families. Even so, the previously published genetic associations appear to be causal. Among pairs of siblings, the sibling with the higher polygenic score typically went on to complete more years of schooling than their lower-scored co-sibling. We found subtle differences between the sibling fixed-effects estimates of the genetic effect and estimates based on unrelated individuals.
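As a sketch of how such a sibling comparison can be estimated, a family fixed-effects regression can be approximated by demeaning the polygenic score and the outcome within families and regressing one difference on the other. The column names and toy data below are hypothetical, not Add Health variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per sibling, with a family id,
# a polygenic score for educational attainment, and years of schooling.
df = pd.DataFrame({
    "family_id": [1, 1, 2, 2, 3, 3],
    "pgs_ea":    [0.4, -0.1, 1.2, 0.8, -0.5, 0.0],
    "edu_years": [16, 14, 18, 17, 12, 13],
})

# Family fixed effects via within-family demeaning: sibling differences in
# the polygenic score predict sibling differences in schooling.
within = df.groupby("family_id")[["pgs_ea", "edu_years"]].transform(lambda x: x - x.mean())
fit = smf.ols("edu_years ~ pgs_ea", data=within).fit()
print(fit.params["pgs_ea"])  # within-family (sibling) estimate of the PGS effect
```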
Using Augmented Reality to Plan Virtual Construction Worksite
Current construction worksite layout planning relies heavily on 2D paper media, where worksite planners sketch the future layout adjacent to their real environment. This traditional approach turns out to be ineffective and prone to error, because only experienced and well-trained planners are able to generate an effective layout design with paper sketches. Augmented Reality (AR), as a new user interface technology, introduces a completely new perspective for construction worksite planning. This paper discusses related AR work and issues in construction and describes the concept and prototype of an AR-based construction planning tool, AR Planner, with virtual element sets and a tangible interface. The focus of the paper is to identify and integrate worksite planning rules into the AR Planner with the purpose of intelligently preventing potential planning errors and process inefficiency, thus maximizing overall productivity. Future work includes refining and verifying the AR Planner in realistic projects.
Perturbing Slivers in 3D Delaunay Meshes
Isotropic tetrahedron meshes generated by Delaunay triangulations are known to contain a majority of well-shaped tetrahedra, as well as spurious sliver tetrahedra. Because slivers hamper the stability of numerical simulations, we aim to remove them while keeping the triangulation Delaunay for simplicity. The existing solution, which explicitly perturbs the slivers through random vertex relocation and Delaunay connectivity updates, is very effective but slow. In this paper we present a perturbation algorithm which favors deterministic over random perturbation; the added value is improved efficiency and effectiveness. Our experimental study applies the proposed algorithm to meshes obtained by Delaunay refinement as well as to carefully optimized meshes.
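To make the notion of a sliver concrete, the sketch below computes a simple shape measure for a tetrahedron (volume normalized by the cube of the RMS edge length), which is near its maximum for well-shaped tetrahedra and close to zero for slivers; a perturbation scheme would relocate a vertex of any tetrahedron falling below a threshold and re-check after updating the Delaunay connectivity. The measure and the example values are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np
from itertools import combinations

def sliver_measure(p0, p1, p2, p3):
    """Shape measure: tetrahedron volume normalized by RMS edge length cubed.

    Well-shaped tetrahedra score near the value of a regular tetrahedron;
    slivers (nearly flat tetrahedra with no short edge) score close to zero.
    """
    pts = np.array([p0, p1, p2, p3], dtype=float)
    vol = abs(np.linalg.det(pts[1:] - pts[0])) / 6.0
    edges = [np.linalg.norm(pts[i] - pts[j]) for i, j in combinations(range(4), 2)]
    l_rms = np.sqrt(np.mean(np.square(edges)))
    return vol / l_rms ** 3

# A nearly flat (sliver-like) tetrahedron scores far lower than a regular one.
regular = sliver_measure([0, 0, 0], [1, 0, 0], [0.5, np.sqrt(3) / 2, 0],
                         [0.5, np.sqrt(3) / 6, np.sqrt(2.0 / 3.0)])
sliver = sliver_measure([0, 0, 0], [1, 0, 0], [0, 1, 0], [0.5, 0.5, 1e-3])
print(regular, sliver)
```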
Latent-space Physics: Towards Learning the Temporal Evolution of Fluid Flow
We propose a method for the data-driven inference of temporal evolutions of physical functions with deep learning. More specifically, we target fluid flow problems, and we propose a novel LSTM-based approach to predict the changes of the pressure field over time. The central challenge in this context is the high dimensionality of Eulerian space-time data sets. We demonstrate for the first time that dense 3D+time functions of a physics system can be predicted within the latent spaces of neural networks, and we arrive at a neural-network-based simulation algorithm with significant practical speed-ups. We highlight the capabilities of our method with a series of complex liquid simulations and with a set of single-phase buoyancy simulations. With a set of trained networks, our method is more than two orders of magnitude faster than a traditional pressure solver. Additionally, we present and discuss a series of detailed evaluations of the different components of our algorithm.
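The following PyTorch-style sketch shows the general shape of such a latent-space predictor: given a window of latent codes produced by a separate (assumed) convolutional autoencoder over the pressure field, an LSTM predicts the next code, which can then be rolled forward autoregressively. The layer sizes and window length are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class LatentLSTM(nn.Module):
    """Predicts the next latent code of a pressure field from a history window.

    The encoder/decoder mapping 3D pressure fields to latent codes is assumed
    to exist separately (e.g., a convolutional autoencoder).
    """
    def __init__(self, latent_dim=1024, hidden_dim=512):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, latent_dim)

    def forward(self, latent_history):
        # latent_history: (batch, time_window, latent_dim)
        h, _ = self.lstm(latent_history)
        return self.out(h[:, -1])  # next latent code: (batch, latent_dim)

# Usage sketch: roll the prediction forward in time.
model = LatentLSTM()
window = torch.randn(4, 6, 1024)          # 6 encoded frames per sample
next_code = model(window)                 # predicted frame 7
window = torch.cat([window[:, 1:], next_code.unsqueeze(1)], dim=1)
```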
The Essential Nature of Product Traceability and its Relation to Agile Approaches
This is a discussion of the essential features of product development traceability maps, relating requirements, architecture, functional models, components, and tests as a set of order-type hierarchies and their cross-links. The paper lays out the structure of these ideal traceability relationships that define the essence of the product under development. The importance of the trace relationships to the product is clarified, and the abandonment of traceability in the Agile approach is then discussed. Following that, we examine a way to transform between the synthetic canonical narrative (story) representations that appear in the product backlog and the traditionally separate hierarchical form of the product's trace structure. The fact that it is possible to transform back and forth between the canonical narrative and traditional hierarchical representations of trace structures, and the fact that trace structures can be produced in a ‘just in time’ fashion that evolves during product development, demonstrate that these trace structures can be used in both an Agile and a Lean fashion within the development process. We also show that when the trace structure is produced outside the narrative representation, it can have the additional benefit of helping to determine the precedence order of development so that rework can be avoided. The lack of an extrinsic external trace structure of the product, which gives access to its intelligibility, is in fact a form of technical debt. Thus, traditional trace structures using this model can be seen as an essential tool for product owners to produce sound and coherent development narratives and for structuring and prioritizing the backlog in the Agile and Lean approaches to software and systems development.
Hardware for Machine Learning: Challenges and Opportunities (Invited Paper)
Machine learning plays a critical role in extracting meaningful information out of the zettabytes of sensor data collected every day. For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance, portable/wearable electronics); in other applications, the goal is to take immediate action based on the data (e.g., robotics/drones, self-driving cars, smart Internet of Things). For many of these applications, local embedded processing near the sensor is preferred over the cloud due to privacy or latency concerns, or limitations in the communication bandwidth. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required such that the processing can be adapted for different applications or environments (e.g., updating the weights and model in the classifier). In many applications, machine learning often involves transforming the input data into a higher dimensional space, which, along with programmable weights, increases data movement and consequently energy consumption. In this paper, we discuss how these challenges can be addressed at various levels of hardware design, ranging from architecture and hardware-friendly algorithms to mixed-signal circuits and advanced technologies (including memories and sensors).
The relationship between dimensions of love, personality, and relationship length.
The present study examined the associations among participant demographics, personality factors, love dimensions, and relationship length. In total, 16,030 participants completed an internet survey assessing Big Five personality factors, Sternberg's three love dimensions (intimacy, passion, and commitment), and the length of time that they had been involved in a relationship. Results of structural equation modeling (SEM) showed that participant age was negatively associated with passion and positively associated with intimacy and commitment. In addition, the Big Five factor of Agreeableness was positively associated with all three love dimensions, whereas Conscientiousness was positively associated with intimacy and commitment. Finally, passion was negatively associated with relationship length, whereas commitment was positively correlated with relationship length. SEM results further showed that there were minor differences in these associations for women and men. Given the large sample size, our results reflect stable associations between personality factors and love dimensions. The present results may have important implications for relationship and marital counseling. Limitations of this study and further implications are discussed.
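As a rough sketch of these associations, the reported paths can be approximated with separate regressions (a simplification of the full structural equation model). The variable names and stand-in data below are hypothetical, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in data for age, two Big Five traits, Sternberg's love
# dimensions, and relationship length. A full SEM would fit all paths jointly;
# separate regressions sketch the same associations.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.uniform(18, 60, n),
    "agreeableness": rng.normal(0, 1, n),
    "conscientiousness": rng.normal(0, 1, n),
    "intimacy": rng.normal(0, 1, n),
    "passion": rng.normal(0, 1, n),
    "commitment": rng.normal(0, 1, n),
    "relationship_length": rng.uniform(1, 240, n),
})

# Demographics and personality predicting each love dimension.
for outcome in ["intimacy", "passion", "commitment"]:
    fit = smf.ols(f"{outcome} ~ age + agreeableness + conscientiousness", data=df).fit()
    print(outcome, fit.params.round(3).to_dict())

# Love dimensions predicting relationship length.
length_fit = smf.ols("relationship_length ~ intimacy + passion + commitment", data=df).fit()
print(length_fit.params.round(3).to_dict())
```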
X-ray imaging physics for nuclear medicine technologists. Part 2: X-ray interactions and image formation.
The purpose of this 4-part series is to review: (i) the basic principles of x-ray production, (ii) x-ray interactions and data capture/conversion, (iii) acquisition/creation of the CT image, and (iv) operational details of a modern multislice CT scanner integrated with a PET scanner. In part 1, the production and characteristics of x-rays were reviewed. In this article, the principles of x-ray interactions and image formation are discussed, in preparation for a general review of CT (part 3) and a more detailed investigation of PET/CT scanners in part 4.
AVFI: Fault Injection for Autonomous Vehicles
Autonomous vehicle (AV) technology is rapidly becoming a reality on U.S. roads, offering the promise of improvements in traffic management, safety, and the comfort and efficiency of vehicular travel. With this increasing popularity and ubiquitous deployment, resilience has become a critical requirement for public acceptance and adoption. Recent studies into the resilience of AVs have shown that though the AV systems are improving over time, they have not reached human levels of automation. Prior work in this area has studied the safety and resilience of individual components of the AV system (e.g., testing of neural networks powering the perception function). However, methods for holistic end-to-end resilience assessment of AV systems are still non-existent.
Poisson surface reconstruction
We show that surface reconstruction from oriented points can be cast as a spatial Poisson problem. This Poisson formulation considers all the points at once, without resorting to heuristic spatial partitioning or blending, and is therefore highly resilient to data noise. Unlike radial basis function schemes, our Poisson approach allows a hierarchy of locally supported basis functions, and therefore the solution reduces to a well conditioned sparse linear system. We describe a spatially adaptive multiscale algorithm whose time and space complexities are proportional to the size of the reconstructed model. Experimenting with publicly available scan data, we demonstrate reconstruction of surfaces with greater detail than previously achievable.
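For readers who want to try this class of method, screened Poisson reconstruction is exposed by common geometry-processing libraries; the sketch below assumes the Open3D library and a hypothetical input scan, and is not the authors' original implementation.

```python
import open3d as o3d  # assumes the Open3D library is installed

# Load an oriented point set; the file name is hypothetical.
pcd = o3d.io.read_point_cloud("scan.ply")

# Poisson reconstruction needs oriented normals; estimate them if missing.
if not pcd.has_normals():
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# 'depth' controls the octree resolution of the adaptive solver: higher values
# recover more surface detail at the cost of time and memory.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("reconstruction.ply", mesh)
```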
The Effect of Pictorial Illusion on Prehension and Perception
The present study examined the effect of a size-contrast illusion (Ebbinghaus or Titchener Circles Illusion) on visual perception and the visual control of grasping movements. Seventeen right-handed participants picked up and, on other trials, estimated the size of "poker-chip" disks, which functioned as the target circles in a three-dimensional version of the illusion. In the estimation condition, subjects indicated how big they thought the target was by separating their thumb and forefinger to match the target's size. After initial viewing, no visual feedback from the hand or the target was available. Scaling of grip aperture was found to be strongly correlated with the physical size of the disks, while manual estimations of disk size were biased in the direction of the illusion. Evidently, grip aperture is calibrated to the true size of an object, even when perception of object size is distorted by a pictorial illusion, a result that is consistent with recent suggestions that visually guided prehension and visual perception are mediated by separate visual pathways.
[Skew deviation. Strabismological diagnosis and treatment alternatives].
BACKGROUND We undertook this study to analyze diagnostic and treatment alternatives in patients with skew deviation (SD). METHODS This is a prospective, observational and longitudinal study of patients with SD. The study took place in a third-level medical center during the period from September 2007 to May 2008. Strabismological examination, multidisciplinary diagnosis and treatment alternatives were analyzed. RESULTS Ten patients presenting with SD were studied. Underlying diagnoses were multiple sclerosis, arteriovenous malformation, epilepsy, hydrocephalus, ischemic encephalopathy, cortical atrophy, hypoplasia of the corpus callosum and thalamic hemorrhage. Psychomotor retardation was present in 80%. Other diagnoses were Cogan apraxia, Parinaud syndrome, see-saw nystagmus, Foville syndrome, and hemiplegic alterations. Associated strabismus included exotropia (5), esotropia (3), hypertropia (2), and dissociated vertical deviation (1). Lesions of the II, III and VII cranial nerves were found. CONCLUSIONS A complete strabismological study allows better diagnosis of the underlying lesion and of relapsing disease, so that treatment can be tailored to each patient. Optical rehabilitation and botulinum toxin applications are especially indicated.
Internet Addiction Symptoms, Disordered Eating, and Body Image Avoidance
Internet addiction is an increasing concern among young adults. Self-presentational theory posits that the Internet offers a context in which individuals are able to control their image. Little is known about body image and eating concerns among pathological Internet users. The aim of this study was to explore the association between Internet addiction symptoms, body image esteem, body image avoidance, and disordered eating. A sample of 392 French young adults (68 percent women) completed an online questionnaire assessing time spent online, Internet addiction symptoms, disordered eating, and body image avoidance. Fourteen men (11 percent) and 26 women (9.7 percent) reported Internet addiction. Body image avoidance was associated with Internet addiction symptoms among both genders. Controlling for body-mass index, Internet addiction symptoms and body image avoidance were both significant predictors of disordered eating among women. These findings support the self-presentational theory of Internet addiction and suggest that body image avoidance is an important factor.
Traction Control for a Rocker-Bogie Robot with Wheel-Ground Contact Angle Estimation
A method for kinematics modeling of a six-wheel Rocker-Bogie mobile robot is described in detail. The forward kinematics is derived by using wheel Jacobian matrices in conjunction with wheel-ground contact angle estimation. The inverse kinematics obtains the wheel velocities and steering angles from the desired forward velocity and turning rate of the robot. Traction control is also developed to improve traction by comparing information from onboard sensors with wheel velocities to minimize wheel slip. Finally, a simulation of a small robot using rocker-bogie suspension has been performed under two surface conditions: climbing a slope and traversing a ditch.
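A flat-ground simplification of the inverse kinematics described above can be written directly: each wheel's commanded spin rate and steering angle follow from the desired body velocity and turning rate via the rigid-body velocity of its contact point. The wheel layout, radius, and function names below are illustrative assumptions and omit the wheel-ground contact angle estimation that the paper adds for rough terrain.

```python
import numpy as np

# Hypothetical wheel positions (x forward, y left) in metres for a six-wheel
# rocker-bogie layout; actual geometry is vehicle-specific.
WHEELS = {
    "front_left":  ( 0.30,  0.25), "front_right":  ( 0.30, -0.25),
    "middle_left": ( 0.00,  0.28), "middle_right": ( 0.00, -0.28),
    "rear_left":   (-0.30,  0.25), "rear_right":   (-0.30, -0.25),
}

def inverse_kinematics(v, omega, wheel_radius=0.07):
    """Planar inverse kinematics: wheel spin rates (rad/s) and steering angles
    (rad) from a desired forward velocity v (m/s) and turning rate omega (rad/s).
    """
    commands = {}
    for name, (x, y) in WHEELS.items():
        # Velocity of the wheel contact point: body velocity plus omega x r.
        vx = v - omega * y
        vy = omega * x
        speed = np.hypot(vx, vy)
        commands[name] = (speed / wheel_radius, np.arctan2(vy, vx))
    return commands

print(inverse_kinematics(v=0.5, omega=0.2))
```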
Internal Dynamic Partial Reconfiguration for Real Time Signal Processing on FPGA
Only a few FPGAs support the creation of partially reconfigurable systems, in contrast to traditional systems based on total reconfiguration. This allows the functionalities hosted on the device to be changed dynamically when needed, while the rest of the system continues working. Runtime partial reconfiguration of FPGAs is an attractive feature which offers benefits across multiple industries. Xilinx has supported partial reconfiguration for many generations of devices. This capability can be exploited to substitute inactive parts of hardware systems and to adapt the complete chip to the different requirements of an application. This paper describes an innovative implementation of real-time audio and video processing using runtime internal partial reconfiguration. The system is implemented on a Virtex-4 FPGA. Internal reconfiguration is handled using the internal configuration access port (ICAP) driven by a soft processor core. Considerable savings in device resources, bitstream size and configuration time are observed and tabulated in this paper.
A randomized phase II study of pazopanib in hormone-sensitive prostate cancer: a University of Chicago Phase II Consortium/Department of Defense Prostate Cancer Clinical Trials Consortium study
Background: Intermittent androgen suppression (IAS) is an increasingly popular treatment option for castrate-sensitive prostate cancer. On the basis of previous data with anti-angiogenic strategies, we hypothesized that pan-inhibition of the vascular endothelial growth factor receptor using pazopanib during the IAS off period would result in prolonged time to PSA failure. Methods: Men with biochemically recurrent prostate cancer, whose PSA was <0.5 ng/ml after 6 months of androgen deprivation therapy, were randomized to pazopanib 800 mg daily or observation. The planned primary outcome was time to PSA progression >4.0 ng/ml. Results: Thirty-seven patients were randomized. Of 18 patients randomized to pazopanib, at the time of study closure, 4 had progressive disease, 1 remained on treatment and 13 (72%) electively disenrolled, the most common reason being patient request due to grade 1/2 toxicity (8 patients). Two additional patients were removed from treatment due to adverse events. Of 19 patients randomized to observation, at the time of study closure, 4 had progressive disease, 7 remained under protocol-defined observation and 8 (42%) had disenrolled, most commonly due to non-compliance with protocol visits (3 patients). Because of the high dropout rates in both arms, the study was halted. Conclusions: IAS is a treatment approach that may facilitate investigation of novel agents in the hormone-sensitive state. This trial attempted to investigate the role of anti-angiogenic therapy in this setting, but encountered several barriers, including toxicities and patient non-compliance, which can make implementation of such a study difficult. Future investigative efforts in this arena should carefully consider drug toxicity and employ a design that maximizes patient convenience to reduce the dropout rate.