Direct versus indirect loading of orthodontic miniscrew implants—an FEM analysis
The mesialization of molars in the lower jaw represents a particularly demanding scenario for the quality of orthodontic anchorage. The use of miniscrew implants has proven particularly effective; these orthodontic implants are either loaded directly (direct anchorage) or employed indirectly to stabilize a dental anchorage block (indirect anchorage). The objective of this study was to analyze the biomechanical differences between direct and indirect anchorage and their effects on the primary stability of the miniscrew implants. For this purpose, several computer-aided design/computer-aided manufacturing (CAD/CAM) models were prepared from the CT data of a 21-year-old patient, and these were combined with virtually constructed models of brackets, arches, and miniscrew implants. From these, four finite element method (FEM) models were generated by three-dimensional meshing. Material properties, boundary conditions, and the applied forces (direction and magnitude) were defined. After solving the FEM equations, strain values were recorded at predefined measuring points. The calculations made using the FEM models with direct and indirect anchorage were statistically evaluated. The loading of the compact bone in the proximity of the miniscrew was clearly greater with direct than with indirect anchorage. The more anchor teeth were integrated into the anchorage block with indirect anchorage, the smaller the peri-implant loading of the bone. Indirect miniscrew anchorage is a reliable way to reduce the peri-implant loading of the bone and the risk of losing the miniscrew; the more teeth are integrated into the anchorage block, the greater this protective effect. In clinical situations requiring major orthodontic forces, it is better to choose indirect anchorage in order to minimize the risk of losing the miniscrew.
Hybrid metric-Palatini gravity
Recently, the phenomenology of f(R) gravity has been scrutinized. This scrutiny has been motivated by the possibility of accounting for the self-accelerated cosmic expansion without invoking dark energy sources. Besides, this kind of modified gravity is capable of addressing the dynamics of several self-gravitating systems as an alternative to the presence of dark matter. It has been established that both the metric and Palatini versions of these theories have interesting features but also manifest severe and distinct downsides. A hybrid combination of theories, containing elements from both formalisms, turns out to be also very successful in accounting for the observed phenomenology and is able to avoid some drawbacks of the original approaches. This article reviews the formulation of this hybrid metric-Palatini approach and its main achievements in passing the local tests and in applications to astrophysical and cosmological scenarios, where it provides a unified approach to the problems of dark energy and dark matter.
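For orientation, the action usually taken as the starting point of the hybrid metric-Palatini approach can be written compactly as below; this is a sketch, and conventions (metric signature, the normalization of κ²) vary across the literature.

```latex
S = \frac{1}{2\kappa^2}\int d^4x\,\sqrt{-g}\,\left[ R + f(\mathcal{R}) \right] + S_m(g_{\mu\nu}, \psi)
```

Here R is the metric Ricci scalar, \(\mathcal{R} = g^{\mu\nu}\mathcal{R}_{\mu\nu}(\hat{\Gamma})\) is built from an independent Palatini connection \(\hat{\Gamma}\), and S_m is the matter action; choosing f linear in \(\mathcal{R}\) reduces the theory to general relativity with a rescaled gravitational constant.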
Adoptive immunotherapy with allodepleted donor T-cells improves immune reconstitution after haploidentical stem cell transplantation.
Poor T lymphocyte reconstitution limits the use of haploidentical stem cell transplantation (SCT) because it results in a high mortality from viral infections. One approach to overcome this problem is to infuse donor T cells from which alloreactive lymphocytes have been selectively depleted, but the immunologic benefit of this approach is unknown. We have used an anti-CD25 immunotoxin to deplete alloreactive lymphocytes and have compared immune reconstitution after allodepleted donor T cells were infused at 2 dose levels into recipients of T-cell-depleted haploidentical SCT. Eight patients were treated at 10(4) cells/kg/dose, and 8 patients received 10(5) cells/kg/dose. Patients receiving 10(5) cells/kg/dose showed significantly improved T-cell recovery at 3, 4, and 5 months after SCT compared with those receiving 10(4) cells/kg/dose (P < .05). Accelerated T-cell recovery occurred as a result of expansion of the effector memory (CD45RA(-)CCR-7(-)) population (P < .05), suggesting that protective T-cell responses are likely to be long lived. T-cell-receptor signal joint excision circles (TRECs) were not detected in reconstituting T cells in dose-level 2 patients, indicating they are likely to be derived from the infused allodepleted cells. Spectratyping of the T cells at 4 months demonstrated a polyclonal Vbeta repertoire. Using tetramer and enzyme-linked immunospot (ELISPOT) assays, we have observed cytomegalovirus (CMV)- and Epstein-Barr virus (EBV)-specific responses in 4 of 6 evaluable patients at dose level 2 as early as 2 to 4 months after transplantation, whereas such responses were not observed until 6 to 12 months in dose-level 1 patients. The incidence of significant acute (2 of 16) and chronic graft-versus-host disease (GVHD; 2 of 15) was low. These data demonstrate that allodepleted donor T cells can be safely used to improve T-cell recovery after haploidentical SCT and may broaden the applicability of this approach.
Contour-clamped homogeneous electric field electrophoresis of Staphylococcus aureus
Contour-clamped homogeneous electric field (CHEF) electrophoresis is a technique of pulsed-field gel electrophoresis that enables the resolution of large fragments of DNA that cannot be resolved by conventional gel electrophoresis. The procedure involves the application of controlled electric fields that change direction at a predetermined angle to samples of DNA that have been embedded in an agarose gel matrix and digested with a restriction endonuclease. Adjustment of the electrophoresis conditions enables the separation of DNA fragments with lengths from 10 kilobases up to 9 megabases in a size-dependent manner in agarose gels. The banding patterns can be used for epidemiological typing, the separated DNA can be immobilized onto a membrane and used for genetic mapping, or individual fragments can be extracted and used for downstream genetic manipulations. The protocol requires specialized equipment and can be completed in a maximum of 7 days.
Coordination Mechanisms in Human-Robot Collaboration
Robots are envisioned to collaborate with people in tasks that require physical manipulation such as a robot instructing a human in assembling household furniture, a human teaching a robot how to repair machinery, or a robot and a human collaboratively completing construction work. These scenarios characterize joint actions in which the robot and the human must effectively communicate and coordinate their actions with each other in order to successfully achieve task goals. Drawing on recent research in cognitive sciences on joint action, this paper discusses key mechanisms for effective coordination—joint attention, action observation, task-sharing, action coordination, and perception of agency—toward informing the design of communication and coordination mechanisms for robots. It presents two illustrative studies that explore how robot behavior might be designed to employ these mechanisms, particularly joint attention and action observation, to improve measures of task performance and perceptions of the robot in human-robot collaboration.
A decision support based on data mining in e-banking
The use of data mining techniques in the banking domain is appropriate given the nature and sensitivity of bank data and the real-time, complex decision processes involved. The main concern for a bank's manager is to take good decisions in order to minimize the level of risk associated with the bank's activities. It is very important for a bank to have knowledge of the causes that generate financial crises or imbalances. Lending is one of the riskiest activities in the banking area, and adequate methods to support the decision-making process are necessary. In this paper the authors present a prototype decision support system based on data mining techniques used in the lending process. The proposed system was designed to assist a customer who applies for credit, and it may represent an extension of e-banking activities.
Long-term oral administration of amrinone for congestive heart failure: lack of efficacy in a multicenter controlled trial.
A number of uncontrolled studies have indicated that oral administration of amrinone, a phosphodiesterase inhibitor with potent positive inotropic effects in experimental preparations, may be beneficial in patients with chronic congestive heart failure. The present multicenter trial was designed to prospectively evaluate clinical response and change in exercise tolerance during 12 weeks of amrinone therapy in a double-blind, placebo-controlled protocol. Ninety-nine patients with NYHA functional class 3 or 4 congestive heart failure on digitalis and diuretics, of whom 31 were also receiving captopril, were enrolled. After baseline clinical assessment and determination of exercise tolerance, radionuclide left ventricular ejection fraction, and roentgenographic cardiothoracic ratio, patients were randomly assigned to receive amrinone or placebo, beginning at 1.5 mg/kg tid and increasing to a maximum dosage of 200 mg tid. After 12 weeks of therapy or at the last blinded evaluation in patients who did not complete this protocol, there were no significant differences from baseline values between treatment with amrinone or placebo with regard to symptoms, NYHA functional class, left ventricular ejection fraction, cardiothoracic ratio, frequency and severity of ventricular ectopy, or mortality. Exercise tolerance improved significantly from baseline by 37 +/- 10% (mean 163 sec) in patients on amrinone and 35 +/- 11% (mean 149 sec) in patients on placebo, but there was no significant difference between treatments. Adverse reactions were significantly more frequent and more severe on amrinone, occurring in 83% of patients and necessitating withdrawal in 34%.(ABSTRACT TRUNCATED AT 250 WORDS)
Visible threads: a smart VR interface to digital libraries
The importance of information as a resource for economic growth and education is steadily increasing. Due to technological advances in the computer industry and the explosive growth of the Internet, much valuable information will be available in digital libraries. This paper introduces a system that aims to support a user's browsing activities in document sets retrieved from a digital library. Latent Semantic Analysis is applied to extract salient semantic structures and citation patterns of documents stored in a digital library in a computationally expensive batch job. At retrieval time, clustering techniques are used to organize retrieved documents into clusters according to the previously extracted semantic similarities. A modified Boltzmann algorithm [1] is employed to spatially organize the resulting clusters and their documents in the form of a three-dimensional information landscape, or "i-scape". The i-scape is then displayed for interactive exploration via a multi-modal, virtual reality CAVE interface [8]. Users' browsing activities are recorded, and user models are extracted to give newcomers online help based on previous navigation activity as well as to enable experienced users to recognize and exploit past user traces. In this way, the system provides interactive services to assist users in the spatial navigation, interpretation, and detailed exploration of potentially large document sets matching a query.
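As a rough illustration of the batch LSA-plus-clustering stage described above, here is a minimal sketch; the corpus, dimensionality, and cluster count are invented, and the citation analysis and CAVE display are not reproduced.

```python
# Minimal LSA + clustering sketch (toy corpus, illustrative parameters).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

docs = [
    "digital library retrieval interfaces",
    "latent semantic analysis of document collections",
    "virtual reality visualization of information spaces",
    "citation patterns in scientific literature",
]
X = TfidfVectorizer().fit_transform(docs)           # term-document matrix
Z = TruncatedSVD(n_components=2).fit_transform(X)   # latent semantic space
labels = KMeans(n_clusters=2, n_init=10).fit_predict(Z)  # document clusters
print(labels)
```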
CS229 Problem Set #3 Solutions, Autumn 2015: Theory & Unsupervised Learning
Notes: (1) These questions require thought, but they do not require long answers. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/stanford/fall2015/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. (5) Please do not just write the answers but also show your work. (6) If you are an on-campus (non-SCPD) student, please print, fill out, and include a copy of the cover sheet (enclosed as the final page of this document), and include the cover sheet as the first page of your submission. SCPD students: If you are submitting on time without using late-days, please submit your assignments through the SCPD office. Otherwise, please submit your assignments at https://www.stanford.edu/class/cs229/cgi-bin/submit.php as a single PDF file under 20MB in size. If you have trouble submitting online, email your submission to [email protected]. However, we strongly recommend using the website submission method, as it will provide confirmation of submission and also allow us to track and return your graded homework to you more easily. 1. [23 points] Uniform convergence You are hired by CNN to help design the sampling procedure for making CNN's electoral predictions for the next presidential election in the (fictitious) country of Elbania. The country of Elbania is organized into states, and there are only two candidates running in this election: One from the Elbanian Democrat party and another from the Labor Party of Elbania. The plan for making our electoral predictions is as follows: We'll sample m voters from each state and ask whether they're voting Democrat. We'll then publish, for each state, the estimated fraction of Democrat voters. In this problem, we'll work out how many voters we need to sample in order to ensure that we get good predictions with high probability. One reasonable goal might be to set m large enough that, with high probability, we obtain uniformly accurate estimates of the fraction of Democrat voters in every state. But this might require surveying very many people, which would be prohibitively expensive. So, we're instead going to demand only a slightly lower …
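As a sketch of the calculation this problem is driving at (the standard Hoeffding-plus-union-bound argument, not necessarily the official solution): if \(\hat{\varphi}_i\) is the empirical fraction of Democrat voters among m i.i.d. samples in state i, then for n states and accuracy γ,

```latex
\Pr\!\left(|\hat{\varphi}_i - \varphi_i| > \gamma\right) \le 2e^{-2\gamma^2 m}
\quad\Longrightarrow\quad
\Pr\!\left(\exists\, i:\ |\hat{\varphi}_i - \varphi_i| > \gamma\right) \le 2n\,e^{-2\gamma^2 m},
```

so sampling \(m \ge \frac{1}{2\gamma^2}\ln\frac{2n}{\delta}\) voters per state guarantees that all n estimates are γ-accurate with probability at least 1 − δ.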
Customized computer-aided application mapping on NoC infrastructure using multi-objective optimization
Networks-on-chip (NoC) are considered the next generation of communication infrastructure in embedded systems, which are omnipresent in different environments, such as cars, cell phones, and digital cameras. In the platform-based design methodology, an application is implemented by a set of collaborating intellectual property (IP) blocks. The selection of the best-suited set of IPs, as well as their physical mapping onto the NoC infrastructure to implement the application at hand efficiently, are two hard combinatorial problems. In this paper, we propose the use of multi-objective evolutionary algorithms to perform the assignment and mapping stages of any given application on a customized NoC infrastructure. The resulting NoC platform is custom-cut for the application at hand; only the resources, switches, and channels actually used by the application mapping are part of the customized implementation platform. The optimization is driven by minimizing the required hardware area, the execution time, and the power consumption of the final implementation, while avoiding hot spots.
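As a rough illustration of the multi-objective selection step such an evolutionary algorithm relies on, here is a minimal Pareto-front filter over hypothetical (area, execution time, power) triples; the numbers and the dominance rule are illustrative, not the paper's actual encoding or operators.

```python
def dominates(a, b):
    """True if solution a is no worse than b in every objective and
    strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the non-dominated (area, exec_time, power) mappings."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# Hypothetical candidate mappings: (area mm^2, exec time ms, power W).
mappings = [(12.0, 3.1, 0.9), (10.5, 3.4, 1.1), (12.5, 3.0, 1.2),
            (11.0, 3.2, 0.8), (13.0, 3.5, 1.3)]
print(pareto_front(mappings))  # the last, dominated point is filtered out
```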
Pediatric toxicology: specialized approach to the poisoned child.
The poisoned child presents unique considerations in circumstances of exposure, clinical effects, diagnostic approach, and therapeutic interventions. The emergency provider must be aware of the pathophysiologic vulnerabilities of infants and children and of substances that are especially toxic. Awareness is essential for situations in which the risk of morbidity and mortality is increased, such as child abuse by poisoning. Considerations in treatment include the need for attentive supportive care, pediatric implications for antidotal therapy, and extracorporeal removal methods such as hemodialysis in children. In this article, each of these issues, as well as emerging poison hazards, is discussed.
SPICE simulation of surface acoustic wave interdigital transducers
Surface Acoustic Wave (SAW) devices are not normally amenable to simulation through circuit simulators. In this letter, an electrical macromodel of Mason's Equivalent Circuit for an interdigital transducer (IDT) is proposed which is compatible with the widely used general-purpose circuit simulator SPICE, endowed with the capability to handle negative capacitances and inductances. Illustrations demonstrate the simplicity of ascertaining the frequency- and time-domain characteristics of the IDT and the amenability to simulating the IDT along with other external circuit elements.
Large-scale Unit Commitment under uncertainty
The Unit Commitment problem in energy management aims at finding the optimal production schedule of a set of generation units while meeting various system-wide constraints. It has always been a large-scale, non-convex, difficult problem, especially in view of the fact that operational requirements imply that it has to be solved in an unreasonably small time for its size. Recently, the ever-increasing capacity for renewable generation has strongly increased the level of uncertainty in the system, making the (ideal) Unit Commitment model a large-scale, non-convex, uncertain (stochastic, robust, chance-constrained) program. We provide a survey of the literature on methods for the Uncertain Unit Commitment problem, in all its variants. We start with a review of the main contributions on solution methods for the deterministic versions of the problem, focusing on those based on mathematical programming techniques that are most relevant for the uncertain versions of the problem. We then present and categorize the approaches to the latter, also providing entry points to the relevant literature on optimization under uncertainty.
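For reference, the deterministic core of the problem can be sketched as the following mixed-integer program (a minimal form; real models add ramping limits, minimum up/down times, reserves, and network constraints):

```latex
\min_{u,\,p}\ \sum_{t=1}^{T}\sum_{i=1}^{n}\Big[c_i(p_{i,t}) + s_i\,u_{i,t}(1-u_{i,t-1})\Big]
\quad\text{s.t.}\quad
\sum_{i} p_{i,t} = d_t,\qquad
u_{i,t}\,\underline{p}_i \le p_{i,t} \le u_{i,t}\,\overline{p}_i,\qquad
u_{i,t}\in\{0,1\},
```

where u are the on/off commitment decisions, p the power outputs, c_i the (typically convex) generation costs, s_i the start-up costs, and d_t the demand at time t; the uncertain variants discussed in the survey replace d_t (and renewable output) with stochastic or uncertain quantities.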
Deep Reinforcement Learning for NLP
Many Natural Language Processing (NLP) tasks (including generation, language grounding, reasoning, information extraction, coreference resolution, and dialog) can be formulated as deep reinforcement learning (DRL) problems. However, since language is often discrete and the space of all sentences is infinite, there are many challenges in formulating NLP tasks as reinforcement learning problems. In this tutorial, we provide a gentle introduction to the foundation of deep reinforcement learning, as well as some practical DRL solutions in NLP. We describe recent advances in designing deep reinforcement learning for NLP, with a special focus on generation, dialogue, and information extraction. We discuss why they succeed, and when they may fail, aiming to provide some practical advice about deep reinforcement learning for solving real-world NLP problems. 1 Tutorial Description Deep Reinforcement Learning (DRL) (Mnih et al., 2015) is an emerging research area that involves intelligent agents that learn to reason in Markov Decision Processes (MDPs). Recently, DRL has achieved many stunning breakthroughs in Atari games (Mnih et al., 2013) and the game of Go (Silver et al., 2016). In addition, DRL methods have gained significantly more attention in NLP in recent years, because many NLP tasks can be formulated as DRL problems that involve incremental decision making. DRL methods can easily combine embedding-based representation learning with reasoning, and optimize for a variety of non-differentiable rewards. However, a key challenge for applying deep reinforcement learning techniques to real-world-sized NLP problems is the model design issue. This tutorial draws connections from theories of deep reinforcement learning to practical applications in NLP. In particular, we start with a gentle introduction to the fundamentals of reinforcement learning (Sutton and Barto, 1998; Sutton et al., 2000). We further discuss their modern deep learning extensions such as Deep Q-Networks (Mnih et al., 2015), Policy Networks (Silver et al., 2016), and Deep Hierarchical Reinforcement Learning (Kulkarni et al., 2016). We outline the applications of deep reinforcement learning in NLP, including dialog (Li et al., 2016), semi-supervised text classification (Wu et al., 2018), coreference (Clark and Manning, 2016; Yin et al., 2018), knowledge graph reasoning (Xiong et al., 2017), text games (Narasimhan et al., 2015; He et al., 2016a), social media (He et al., 2016b; Zhou and Wang, 2018), information extraction (Narasimhan et al., 2016; Qin et al., 2018), language and vision (Pasunuru and Bansal, 2017; Misra et al., 2017; Wang et al., 2018a,b,c; Xiong et al., 2018), etc. We further discuss several critical issues in DRL solutions for NLP tasks, including (1) the efficient and practical design of the action space, state space, and reward functions; (2) the trade-off between exploration and exploitation; and (3) the goal of incorporating linguistic structures in DRL. To address the model design issue, we discuss several recent solutions (He et al., 2016b; Li et al., 2016; Xiong et al., 2017). We then focus on a new case study of hierarchical deep reinforcement learning for video captioning (Wang et al., 2018b), discussing the techniques of leveraging hierarchies in DRL for NLP generation problems. This tutorial aims at introducing deep reinforcement learning methods to researchers in the NLP community. We do not assume any particular prior knowledge in reinforcement learning.
The intended length of the tutorial is 3 hours, including a coffee break.
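As a taste of the policy-gradient machinery such a tutorial builds on, here is a minimal REINFORCE sketch reduced to a toy three-action "word choice" bandit; the actions, rewards, and learning rate are invented for illustration, and real NLP settings use sequence-valued states and learned or task-derived rewards.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(3)                      # logits over 3 candidate actions
true_reward = np.array([0.2, 0.5, 0.9])  # hypothetical expected rewards

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

alpha = 0.1                              # learning rate
for step in range(2000):
    pi = softmax(theta)
    a = rng.choice(3, p=pi)              # sample an action from the policy
    r = true_reward[a] + 0.1 * rng.standard_normal()  # noisy reward
    grad = -pi
    grad[a] += 1.0                       # gradient of log pi(a | theta)
    theta += alpha * r * grad            # REINFORCE update

print(softmax(theta))  # mass should concentrate on the best action (index 2)
```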
Self quotient image for face recognition
The reliability of face recognition techniques is often affected by variations in illumination, such as shadows and changes in illumination direction. In this paper, we present a novel framework, called the self-quotient image, for eliminating the lighting effect in an image. Although this method has an invariant form similar to the quotient image of Shashua et al. (2001), it does not need the alignment and bootstrap images. Our method combines the image-processing technique of edge-preserving filtering with the Retinex applications of Jobson et al. (1997) and Gross and Brajovic (2003). We have analyzed this algorithm with a 3D imaging model and formulated the conditions under which illumination-invariant and -variant properties can be realized, respectively. A fast anisotropic filter is also presented. The experimental results show that our method is effective in removing the effect of illumination for robust face recognition.
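A minimal sketch of the self-quotient idea, Q = I / (F * I): divide the image by a smoothed version of itself so that slowly varying illumination cancels. The paper's edge-preserving anisotropic filter is replaced here by a plain Gaussian for brevity, so this is only an approximation of the method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def self_quotient(image, sigma=8.0, eps=1e-6):
    """Divide the image by its own smoothed version to suppress slowly
    varying illumination while keeping reflectance-like detail."""
    smooth = gaussian_filter(image.astype(float), sigma)
    return image / (smooth + eps)

face = np.random.rand(64, 64)   # stand-in for a grayscale face image
q = self_quotient(face)
```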
Operation of Compressor and Electronic Expansion Valve via Different Controllers
Meeting energy demand is a critical global problem because of steadily increasing energy consumption. Refrigeration systems' electricity consumption accounts for a large share of overall consumption. Therefore, considerable attention has been given to refrigeration capacity modulation in order to decrease the electricity consumption of these systems. Capacity modulation matches the exact load at partial-load conditions and lowers electricity consumption by avoiding excess capacity. Variable-speed refrigeration systems are the most common capacity modulation method for commercial and household purposes. Although vapor-compression refrigeration systems are designed to satisfy the maximum load, they operate at partial-load conditions for most of their life cycle and are generally regulated by on/off control. The experimental chiller system contains four main components: compressor, condenser, expansion device, and evaporator (Fig. 1). This study deals with the effects of different control methods on a variable speed compressor (VSC) and an electronic expansion valve (EEV). The chiller system has a scroll-type VSC and a stepper-motor-controlled EEV.
ETS gene fusions in prostate cancer: from discovery to daily clinical practice.
CONTEXT In 2005, fusions between the androgen-regulated transmembrane protease serine 2 gene, TMPRSS2, and E twenty-six (ETS) transcription factors were discovered in prostate cancer. OBJECTIVE To review advances in our understanding of ETS gene fusions, focusing on challenges affecting translation to clinical application. EVIDENCE ACQUISITION The PubMed database was searched for reports on ETS fusions in prostate cancer. EVIDENCE SYNTHESIS Since the discovery of ETS fusions, novel 5' and 3' fusion partners and multiple splice isoforms have been reported. The most common fusion, TMPRSS2:ERG, is present in approximately 50% of prostate-specific antigen (PSA)-screened localized prostate cancers and in 15-35% of population-based cohorts. ETS fusions can be detected noninvasively in the urine of men with prostate cancer, with a specificity rate in PSA-screened cohorts of >90%. Reports from untreated population-based cohorts suggest an association between ETS fusions and cancer-specific death and metastatic spread. In retrospective prostatectomy cohorts, conflicting results have been published regarding associations between ETS fusions and cancer aggressiveness. In addition to serving as a potential biomarker, tissue and functional studies suggest a specific role for ETS fusions in the transition to carcinoma. Finally, recent results suggest that the 5' and 3' ends of ETS fusions as well as downstream targets may be targeted therapeutically. CONCLUSIONS Recent studies suggest that the first clinical applications of ETS fusions are likely to be in noninvasive detection of prostate cancer and in aiding with difficult diagnostic cases. Additional studies are needed to clarify the association between gene fusions and cancer aggressiveness, particularly those studies that take into account the multifocal and heterogeneous nature of localized prostate cancer. Multiple promising strategies have been identified to potentially target ETS fusions. Together, these results suggest that ETS fusions will affect multiple aspects of prostate cancer diagnosis and management.
The effect of insulin and insulin-like growth factors on hippocampus- and amygdala-dependent long-term memory formation.
Recent work has reported that the insulin-like growth factor 2 (IGF2) promotes memory enhancement. Furthermore, impaired insulin or IGF1 functions have been suggested to play a role in the pathogenesis of neurodegeneration and cognitive impairments, hence implicating the insulin/IGF system as an important target for cognitive enhancement and/or the development of novel treatments against cognitive disorders. Here, we tested the effect of intracerebral injections of IGF1, IGF2, or insulin on memory consolidation and persistence in rats. We found that a bilateral injection of insulin into the dorsal hippocampus transiently enhances hippocampal-dependent memory and an injection of IGF1 has no effect. None of the three peptides injected into the amygdala affected memories critically engaging this region. Together with previous data on IGF2, these results indicate that IGF2 produces the most potent and persistent effect as a memory enhancer on hippocampal-dependent memories. We suggest that the memory-enhancing effects of insulin and IGF2 are likely mediated by distinct mechanisms.
Social perception of rape: how rape myth acceptance modulates the influence of situational factors.
This study assessed the role of rape myth acceptance (RMA) and situational factors in the perception of three different rape scenarios (date rape, marital rape, and stranger rape). One hundred and eighty-two psychology undergraduates were asked to make four judgements about each rape situation: victim responsibility, perpetrator responsibility, intensity of trauma, and likelihood of reporting the crime to the police. It was hypothesized that neither RMA nor situational factors alone can explain how rape is perceived; it is the interaction between these two factors that best accounts for social reactions to sexual aggression. The results generally supported the authors' hypothesis: Victim blame, estimation of trauma, and the likelihood of reporting the crime to the police were best explained by the interaction between observer characteristics, such as RMA, and situational clues. That is, the less stereotypic the rape situation was, the greater was the influence of attitudes toward rape on attributions.
Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning (ICLR 2019)
Deep learning has achieved astonishing results on many tasks with large amounts of data and generalization within the proximity of training data. For many important real-world applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems. In particular, learning physics models for model-based control requires robust extrapolation from fewer samples, often collected online in real time, and model errors may lead to drastic damage to the system. Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples. As a first example, we propose Deep Lagrangian Networks (DeLaN) as a deep network structure upon which Lagrangian mechanics have been imposed. DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility. The resulting DeLaN network performs very well at robot tracking control. The proposed method not only outperformed previous model learning approaches in learning speed but also exhibits substantially improved and more robust extrapolation to novel trajectories, and it learns online in real time.
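The structural prior DeLaN imposes is the rigid-body form of the Euler-Lagrange equations; as a sketch of that standard identity (the network parametrization details are the paper's contribution):

```latex
\mathbf{H}(\mathbf{q})\,\ddot{\mathbf{q}} + \mathbf{c}(\mathbf{q},\dot{\mathbf{q}}) + \mathbf{g}(\mathbf{q}) = \boldsymbol{\tau},
```

where q are the joint positions, H the positive-definite mass matrix, c the Coriolis and centripetal forces derived from H, g the gravity term, and τ the applied torques; the network learns H and the potential so that the identity holds by construction.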
Brain storm-based Whale Optimization Algorithm for privacy-protected data publishing in cloud computing
Cloud computing serves as a major boost for the digital era since it handles data from a large number of users simultaneously. Besides its several useful characteristics, providing security to the data stored in the cloud platform is a major challenge for the service providers. Privacy preservation schemes introduced in the literature try to enhance the privacy and utility of the data by modifying the database with a secret key. In this paper, an optimization scheme, the Brain Storm based Whale Optimization Algorithm (BS-WOA), is introduced for identifying the secret key. The database from the data owner is modified with the optimal secret key to construct retrievable perturbed data that preserves both privacy and utility. The proposed BS-WOA is designed through the hybridization of Brain Storm Optimization and the Whale Optimization Algorithm. Simulation of the proposed BS-WOA is performed on three standard databases: the chess, T10I4D100K, and retail databases. When evaluated for a key size of 256, the proposed BS-WOA achieved a privacy value of 0.186 and a utility value of 0.8777 for the chess database, demonstrating improved performance.
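To give a flavor of the whale-optimization half of the hybrid, below is a minimal WOA loop on a stand-in objective; the Brain Storm hybridization, the key encoding, and the privacy/utility fitness of BS-WOA are not reproduced, so treat this purely as a sketch of the base algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):                 # hypothetical objective to minimize
    return float(np.sum(x ** 2))

dim, pop, iters = 8, 20, 200
X = rng.uniform(-1, 1, (pop, dim))
best = min(X, key=fitness).copy()

for t in range(iters):
    a = 2 * (1 - t / iters)                 # control parameter, 2 -> 0
    for i in range(pop):
        r = rng.random(dim)
        A, C = 2 * a * r - a, 2 * rng.random(dim)
        if rng.random() < 0.5:              # encircling / searching branch
            X[i] = best - A * np.abs(C * best - X[i])
        else:                               # logarithmic spiral branch
            l = rng.uniform(-1, 1)
            D = np.abs(best - X[i])
            X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
        if fitness(X[i]) < fitness(best):
            best = X[i].copy()

print(best, fitness(best))                  # should approach the zero vector
```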
Ryanodine receptors: structure, expression, molecular details, and function in calcium release.
Ryanodine receptors (RyRs) are located in the sarcoplasmic/endoplasmic reticulum membrane and are responsible for the release of Ca(2+) from intracellular stores during excitation-contraction coupling in both cardiac and skeletal muscle. RyRs are the largest known ion channels (> 2MDa) and exist as three mammalian isoforms (RyR 1-3), all of which are homotetrameric proteins that interact with and are regulated by phosphorylation, redox modifications, and a variety of small proteins and ions. Most RyR channel modulators interact with the large cytoplasmic domain whereas the carboxy-terminal portion of the protein forms the ion-conducting pore. Mutations in RyR2 are associated with human disorders such as catecholaminergic polymorphic ventricular tachycardia whereas mutations in RyR1 underlie diseases such as central core disease and malignant hyperthermia. This chapter examines the current concepts of the structure, function and regulation of RyRs and assesses the current state of understanding of their roles in associated disorders.
Applying simulation in a consulting environment - tips from airport planners
Airport design is, by far, the largest practice area for TransSolutions. This is a unique area, with many differences from typical simulation in manufacturing or logistics. Passengers and bags arrive at the airport by the plane load. At the busiest airports the arrivals and departures are coordinated, so that several flights arrive within a short time. Then, after a short time on the ground, the aircraft all depart again. A large number of passengers move through the facility during the peak period between the flight arrivals and the departures. After that, the facility will be empty until the next bank of arriving flights. The terminal facilities must be sized to handle these waves of passengers throughout the day. Space within the terminal building is at a premium. Most travelers are aware of the ticket counters, gate lounges, corridors, and shops in the public areas. Hidden behind the scenes are the various offices, control rooms, and break areas for the airport and airline employees. The baggage system and various maintenance functions take up most of the space on the ramp level below the passenger areas. The usual question asked by the architects is: "will the facility, as designed, satisfy the needs of the tenant airlines?" This involves comparing the "performance" of the facility to minimum acceptable standards. A typical example is that all passengers connecting from an arriving flight to a departing flight must be able to reach the departure gate within 30 minutes of arrival. Another…
The ankle-foot orthosis improves balance and reduces fall risk of chronic spastic hemiparetic patients.
BACKGROUND Ankle-foot orthoses (AFO) are commonly used orthotic devices intended to restore ankle-foot function and to improve balance and gait in post-stroke hemiparetic patients. However, there remains some discussion about their effectiveness in long-term hemiparetic patients with mild to moderate spasticity. AIM To investigate the relative effect of a prefabricated thermoplastic posterior leaf spring AFO (PLS-AFO) on balance and fall risk. DESIGN A cross-over interventional study. SETTING The Department of PMR of a tertiary hospital. POPULATION Twenty-five chronic post-stroke, long-duration hemiparetic patients who had Ashworth grade 1-2 spasticity of the affected calf muscles and lower-limb Brunnstrom stage 2-3, and who were able to walk independently without an assistive device. METHODS The Berg Balance Scale (BERG), and the postural stability test (PST) and the fall risk test (FRT) of the Biodex balance system were used for the assessments. All of the patients were assessed with and without the AFO. All assessments were made with footwear. RESULTS The mean post-stroke duration was 20.32±7.46 months. The BERG scores were 42.12±9.05 without AFO and 47.52±7.77 with AFO; the overall stability scores of the FRT were 3.35±1.97 without AFO and 2.69±1.65 with AFO (P<0.001). CONCLUSION The prefabricated thermoplastic PLS-AFO was found to improve balance and reduce fall risk in chronic post-stroke ambulatory hemiparetic patients with mild to moderate spasticity of the affected lower limb. CLINICAL REHABILITATION IMPACT These results encourage the use of the AFO in long-duration hemiparetic patients in order to provide better balance and lower fall risk.
Phase I and Pharmacokinetic Study of the Dolastatin-15 Analogue Tasidotin (ILX651) Administered Intravenously on Days 1, 3, and 5 Every 3 Weeks in Patients with Advanced Solid Tumors
Purpose: To determine the maximum tolerated dose (MTD), dose-limiting toxicity (DLT), and pharmacokinetics of tasidotin (ILX651), a dolastatin-15 analogue, when administered on days 1, 3, and 5 every 3 weeks in patients with advanced solid tumors. Patients and Methods: Thirty-two patients were treated with 92 courses of tasidotin through seven dose levels determined by a modified Fibonacci scheme ranging from 3.9 to 45.7 mg/m². Pharmacokinetic samples were collected during the first course. Results: Neutropenia was the principal DLT at the 45.7 mg/m²/d dose level. In addition, one patient also experienced grade 3 neutropenia complicated with grade 3 esophageal candidiasis and grade 3 dehydration. Only 1 of 11 patients treated at the MTD, 34.4 mg/m², experienced dose-limiting neutropenia. Other common, drug-related toxicities included mild to moderate fatigue, anemia, nausea, anorexia, emesis, alopecia, and diarrhea. The best observed antitumor response consisted of stable disease and was noted in 10 patients (31%); the median duration on study for those patients with stable disease was 99.5 days compared with 37.5 days for those patients with progressive disease. Tasidotin plasma concentrations declined biphasically with an effective half-life of ≤55 minutes, and ~11% was excreted unchanged in the urine. Conclusion: The recommended dose for phase II studies and the MTD when tasidotin is administered on days 1, 3, and 5 every 3 weeks is 34.4 mg/m². The favorable toxicity profile of tasidotin compared with other antitubulin agents, including other dolastatin analogues, and its novel mechanism of action support further disease-directed evaluation of this agent. Tubulin is a well-established target for anticancer agents. Although available antitubulin agents, including the taxanes and Vinca alkaloids, are highly effective in cancer therapy, their clinical usefulness is limited by intrinsic or acquired resistance and systemic toxicity (1). Thus, it is important to develop new agents that target the tubulin/microtubule system with efficacy against resistant tumors and an improved side effect profile. The dolastatins, a group of peptides isolated from the Indian Ocean sea hare Dolabella auricularia (2-5), bind to tubulin subunits and inhibit tubulin-dependent GTP hydrolysis in vitro (6). In vivo, these actions inhibit the assembly of new microtubules, induce the depolymerization of existing microtubules, and inhibit cell cycle progression in mitosis (7, 8). However, initial clinical evaluation of dolastatin-10 and cemadotin, a synthetic analogue of dolastatin-15, showed disappointing results (9-11), possibly due, in part, to poor cellular uptake. In addition, cemadotin is rapidly converted to a metabolite with dose-limiting cardiovascular toxicities, including hypertension, angina, and myocardial infarction, that limited its therapeutic efficacy (12-15). To address these problems, a new generation of dolastatins, represented by tasidotin (Genzyme Corp., San Antonio, TX), has been created that offers several advantages over most other antitubulins. Tasidotin is a pentapeptide (N,N-dimethyl-L-valyl-L-valyl-N-methyl-L-valyl-L-prolyl-L-proline-tert-butylamide hydrochloride; Fig. 1), chemically modified to improve the pharmacologic properties of cemadotin, resulting in a more metabolically stable compound. Tasidotin induces a prolonged lag phase in microtubule assembly at concentrations of 25 to 40 μmol/L (16-26 μg/mL) followed by recovery, with microtubule assembly returning to normal levels (16).
These effects are in contrast to those seen with other antitubulins, such as podophyllotoxin and vinblastine (17-21), and produce cell cycle arrest in the G2 and M cell cycle phases. At concentrations ≥50 μmol/L (32 μg/mL), tasidotin inhibits the extent of microtubule assembly, which is also an atypical finding for antitubulins. Finally, in addition to its potentially enhanced therapeutic window, tasidotin is orally bioavailable (22). The cytotoxic potential of tasidotin was evaluated in the National Cancer Institute tumor cell line screen (including melanoma, non-small cell lung cancer, prostate, breast, colon, central nervous system, and leukemia cell lines); GI50 values ranged from <10 nmol/L (6 ng/mL) to >1 mmol/L (0.6 mg/mL; ref. 23). Complete responses were documented in early-stage and late-stage breast carcinoma (MX-1), melanoma (LOX), and prostate (PC-3) cancer mouse models, and survival time was significantly increased in the P388 murine leukemia model. In the MX-1 model, tasidotin induced tumor growth delays ranging from 20 to >90 days compared with 15 days for paclitaxel (23). Of note, repetitive dosing schedules of tasidotin showed superior activity compared with single dosing schedules (24). Toxicology studies in rats and dogs showed that tasidotin principally affects the bone marrow, gonads, lymphoid tissues, and kidney. Administration of tasidotin i.v. in dogs resulted in emesis, diarrhea, and neutropenia. Neutropenia was maximal at days 6 to 13, and recovery occurred during days 15 to 29. Mesangioproliferative glomerulopathy was only observed in rats treated at the highest dose of tasidotin (30 mg/kg i.v. × 5 days). When administered to mongrel dogs, tasidotin induced dose-dependent decreases in cardiac output and increases in coronary, femoral, and total peripheral vascular resistance that were 10-fold less than cemadotin (24). After both i.v. and oral administration of tasidotin in dogs, the plasma concentrations showed a linear increase and a half-life of ~2.3 hours (22). The mean bioavailability was 23%. Studies with [14C]tasidotin using a single i.v. dose of tasidotin in rats revealed that 55% to 60.5% of drug was recovered in the feces, whereas 22.8% to 30.6% was excreted renally (internal data). Given its novel mechanism of action, impressive preclinical antineoplastic activity, favorable preclinical pharmacologic properties, and reduction in toxicities over previous dolastatins, a phase I trial of tasidotin given i.v. to patients with advanced solid tumors refractory to standard treatment was conducted.
To mimic the repetitive dosing schedules associated with superior antitumor activity in the preclinical studies, tasidotin was administered i.v. over 30 minutes on days 1, 3, and 5 every 3 weeks. The principal objectives of this study were to determine the maximum tolerated dose (MTD) of tasidotin administered i.v. on this schedule, determine the toxicities of tasidotin, characterize the pharmacokinetic behavior of tasidotin, seek preliminary evidence of antitumor activity, and recommend a dose of tasidotin for phase II studies. Patients and Methods. Eligibility. Patients with histologically documented, advanced solid tumors refractory to conventional therapy, or for whom no effective therapy existed, were candidates. Eligibility criteria also required the following: age ≥18 years; Eastern Cooperative Oncology Group performance status ≤2; life expectancy ≥12 weeks; no prior chemotherapy, radiation therapy, or major surgical procedures within 4 weeks of study entry (6 weeks for mitomycin C or nitrosourea); adequate hematopoietic [absolute neutrophil count (ANC) ≥1,500/μL; platelet count ≥100,000/μL; and hemoglobin ≥9.0 g/dL], hepatic (total bilirubin <2.0 mg/dL; aspartate aminotransferase and alanine aminotransferase levels ≤2× the upper limit of normal, or <5× the upper limit of normal for patients with liver metastases), metabolic (calcium within institutional limits of normal), and renal (serum creatinine ≤1.5 mg/dL or creatinine clearance ≥60 mL/min; ref. 25) variables; prior radiotherapy to <25% of bone marrow reserves; no active infection or coexisting medical problems of sufficient severity to limit compliance with the study; no apparent central nervous system metastases; no prior stem cell or bone marrow transplantation; and no cardiac functional capacity of class III or IV, using the New York Heart Association Classification. All patients gave written informed consent in accordance with federal and local institutional guidelines before treatment. Dosage and drug administration. The starting dose of tasidotin was 3.9 mg/m²/d (one-tenth of the MTD in rat studies; ref. 24) administered i.v. over 30 minutes on days 1, 3, and 5 of each 21-day treatment course. Successive cohorts of patients were dose escalated per a modified Fibonacci scheme from the initial dose of 3.9 to 7.8, 13.0, 19.5, 25.9, 34.4, and 45.7 mg/m². Dose reduction to the previous dose level was permitted for patients experiencing dose-limiting toxicity (DLT); however, a maximum of two dose reductions per patient was permitted. The MTD, or recommended phase II dose, was defined as the highest dose level that induced DLT in fewer than two of six new patients treated in the first course. At least three new patients were treated at each escalated dose level, except at the MTD, where the cohort was expanded to 11 patients to gain greater safety data at that dose level. DLT was defined during the first course for determination of MTD as follows: ANC <500/μL…
Radius-mass scaling laws for celestial bodies
In this letter we establish a connection between two-exponent radius-mass power laws for cosmic objects and previously proposed two-exponent Regge-like spin-mass relations. A new, very simple method for establishing the coordinates of the Chandrasekhar and Eddington points is proposed.
Benchmarking Java application using JNI and native C application on Android
Android is one of the most widely spread mobile platforms. Many people use Android mobile devices, and many developers are creating Android applications at this moment. When developing applications, the difference in performance between Java and C/C++ is a well-known issue. Because this is also true in Android, many Android application developers prefer to use the Android NDK along with Java rather than using only the Java language. However, there are performance gaps not only between different programming languages but also between the glibc of a native cross-compiler and the bionic libc of the Android NDK. In this paper, we show the difference in performance between Android applications compiled using a native cross-compiler for ARM and those using a native shared library through the JNI of the Android NDK. We used MiBench, a representative embedded benchmark suite that can be used freely with no restrictions. As a result, it was found that using the native shared library through the JNI of the Android NDK is faster than using the native cross-compiler for ARM in five of six cases.
Minimizing the Communication Cost of Aggregation in Publish/Subscribe Systems
Modern applications for distributed publish/subscribe systems often require stream aggregation capabilities along with rich data filtering. When compared to other distributed systems, aggregation in pub/sub differentiates itself as a complex problem that involves dynamic dissemination paths that are difficult to predict and optimize for a priori, temporal fluctuations in publication rates, and the mixed presence of aggregated and non-aggregated workloads. In this paper, we propose a formalization of the problem of minimizing communication traffic in the context of aggregation in pub/sub. We present a solution to this minimization problem by using a reduction to the well-known problem of minimum vertex cover in a bipartite graph. This solution is optimal under the strong assumption of complete knowledge of future publications. We call the resulting algorithm "Aggregation Decision, Optimal with Complete Knowledge" (ADOCK). We also show that under a dynamic setting without full knowledge, ADOCK can still be applied to produce a low, yet not necessarily optimal, communication cost. We also devise a computationally cheaper dynamic approach called "Aggregation Decision with Weighted Publication" (WAD). We compare our solutions experimentally using two real datasets and explore the trade-offs with respect to communication and computation costs.
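The reduction at the heart of ADOCK can be illustrated in a few lines: build the bipartite graph, compute a maximum matching, and apply König's theorem to obtain a minimum vertex cover. The toy graph below is a stand-in, not the paper's actual pub/sub instance.

```python
import networkx as nx
from networkx.algorithms.bipartite import maximum_matching, to_vertex_cover

# Hypothetical bipartite instance: one side could model publications,
# the other aggregation opportunities; edges encode compatibility.
G = nx.Graph()
left = {"p1", "p2", "p3"}
G.add_edges_from([("p1", "s1"), ("p2", "s1"), ("p2", "s2"), ("p3", "s2")])

matching = maximum_matching(G, top_nodes=left)
cover = to_vertex_cover(G, matching, top_nodes=left)  # König's theorem
print(cover)  # a minimum set of nodes touching every edge
```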
Satellite cells, connective tissue fibroblasts and their interactions are crucial for muscle regeneration.
Muscle regeneration requires the coordinated interaction of multiple cell types. Satellite cells have been implicated as the primary stem cell responsible for regenerating muscle, yet the necessity of these cells for regeneration has not been tested. Connective tissue fibroblasts also are likely to play a role in regeneration, as connective tissue fibrosis is a hallmark of regenerating muscle. However, the lack of molecular markers for these fibroblasts has precluded an investigation of their role. Using Tcf4, a newly identified fibroblast marker, and Pax7, a satellite cell marker, we found that after injury satellite cells and fibroblasts rapidly proliferate in close proximity to one another. To test the role of satellite cells and fibroblasts in muscle regeneration in vivo, we created Pax7(CreERT2) and Tcf4(CreERT2) mice and crossed these to R26R(DTA) mice to genetically ablate satellite cells and fibroblasts. Ablation of satellite cells resulted in a complete loss of regenerated muscle, as well as misregulation of fibroblasts and a dramatic increase in connective tissue. Ablation of fibroblasts altered the dynamics of satellite cells, leading to premature satellite cell differentiation, depletion of the early pool of satellite cells, and smaller regenerated myofibers. Thus, we provide direct, genetic evidence that satellite cells are required for muscle regeneration and also identify resident fibroblasts as a novel and vital component of the niche regulating satellite cell expansion during regeneration. Furthermore, we demonstrate that reciprocal interactions between fibroblasts and satellite cells contribute significantly to efficient, effective muscle regeneration.
A Simple Algorithm for Automatic Generation of Polyphonic Piano Fingerings
We present a novel method for assigning fingers to notes in a polyphonic piano score. Such a mapping (called a "fingering") is of great use to performers. To accommodate performers' unique hand shapes, our method relies on a simple, user-specified cost function. We use dynamic programming to search the space of all possible fingerings for the optimal fingering under this cost function. Despite the simplicity of the algorithm, we achieve reasonable and useful results.
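A minimal dynamic-programming sketch in the spirit of the paper, reduced to a monophonic line with one finger (1-5) per note; the transition cost and note encoding are invented stand-ins for the paper's user-tunable cost over polyphonic scores.

```python
def transition_cost(f1, f2, interval):
    # Hypothetical penalty: discourage reusing a finger on a moving line
    # and penalize finger choices that do not match the pitch interval.
    stretch = abs(interval - 2 * (f2 - f1))
    return stretch + (5 if f1 == f2 and interval != 0 else 0)

def best_fingering(notes):
    fingers = range(1, 6)
    cost = {f: 0.0 for f in fingers}   # best cost of a prefix ending on f
    back = []                          # back-pointers, one dict per step
    for prev, cur in zip(notes, notes[1:]):
        interval = cur - prev
        new_cost, choice = {}, {}
        for f2 in fingers:
            f1 = min(fingers,
                     key=lambda f: cost[f] + transition_cost(f, f2, interval))
            new_cost[f2] = cost[f1] + transition_cost(f1, f2, interval)
            choice[f2] = f1
        back.append(choice)
        cost = new_cost
    f = min(cost, key=cost.get)        # best final finger, then walk back
    path = [f]
    for choice in reversed(back):
        f = choice[f]
        path.append(f)
    return path[::-1]

print(best_fingering([60, 62, 64, 65, 67]))  # C-major fragment, MIDI pitches
```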
Incomplete Multisource Transfer Learning
Transfer learning is generally exploited to adapt well-established source knowledge to learning tasks in a weakly labeled or unlabeled target domain. Nowadays, it is common to see multiple sources available for knowledge transfer, each of which, however, may not include complete class information of the target domain. Naively merging multiple sources together would lead to inferior results due to the large divergence among them. In this paper, we attempt to utilize incomplete multiple sources for effective knowledge transfer to facilitate the learning task in the target domain. To this end, we propose an incomplete multisource transfer learning approach with two directions of knowledge transfer: cross-domain transfer from each source to the target, and cross-source transfer. In particular, in the cross-domain direction, we deploy latent low-rank transfer learning guided by iterative structure learning to transfer knowledge from each single source to the target domain. This practice compensates for any missing data in each source using the complete target data. In the cross-source direction, an unsupervised manifold regularizer and effective multisource alignment are explored to jointly compensate for data missing from one source but present in another. In this way, both marginal and conditional distribution discrepancies in the two directions are mitigated. Experimental results on standard cross-domain benchmarks and synthetic data sets demonstrate the effectiveness of our proposed model in knowledge transfer from incomplete multiple sources.
Adefovir dipivoxil is effective for the treatment of cirrhotic patients with lamivudine failure.
BACKGROUND/AIMS Data on the efficacy of adefovir dipivoxil (ADV) in elderly and cirrhotic patients with lamivudine-resistant (LAM-R) chronic hepatitis B are scarce. This retrospective cohort study evaluated the safety and efficacy of ADV in this specific patient population. METHODS Sixty-eight cirrhotic LAM-R patients, of whom 19 (27.9%) were elderly (≥65 years of age) and nine had severe disease (two post-orthotopic liver transplantation, four pre-orthotopic liver transplantation and three decompensated), with hepatitis B virus (HBV) infection received ADV. Virological and biochemical responses to the addition of ADV were analysed. RESULTS At inclusion, all patients were receiving LAM; ADV was added. 75.4% of patients received a combination of LAM and ADV throughout this study for a median treatment duration of 12.6 months; the remainder received ADV with an overlap with LAM treatment for a median duration of 7.9 months. At the end of follow-up, 41.2% of patients had undetectable HBV DNA (≤2000 copies/ml) with a median reduction of 3.4 log10 copies/ml. Time to reach undetectable HBV DNA was dependent on baseline alanine aminotransferase (ALT) levels and HBeAg status. Normalization of serum ALT levels was observed in 55.2% (32/58) of patients. In patients who were HBeAg positive at baseline, HBeAg loss and seroconversion occurred in 23% (9/39) and 10% (4/39) respectively. No resistance mutations and no significant side effects were observed during the study period. CONCLUSION Adefovir dipivoxil provides effective and safe treatment in cirrhotic and elderly patients who failed LAM therapy.
Are Cars Just 3D Boxes? Jointly Estimating the 3D Shape of Multiple Objects
Current systems for scene understanding typically represent objects as 2D or 3D bounding boxes. While these representations have proven robust in a variety of applications, they provide only coarse approximations to the true 2D and 3D extent of objects. As a result, object-object interactions, such as occlusions or ground-plane contact, can be represented only superficially. In this paper, we approach the problem of scene understanding from the perspective of 3D shape modeling, and design a 3D scene representation that reasons jointly about the 3D shape of multiple objects. This representation allows us to express 3D geometry and occlusion at the fine detail level of individual vertices of 3D wireframe models, and makes it possible to treat dependencies between objects, such as occlusion reasoning, in a deterministic way. In our experiments, we demonstrate the benefit of jointly estimating the 3D shape of multiple objects in a scene over working with coarse boxes, on the recently proposed KITTI dataset of realistic street scenes.
Scheduling Parallel DAG Jobs Online to Minimize Average Flow Time
In this work, we study the problem of scheduling parallelizable jobs online with an objective of minimizing average flow time. Each parallel job is modeled as a DAG where each node is a sequential task and each edge represents dependence between tasks. Previous work has focused on a model of parallelizability known as the arbitrary speed-up curves setting, where a scalable algorithm is known. However, the DAG model is more widely used by practitioners, since many jobs generated from parallel programming languages and libraries can be represented in this model; yet little is known for this model in the online setting with multiple jobs. The DAG model and the speed-up curve models are incomparable, and algorithmic results from one do not immediately imply results for the other. Previous work has left open the question of whether an online algorithm can be O(1)-competitive with O(1)-speed for average flow time in the DAG setting. In this work, we answer this question positively by giving a scalable algorithm which is (1 + ε)-speed O(1/ε)-competitive for any ε > 0. We further introduce the first greedy algorithm for scheduling parallelizable jobs; our algorithm is a generalization of the shortest jobs first algorithm. Greedy algorithms are among the most useful in practice due to their simplicity. We show that this algorithm is (2 + ε)-speed O(1/ε)-competitive for any ε > 0.
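A toy discrete-time rendition of the greedy rule behind the second result: at each step, give processors to the jobs with the least remaining work first, each bounded by its current parallelism (number of ready tasks). The DAG encoding, unit-size tasks, and parameters are illustrative, not the paper's formal model.

```python
def simulate(jobs, m):
    """jobs: list of DAGs, each a dict {task: set of predecessor tasks},
    all tasks unit-size. Returns each job's completion time under a
    greedy shortest-remaining-work-first allocation of m processors."""
    done = [set() for _ in jobs]
    finish, t = {}, 0
    while len(finish) < len(jobs):
        order = sorted((j for j in range(len(jobs)) if j not in finish),
                       key=lambda j: len(jobs[j]) - len(done[j]))
        free = m
        for j in order:
            ready = [v for v, preds in jobs[j].items()
                     if v not in done[j] and preds <= done[j]]
            run = ready[:free]                 # run as many ready tasks
            done[j].update(run)                # as free processors allow
            free -= len(run)
            if len(done[j]) == len(jobs[j]):
                finish[j] = t + 1
            if free == 0:
                break
        t += 1
    return finish

# Two small jobs on m=2 processors: a 3-task chain and a fork-join diamond.
chain = {"a": set(), "b": {"a"}, "c": {"b"}}
diamond = {"s": set(), "x": {"s"}, "y": {"s"}, "t": {"x", "y"}}
print(simulate([chain, diamond], m=2))         # e.g. {0: 3, 1: 4}
```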
Mining communities and their relationships in blogs: A study of online hate groups
Blogs, often treated as the equivalent of online personal diaries, have become one of the fastest growing types of Web-based media. Everyone is free to express their opinions and emotions very easily through blogs. In the blogosphere, many communities have emerged, including hate groups and racists that are trying to share their ideology, express their views, or recruit new group members. It is important to analyze these virtual communities, defined based on membership and subscription linkages, in order to monitor for activities that are potentially harmful to society. While many Web mining and network analysis techniques have been used to analyze the content and structure of the Web sites of hate groups on the Internet, these techniques have not been applied to the study of hate groups in blogs. To address this issue, we have proposed a semi-automated approach in this research. The proposed approach consists of four modules, namely blog spider, information extraction, network analysis, and visualization. We applied this approach to identify and analyze a selected set of 28 anti-Black hate groups (820 bloggers) on Xanga, one of the most popular blog hosting sites. Our analysis results revealed some interesting demographic and topological characteristics in these groups, and identified at least two large communities on top of the smaller ones. The study also demonstrated the feasibility of applying the proposed approach to the study of hate groups and other related communities in blogs.
Quadrotor Using Minimal Sensing For Autonomous Indoor Flight
This paper presents a Miniature Aerial Vehicle (MAV) capable of hands-off autonomous operation within indoor environments. Our prototype is a quadrotor weighing approximately 600 g, with a diameter of 550 mm, which carries the necessary electronics for stability control, altitude control, collision avoidance and anti-drift control. This MAV is equipped with three rate gyroscopes, three accelerometers, one ultrasonic sensor, four infrared sensors, a high-speed motor controller and a flight computer. Autonomous flight tests have been carried out in a 7 × 6 m room.
Advances in paediatrics in 2016: current practices and challenges in allergy, autoimmune diseases, cardiology, endocrinology, gastroenterology, infectious diseases, neonatology, nephrology, neurology, nutrition, pulmonology
This review reports the main progress in various pediatric issues published in the Italian Journal of Pediatrics and in international journals in 2016. New insights into clinical features or complications of several disorders may be useful for our better understanding. These comprise severe asthma, changing features of lupus erythematosus from birth to adolescence, celiac disease, functional gastrointestinal disorders, Moebius syndrome, and recurrent pneumonia. Risk factors for congenital heart defects and Kawasaki disease have been widely investigated. New diagnostic tools are available for ascertaining brucellosis, celiac disease and viral infections. The usefulness of aCGH as a first-tier test is confirmed in patients with neurodevelopmental disorders. Novel information has been provided on the safety of milk for infants. Recent advances in the treatment of common disorders, including neonatal respiratory distress syndrome, hypoglycemia in newborns, atopic dermatitis, constipation, cyclic vomiting syndrome, nephrotic syndrome, diabetes mellitus, regurgitation, short stature, and secretions in children with cerebral palsy, have been reported. Antipyretic treatment has been updated by national guidelines, and studies have excluded side effects (e.g. asthma risk during acetaminophen therapy). Vaccinations are a painful event, and several options are reported to prevent this pain. Adverse effects due to metabolic abnormalities are reported for second-generation antipsychotic drugs.
Linking allochthonous dissolved organic matter and boreal lake sediment carbon sequestration: The role of light-mediated flocculation
We measured flocculation of dissolved organic carbon (DOC) in the water from a humic lake (DOC = 14.9 mg C L⁻¹) and from an adjacent mire (DOC = 25.7 mg C L⁻¹), in in situ enclosure experiments with different light regimes. Light stimulated the formation of organic particles in both waters, and organic particle formation was observed at all incubation depths, even in the dark controls. Production of phytoplankton biomass was negligible, and allochthonous DOC was the most important precursor of the sinking particles. 8–22% and 25–60% of the loss of DOC in lake and mire water, respectively, could be accounted for by flocculation. Depth-integrated flocculation based on the enclosure experiments was 14.7 mg C m⁻² d⁻¹. Lake-water DOC concentration and water color have been increasing during the last decade, and sediment trap studies show that gross sedimentation of organic carbon also increased. Thus flocculation of allochthonous DOC, stimulated by light, constitutes a pathway for the sequestration of carbon in lake sediments. Inland waters receive large amounts of organic carbon from their watersheds, but only about half of the organic carbon reaches the sea (Algesten et al. 2003; Cole et al. 2007). Most of the loss occurs in lakes (Algesten et al. 2003; Cole et al. 2007), primarily because of mineralization into CO2 followed by evasion to the atmosphere (Kling et al. 1991; Cole et al. 1994) and retention in sediments (Molot and Dillon 1997). Lake sediments are a considerable sink of organic carbon in boreal ecosystems. A study in boreal Finland suggests that carbon sequestration at the landscape level is greater in lake sediments than in forests and terrestrial soils (Kortelainen et al. 2004). Dissolved organic carbon (DOC) is the dominating fraction of organic carbon in most lake waters (Wetzel 2001), and in the unproductive forest lakes with high loads of allochthonous organic carbon typical of the boreal zone, particulate organic carbon (POC) generally constitutes less than 3–10% of the total carbon (Wetzel 2001; Kortelainen et al. 2006). Because most of the organic carbon is dissolved, transformation of DOC into a gravitoidal state, i.e., coagulation and flocculation into sinking POC, appears to be an important prerequisite for allochthonous carbon sequestration in lake sediments. However, the extent and mechanisms of such transformation processes are currently not well understood. Solar radiation, especially in the ultraviolet (UV) range, has a multitude of effects on the organic matter in aquatic systems, such as changes in structure, molecular weight, and optical properties (Bertilsson and Tranvik 2000). Solar radiation induces cleavage of high-molecular-weight DOC into a variety of photoproducts and inorganic carbon (Mopper et al. 1991; Moran and Zepp 1997; Bertilsson and Tranvik 2000), and increases the bioavailability of dissolved organic matter (DOM) to bacteria (Lindell et al. 1995; Wetzel et al. 1995). It has also been indicated that DOC can be transformed to particles by photochemical reactions. Irradiation of DOC has been reported to stimulate particle formation, involving iron as catalyst (Gao and Zepp 1998). Similarly, intense irradiation of humic water cleaved humic molecules into fragments, but also within hours resulted in a precipitate of brownish particles (Backlund 1992). Based on these observations we hypothesize that light-mediated flocculation of DOC into POC relocates organic carbon from the water column to the sediments.
We conducted enclosure experiments and sediment trap campaigns, demonstrating flocculation of DOC into POC under natural conditions, and showing that sunlight plays an important role in the transformation from a dissolved or colloidal state into sinking particles.
Concepts in a Probabilistic Language of Thought
Knowledge organizes our understanding of the world, determining what we expect given what we have already seen. Our predictive representations have two key properties: they are productive, and they are graded. Productive generalization is possible because our knowledge decomposes into concepts—elements of knowledge that are combined and recombined to describe particular situations. Gradedness is the observable effect of accounting for uncertainty—our knowledge encodes degrees of belief that lead to graded probabilistic predictions. To put this a different way, concepts form a combinatorial system that enables description of many different situations; each such situation specifies a distribution over what we expect to see in the world, given what we have seen. We may think of this system as a probabilistic language of thought (PLoT) in which representations are built from language-like composition of concepts and the content of those representations is a probability distribution on world states. The purpose of this chapter is to formalize these ideas in computational terms, to illustrate key properties of the PLoT approach with a concrete example, and to draw connections with other views of conceptual structure. People are remarkably flexible at understanding new situations, guessing at unobserved properties or events, and making predictions on the basis of sparse evidence combined with general background knowledge. Consider the game of tug-of-war: two teams matching their strength by pulling on either side of a rope. If a team containing the first author (NG) loses to a team containing the third author (TG), that might provide weak evidence that TG is the stronger of the two. If these teams contain only two members each, we might believe more in TG's greater strength than if the teams contain eight members each. If TG beats NG in a one-on-one tug-of-war, and NG goes on to beat three other individuals in similar one-on-one contests, we might believe that TG is not only stronger than NG but strong in an absolute sense, relative to the general population, even though we have only directly observed TG participating in a single match. However, if we later found out that NG did not try very hard in his match against TG, but did try hard in his later matches, our convictions about TG's strength would have to be revised.
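The tug-of-war example can be written down as a tiny generative model and queried by rejection sampling, in the spirit of a probabilistic language of thought. The Gaussian strength prior, the laziness rule, and all constants below are illustrative assumptions, not the chapter's model.

```python
import random

def team_pull(strengths, team, lazy):
    # A lazy player pulls at half strength in that match.
    return sum(strengths[p] / (2 if lazy.get(p) else 1) for p in team)

def posterior_mean_strength(person, matches, n=50_000):
    """matches: list of (team1, team2, team1_won, lazy) tuples.
    Estimates E[strength(person) | observed outcomes] by rejection sampling."""
    people = {p for t1, t2, _, _ in matches for p in t1 + t2}
    total, kept = 0.0, 0
    for _ in range(n):
        s = {p: random.gauss(10, 3) for p in people}   # prior over strength
        consistent = all(
            (team_pull(s, t1, lz) > team_pull(s, t2, lz)) == won
            for t1, t2, won, lz in matches
        )
        if consistent:
            total += s[person]
            kept += 1
    return total / kept if kept else float("nan")

# TG beats NG one-on-one; NG then beats three others.  The posterior mean
# strength of TG ends up well above the prior mean of 10.
matches = [(["TG"], ["NG"], True, {}),
           (["NG"], ["A"], True, {}),
           (["NG"], ["B"], True, {}),
           (["NG"], ["C"], True, {})]
print(posterior_mean_strength("TG", matches))
```

Rerunning the query with a laziness flag on NG in the first match (replacing `{}` with `{"NG": True}`) lowers the inferred strength of TG, mirroring the explaining-away behavior the passage describes.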
Pharmacokinetics and safety of a 7-day administration of intravenous itraconazole followed by a 14-day administration of itraconazole oral solution in patients with hematologic malignancy.
The pharmacokinetics and safety of an intravenous hydroxypropyl-beta-cyclodextrin solution of itraconazole administered for 7 days followed by itraconazole oral solution administered at 200 mg once or twice daily for 14 days were assessed in 17 patients with hematologic malignancies. Steady-state plasma itraconazole concentrations were reached by 48 h after the start of intravenous treatment. The mean trough plasma itraconazole concentration at the end of the intravenous treatment was 0.54 +/- 0.20 microg/ml. This concentration was not maintained during once-daily oral treatment but increased further in the twice-daily treatment group, with a trough itraconazole concentration of 1.12 +/- 0.73 microg/ml at the end of oral treatment. As expected in the patient population studied, all patients experienced some adverse events (mainly gastrointestinal). Biochemical and hematologic abnormalities were frequent, but no consistent changes occurred. In conclusion, 7 days of intravenous treatment followed by 14 days of twice-daily oral treatment with itraconazole solution enables plasma itraconazole concentrations of at least 0.5 microg/ml to be reached rapidly and to be maintained. The regimen is well tolerated and has a good safety profile.
Cranial tip suture in nasal tip contouring.
The creation of both a functionally and aesthetically pleasing nasal tip contour is demanding and depends on various parameters. Typically, procedures are performed with emphasis on narrowing the nasal tip structure. Excisional techniques alone inevitably lead to a reduction in skeletal support and are often prone to unpredictable deformities. Long-term results of classical suture techniques have also shown unfavorable outcomes. In particular, pinching of the ala and a displacement of the caudal margin of the lateral crus below the cephalic margin belong to this category. A characteristic loss of structural continuity between the domes and the alar lobule and an undesirable shadowing occur. These effects lead to an unnatural appearance of the nasal tip and frequently to impaired nasal breathing. Stability and configuration of the alar cartilages alone do not allow for an adequate evaluation of the nasal tip contour. Rather, a three-dimensional approach is required to describe all nasal tip structures. Specifically, the rotational angle of the alar surface as well as the longitudinal axis of the lateral crus in relation to the cranial septum should be considered in the three-dimensional analysis. Taking the various parameters into account, the authors present new aspects of nasal tip surgery which contribute to the creation of a functionally and aesthetically pleasing as well as durable nasal tip contour.
Combination vinorelbine and capecitabine for metastatic breast cancer using a non-body surface area dosing scheme
Purpose: This study sought to determine the maximum tolerated dose of flat-dosed vinorelbine in combination with capecitabine in patients with metastatic breast cancer. At the time of study initiation, it was anticipated that vinorelbine would be developed as an oral capsule. A flat dosing scheme of both drugs was used to facilitate development of the oral regimen, and because neither drug's clearance is associated with body surface area (BSA), pharmacokinetic and pharmacogenetic endpoints were explored. Experimental Design: Capecitabine was administered orally at 3,000 mg/day on days 1–14. The starting dose of vinorelbine was 20 mg intravenously on days 1 and 8 of a 21-day cycle. The vinorelbine dose was escalated until dose-limiting toxicity (DLT). Vinorelbine pharmacokinetics were measured after the first dose. Patients underwent genotype analysis for polymorphisms in the CYP3A5 gene, and the erythromycin breath test (ERMBT), a phenotypic test of CYP3A enzyme activity. Results: Twenty-five eligible patients were enrolled. Hematologic DLT was seen at the 50 and 45 mg vinorelbine doses; thus the recommended dose is 40 mg on days 1 and 8. The response rate was 30%, and the disease stabilization rate was 64% (all dose levels included). Vinorelbine clearance was not associated with ERMBT, BSA, or age. CYP3A5 genotype in this small sample did not have an obvious relationship to clearance or toxicity. Conclusions: A non-BSA-based dosing scheme of capecitabine and vinorelbine is safe and efficacious. BSA did not affect vinorelbine clearance. We recommend future studies with capecitabine and/or vinorelbine to compare the safety and efficacy of flat-dosed versus BSA-dosed treatment.
Growth rates of ovarian follicles during natural menstrual cycles, oral contraception cycles, and ovarian stimulation cycles.
OBJECTIVE To compare growth rates of ovarian follicles during natural menstrual cycles, oral contraception (OC) cycles, and ovarian stimulation cycles using standardized techniques. DESIGN Prospective, comparative, observational, longitudinal study. SETTING Healthy volunteers in research trials and infertility patients undergoing treatment at an academic institution. PATIENT(S) Women were evaluated during natural cycles (n = 50), OC cycles (n = 71), and ovarian stimulation cycles (n = 131). INTERVENTION(S) Serial transvaginal ultrasonography was performed to measure follicle diameter. Day-to-day growth and regression profiles of individual follicles were determined. Mean growth rates were calculated for ovulatory follicles. Mean growth and regression rates were calculated for anovulatory follicles. MAIN OUTCOME MEASURE(S) Follicle growth rate (in millimeters per day). RESULT(S) Mean follicular growth rate was greater during ovarian stimulation cycles (1.69 +/- 0.03 mm/day) compared to natural (1.42 +/- 0.05 mm/day) and OC cycles (1.36 +/- 0.08 mm/day). The interval from dominant follicle selection to ovulation was shorter during stimulation cycles (5.08 +/- 0.07 days) compared to natural cycles (7.16 +/- 0.23 days). CONCLUSION(S) Follicles grew faster during ovarian stimulation therapy compared to natural cycles or OC cycles. Greater follicular growth rates in stimulation cycles were associated with shorter intervals from selection to ovulation. The biologic effects of increased follicular growth rates and shorter intervals to ovulation on oocyte competence in women undergoing assisted reproduction remain to be determined.
Identification and Characterization of Receptor-Specific Peptides for siRNA Delivery
Tumor-targeted delivery of siRNA remains a major barrier in fully realizing the therapeutic potential of RNA interference. While cell-penetrating peptides (CPP) are promising siRNA carrier candidates, they are universal internalizers that lack cell-type specificity. Herein, we design and screen a library of tandem tumor-targeting and cell-penetrating peptides that condense siRNA into stable nanocomplexes for cell type-specific siRNA delivery. Through physicochemical and biological characterization, we identify a subset of the nanocomplex library that is taken up by cells via endocytosis, triggers endosomal escape and unpacking of the carrier, and ultimately delivers siRNA to the cytosol in a receptor-specific fashion. To better understand the structure-activity relationships that govern receptor-specific siRNA delivery, we employ computational regression analysis and identify a set of key convergent structural properties, namely the valence of the targeting ligand and the charge of the peptide, that help transform ubiquitously internalizing cell-penetrating peptides into cell type-specific siRNA delivery systems.
Electrodermal responses: what happens in the brain.
Electrodermal activity (EDA) is now the preferred term for changes in electrical conductance of the skin, including phasic changes that have been referred to as galvanic skin responses (GSR), which result from sympathetic neuronal activity. EDA is a sensitive psychophysiological index of changes in autonomic sympathetic arousal that are integrated with emotional and cognitive states. Until recently there was little direct knowledge of the brain mechanisms governing the generation and control of EDA in humans. However, studies of patients with discrete brain lesions and, more recently, functional imaging techniques have clarified the contribution of brain regions implicated in emotion, attention, and cognition to peripheral EDA responses. Moreover, such studies enable an understanding of the mechanisms by which states of bodily arousal, indexed by EDA, influence cognition and bias motivational behavior.
Comparison of the efficacy and safety of rosuvastatin versus atorvastatin, simvastatin, and pravastatin across doses (STELLAR* Trial).
The primary objective of this 6-week, parallel-group, open-label, randomized, multicenter trial was to compare rosuvastatin with atorvastatin, pravastatin, and simvastatin across dose ranges for reduction of low-density lipoprotein (LDL) cholesterol. Secondary objectives included comparing rosuvastatin with comparators for other lipid modifications and achievement of National Cholesterol Education Program Adult Treatment Panel III and Joint European Task Force LDL cholesterol goals. After a dietary lead-in period, 2,431 adults with hypercholesterolemia (LDL cholesterol > or =160 and <250 mg/dl; triglycerides <400 mg/dl) were randomized to treatment with rosuvastatin 10, 20, 40, or 80 mg; atorvastatin 10, 20, 40, or 80 mg; simvastatin 10, 20, 40, or 80 mg; or pravastatin 10, 20, or 40 mg. At 6 weeks, across-dose analyses showed that rosuvastatin 10 to 80 mg reduced LDL cholesterol by a mean of 8.2% more than atorvastatin 10 to 80 mg, 26% more than pravastatin 10 to 40 mg, and 12% to 18% more than simvastatin 10 to 80 mg (all p <0.001). Mean percent changes in high-density lipoprotein cholesterol in the rosuvastatin groups were +7.7% to +9.6% compared with +2.1% to +6.8% in all other groups. Across dose ranges, rosuvastatin reduced total cholesterol significantly more (p <0.001) than all comparators and triglycerides significantly more (p <0.001) than simvastatin and pravastatin. Adult Treatment Panel III LDL cholesterol goals were achieved by 82% to 89% of patients treated with rosuvastatin 10 to 40 mg compared with 69% to 85% of patients treated with atorvastatin 10 to 80 mg; the European LDL cholesterol goal of <3.0 mmol/L was achieved by 79% to 92% in rosuvastatin groups compared with 52% to 81% in atorvastatin groups. Drug tolerability was similar across treatments.
Stream reasoning: A survey and outlook. A summary of ten years of research and a vision for the next decade
Stream reasoning studies the application of inference techniques to data characterised by being highly dynamic. It can find application in several settings, from Smart Cities to Industry 4.0, from the Internet of Things to Social Media analytics. This year stream reasoning turns ten, and in this article we analyse its growth. In the first part, we trace the main results obtained so far by presenting the most prominent studies. We start with an overview of the most relevant studies developed in the context of the semantic web, and then extend the analysis to include contributions from adjacent areas, such as databases and artificial intelligence. Looking at the past is useful to prepare for the future: in the second part, we present a set of open challenges and issues that stream reasoning will face in the near future.
Looking at Humans in the Age of Self-Driving and Highly Automated Vehicles
This paper highlights the role of humans in the next generation of driver assistance and intelligent vehicles. Understanding, modeling, and predicting human agents are discussed in three domains where humans and highly automated or self-driving vehicles interact: 1) inside the vehicle cabin, 2) around the vehicle, and 3) inside surrounding vehicles. Efforts within each domain, integrative frameworks across domains, and scientific tools required for future developments are discussed to provide a human-centered perspective on research in intelligent vehicles.
Risk and maintenance factors for eating pathology: a meta-analytic review.
This meta-analytic review of prospective and experimental studies reveals that several accepted risk factors for eating pathology have not received empirical support (e.g., sexual abuse) or have received contradictory support (e.g., dieting). There was consistent support for less-accepted risk factors (e.g., thin-ideal internalization) as well as emerging evidence for variables that potentiate and mitigate the effects of risk factors (e.g., social support) and factors that predict eating pathology maintenance (e.g., negative affect). In addition, certain multivariate etiologic and maintenance models received preliminary support. However, the predictive power of individual risk and maintenance factors was limited, suggesting it will be important to search for additional risk and maintenance factors, develop more comprehensive multivariate models, and address methodological limitations that attenuate effects.
Weekday on-weekend off oral capecitabine: a phase I study of a continuous schedule better simulating protracted fluoropyrimidine therapy
Although protracted intravenous 5-fluorouracil is superior to bolus regimens in terms of tumour exposure to the drug during DNA synthesis as well as activity and safety, the oral fluoropyrimidine capecitabine is administered intermittently. In this phase I study, we investigated an alternative, dose-intense continuous regimen. Oral capecitabine was administered twice daily continuously with weekend breaks, in patients with advanced solid tumours refractory to standard therapy. Dose escalation proceeded from 1,331 to 2,510 mg/m2 daily. Dose limiting toxicity (DLT) consisted of any grade-3 or 4 adverse event except for alopecia and skin toxicity resolving within 7 days. Twenty-five heavily pretreated patients participated in the study. No DLT occurred in the first four cohorts. Two out of four patients developed grade III diarrhoea in the fourth week of capecitabine at 2,510 mg/m2 (DLT). The most common toxic episodes during all cycles of treatment were grade 1–2 fatigue, skin erythema, abdominal cramps, nausea, constipation and neutropenia. Disease regression was seen in three and stabilisation with clinical benefit in ten patients (clinical benefit response 54%). Pharmacokinetic studies of capecitabine and metabolites in four patients at 2,250 mg/m2 daily showed rapid absorption, short plasma half-lives with the exception of FBAL and absence of accumulation or conversion saturation during the course of therapy. At this dose, administered dose intensity in eight patients was 99.3% of the planned one. Weekday on-weekend off capecitabine maximizes cytotoxic impact on tumour cells during S-phase by safely simulating protracted fluoropyrimidine therapy at a recommended dose (2,250 mg/m2) close to that of the intermittent schedule and clearly higher than the continuous one of 1,331 mg/m2.
WebTables: exploring the power of tables on the web
The World-Wide Web consists of a huge number of unstructured documents, but it also contains structured data in the form of HTML tables. We extracted 14.1 billion HTML tables from Google's general-purpose web crawl, and used statistical classification techniques to find the estimated 154M that contain high-quality relational data. Because each relational table has its own "schema" of labeled and typed columns, each such table can be considered a small structured database. The resulting corpus of databases is larger than any other corpus we are aware of, by at least five orders of magnitude. We describe the WebTables system to explore two fundamental questions about this collection of databases. First, what are effective techniques for searching for structured data at search-engine scales? Second, what additional power can be derived by analyzing such a huge corpus? First, we develop new techniques for keyword search over a corpus of tables, and show that they can achieve substantially higher relevance than solutions based on a traditional search engine. Second, we introduce a new object derived from the database corpus: the attribute correlation statistics database (AcsDB) that records corpus-wide statistics on co-occurrences of schema elements. In addition to improving search relevance, the AcsDB makes possible several novel applications: schema auto-complete, which helps a database designer to choose schema elements; attribute synonym finding, which automatically computes attribute synonym pairs for schema matching; and join-graph traversal, which allows a user to navigate between extracted schemas using automatically-generated join links.
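The AcsDB idea of corpus-wide schema statistics can be illustrated in a few lines. The sketch below counts co-occurrences of column labels across a handful of invented toy schemas and ranks auto-complete suggestions by conditional frequency; it is a conceptual miniature, not the WebTables pipeline.

```python
from collections import Counter
from itertools import combinations

# Invented toy schemas standing in for the extracted relational tables.
schemas = [
    ("make", "model", "year", "price"),
    ("make", "model", "year"),
    ("name", "address", "city", "state", "zip"),
    ("name", "address", "city", "zip"),
    ("make", "model", "price"),
]

attr = Counter(a for s in schemas for a in s)    # per-attribute frequencies
pair = Counter(frozenset(p) for s in schemas for p in combinations(set(s), 2))

def suggest(given, k=3):
    """Rank candidate attributes by P(candidate | given), the statistic
    behind schema auto-complete."""
    scores = {a: pair[frozenset((given, a))] / attr[given]
              for a in attr if a != given}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

print(suggest("make"))   # e.g. [('model', 1.0), ('year', 0.67), ('price', 0.67)]
```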
Applying graph-based data mining concepts to the educational sphere
In this study, we discuss the possible application of the ubiquitous complex network approach to information extraction from educational data. Since the complex administration systems of educational institutes produce a huge amount of detailed data, new types of data processing techniques, rather than the classical statistical methods, are required to handle it. We define several suitable network representations of students, teachers and subjects in public education and present some possible ways in which graph mining techniques can be used to get detailed information about them. Depending on the construction of the underlying graph, we examine several network models and discuss which graph mining tools (such as community detection, ranking, and centrality measures) are the most appropriate to apply to them. Lastly, we attempt to highlight the many advantages of graph-based data mining on educational data over the classical evaluation techniques.
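As a toy illustration of one such representation, the sketch below projects an invented bipartite student-subject enrolment onto a weighted student graph and ranks students by a simple degree centrality; the data and the choice of projection are assumptions for demonstration only.

```python
from collections import defaultdict
from itertools import combinations

# Invented enrolment data: which subjects each student takes.
enrolment = {
    "alice": {"math", "physics"},
    "bob":   {"math", "history"},
    "carol": {"physics", "history"},
    "dave":  {"math"},
}

# Project the bipartite student-subject graph onto students: two students
# are linked with a weight equal to the number of subjects they share.
edges = {}
for a, b in combinations(enrolment, 2):
    shared = len(enrolment[a] & enrolment[b])
    if shared:
        edges[(a, b)] = shared

# Weighted degree centrality, one of the simplest ranking measures.
degree = defaultdict(int)
for (a, b), w in edges.items():
    degree[a] += w
    degree[b] += w

print(sorted(degree.items(), key=lambda kv: -kv[1]))
```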
Feature selection for spatially enhanced LBP: application to face recognition
Block-based local binary patterns, a.k.a. enhanced local binary patterns (ELBP), have proven to be a highly discriminative descriptor for face recognition and image retrieval. Since this descriptor is mainly composed of histograms, little work (if any) has been done on selecting its relevant features (either the bins or the blocks). In this paper, we address feature selection for both the classic ELBP representation and the recently proposed color quaternionic LBP (QLBP). We introduce a filter method for the automatic weighting of attributes or blocks using an improved version of the margin-based iterative search Simba algorithm. This improved version introduces two main modifications: (i) the hypothesis margin of a given instance is computed by taking into account the K-nearest neighboring examples within the same class as well as the K-nearest neighboring examples with a different label; (ii) the distances between samples and their nearest neighbors are computed using the weighted $\chi^2$ distance instead of the Euclidean one. This algorithm compares favorably with several competing feature selection algorithms, including the Euclidean-based Simba as well as the variance and Fisher score algorithms, giving higher performance. The proposed method is useful for other descriptors that are formed from histograms. Experimental results show that the QLBP descriptor allows an improvement in accuracy in discriminating faces compared with the ELBP. They also show that the obtained selection (attributes or blocks) can either improve recognition performance or maintain it with a significant reduction in descriptor size.
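The weighted χ² distance at the heart of modification (ii) is straightforward to state in code. The sketch below is a generic implementation with invented histograms and bin weights; in the actual method the weights would come from the margin-based Simba search.

```python
def weighted_chi2(h1, h2, w, eps=1e-12):
    """Weighted chi-square distance between histograms h1 and h2, with one
    weight per bin (all three arguments are equal-length sequences)."""
    return sum(wi * (a - b) ** 2 / (a + b + eps)
               for wi, a, b in zip(w, h1, h2))

def hypothesis_margin(x, nearhit, nearmiss, w):
    # Margin of x: weighted distance to the nearest other-class sample
    # minus weighted distance to the nearest same-class sample.
    return 0.5 * (weighted_chi2(x, nearmiss, w) - weighted_chi2(x, nearhit, w))

h1 = [0.2, 0.5, 0.3]
h2 = [0.1, 0.6, 0.3]
weights = [1.0, 2.0, 0.5]   # stand-ins for margin-learned bin weights
print(weighted_chi2(h1, h2, weights))
```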
The endocannabinoid anandamide inhibits neuronal progenitor cell differentiation through attenuation of the Rap1/B-Raf/ERK pathway.
Endocannabinoids are neuromodulators that act as retrograde synaptic messengers inhibiting the release of different neurotransmitters in cerebral areas such as hippocampus, cortex, and striatum. However, little is known about other roles of the endocannabinoid system in brain. In the present work we provide substantial evidence that the endocannabinoid anandamide (AEA) regulates neuronal differentiation both in culture and in vivo. Thus AEA, through the CB(1) receptor, inhibited cortical neuron progenitor differentiation to mature neuronal phenotype. In addition, human neural stem cell differentiation and nerve growth factor-induced PC12 cell differentiation were also inhibited by cannabinoid challenge. AEA decreased PC12 neuronal-like generation via CB(1)-mediated inhibition of sustained extracellular signal-regulated kinase (ERK) activation, which is responsible for nerve growth factor action. AEA thus inhibited TrkA-induced Rap1/B-Raf/ERK activation. Finally, immunohistochemical analyses by confocal microscopy revealed that adult neurogenesis in dentate gyrus was significantly decreased by the AEA analogue methanandamide and increased by the CB(1) antagonist SR141716. These data indicate that endocannabinoids inhibit neuronal progenitor cell differentiation through attenuation of the ERK pathway and suggest that they constitute a new physiological system involved in the regulation of neurogenesis.
Exogenous Approach to Grid Cost Allocation in Peer-to-Peer Electricity Markets
The deployment of distributed energy resources, combined with a more proactive demand side, is inducing a new paradigm in power system operation and electricity markets. Within a consumer-centric market framework, peer-to-peer approaches have gained substantial interest. Peer-to-peer markets rely on multi-bilateral direct negotiation among all players to match supply and demand, with product differentiation. These markets can yield a complete mapping of exchanges onto the grid, hence allowing us to rethink our approach to sharing costs related to usage of common infrastructure and services. We propose here to attribute such costs in a number of alternative ways that reflect different views on usage of the grid and on cost allocation, i.e., uniformly and based on the electrical distance between players. Since attribution mechanisms are defined in an exogenous manner and made transparent, they eventually affect the trades of the market participants and related grid usage. The interest of our approach is illustrated on a test case using the IEEE 39-bus test system, underlining the impact of attribution mechanisms on trades and grid usage.
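The two attribution rules can be made concrete with a small numerical sketch: the same grid fee is split either uniformly per unit of traded energy or in proportion to an electrical distance between the trading peers. The trades, distances, and fee below are invented; the paper derives distances from the network model.

```python
# Invented bilateral trades (kWh) and electrical distances between peers.
trades = {("p1", "p2"): 4.0, ("p1", "p3"): 2.0, ("p2", "p3"): 1.0}
dist   = {("p1", "p2"): 1.0, ("p1", "p3"): 3.0, ("p2", "p3"): 2.0}
total_fee = 12.0   # common infrastructure cost to recover

def uniform_allocation(trades, fee):
    """Every traded kWh pays the same share of the fee."""
    energy = sum(trades.values())
    return {t: fee * q / energy for t, q in trades.items()}

def distance_allocation(trades, dist, fee):
    """Each trade pays in proportion to energy times electrical distance."""
    weighted = {t: q * dist[t] for t, q in trades.items()}
    norm = sum(weighted.values())
    return {t: fee * w / norm for t, w in weighted.items()}

print(uniform_allocation(trades, total_fee))
print(distance_allocation(trades, dist, total_fee))
```

Comparing the two outputs shows how the distance-based rule shifts cost toward electrically remote trades, which is exactly the behavioral signal the market design intends.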
Reality Skins: Creating Immersive and Tactile Virtual Environments
Reality Skins enables mobile and large-scale virtual reality experiences, dynamically generated based on the user's environment. A head-mounted display (HMD) coupled with a depth camera is used to scan the user's surroundings: reconstruct geometry, infer floor plans, and detect objects and obstacles. From these elements we generate a Reality Skin, a 3D environment which replaces office or apartment walls with the corridors of a spaceship or underground tunnels, and replaces chairs and desks, sofas and beds with crates and computer consoles, fungi and crumbling ancient statues. The placement of walls, furniture and objects in the Reality Skin attempts to approximate reality, such that the user can move around, and touch virtual objects with tactile feedback from real objects. Each possible Reality Skins world consists of objects, materials and custom scripts. Taking cues from the user's surroundings, we create a unique environment combining these building blocks, attempting to preserve the geometry and semantics of the real world. We tackle 3D environment generation as a constraint satisfaction problem, and break it into two parts: First, we use a Markov Chain Monte-Carlo optimization, over a simple 2D polygonal model, to infer the layout of the environment (the structure of the virtual world). Then, we populate the world with various objects and characters, attempting to satisfy geometric (virtual objects should align with objects in the environment), semantic (a virtual chair aligns with a real one), physical (avoid collisions, maintain stability) and other constraints. We find a discrete set of transformations for each object satisfying unary constraints, incorporate pairwise and higher-order constraints, and optimize globally using a very recent technique based on semidefinite relaxation.
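The layout-inference step can be caricatured as a vanilla Metropolis sampler over object positions, with an energy that rewards alignment with scanned anchors and penalizes collisions. The energy terms, constants, and 2D point objects below are stand-ins, not the paper's model.

```python
import math
import random

random.seed(0)
real_anchors = [(1.0, 1.0), (3.0, 2.0)]   # scanned positions of real objects

def energy(layout):
    # Alignment term: each virtual object should sit near its real anchor.
    e = sum((x - ax) ** 2 + (y - ay) ** 2
            for (x, y), (ax, ay) in zip(layout, real_anchors))
    # Soft collision term: penalize pairs of objects closer than 1 m.
    for i in range(len(layout)):
        for j in range(i + 1, len(layout)):
            d2 = ((layout[i][0] - layout[j][0]) ** 2
                  + (layout[i][1] - layout[j][1]) ** 2)
            e += 10.0 * max(0.0, 1.0 - d2)
    return e

layout, T = [(0.0, 0.0), (0.5, 0.5)], 0.1
for _ in range(20_000):
    i = random.randrange(len(layout))
    x, y = layout[i]
    cand = layout.copy()
    cand[i] = (x + random.gauss(0, 0.1), y + random.gauss(0, 0.1))
    dE = energy(cand) - energy(layout)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        layout = cand                      # Metropolis accept/reject

print(layout)   # ends near the anchors, without overlap
```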
Midterm Examination I: Solutions ECON 504 - Microeconomics II
Solution: M and C are strictly dominated and hence cannot receive positive probability in any Nash equilibrium. Given that only L and R receive positive probability, T cannot receive positive probability either. So, in any Nash equilibrium player 1 must play B with probability one. Given that, any probability distribution over L and R is a best response for player 2. In other words, the set of Nash equilibria is given by {(B, qL + (1 − q)R) : q ∈ [0, 1]}.
cGANs with Projection Discriminator
We propose a novel, projection-based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in applications today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of class-conditional image generation on the ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high-quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.
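The projection discriminator's output has a simple closed form: an unconditional term plus an inner product between the feature vector and a class embedding. The numpy sketch below shows that form with placeholder dimensions and a random stand-in for the feature extractor.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_classes = 128, 1000
V = rng.normal(size=(n_classes, d)) * 0.02    # class embedding matrix
w_psi = rng.normal(size=d) * 0.02             # final linear layer psi

def discriminator_logit(phi_x, y):
    """Logit = psi(phi(x)) + v_y . phi(x): an unconditional score plus a
    projection of the features onto the class-y embedding."""
    return w_psi @ phi_x + V[y] @ phi_x

phi_x = rng.normal(size=d)                    # stand-in for CNN features
print(discriminator_logit(phi_x, y=3))
```

The contrast with concatenation-based conditioning is that the label enters only through this inner product, matching the role the label plays in the underlying probabilistic model.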
Volcanogenic Sedimentation in the Lesser Antilles Arc
Combination of deep-sea coring and land-based studies has provided a quantitative view of volcanogenic sediments in and around the Lesser Antilles arc in the late Quaternary. Of the total of 527 km³ of volcanics produced in the arc in the last 10⁵ years, over 80% has been deposited as volcanogenic sediments in the adjacent marine basins as ash-fall layers, pyroclastic debris flow deposits and volcanic sands. Over 70% of total volcanogenic sedimentation from the arc is received by the back-arc Grenada Basin west of the arc in the form of sediment gravity flows. The volumetric role of ash-fall layers is thus relatively small. Approximately 40% of the ash-fall in the Atlantic is dispersed in the sediment. The asymmetric distribution of volcanogenic sediments around the arc, with pyroclastic debris flow deposits predominating west of the arc and ash-fall layers in the Atlantic east of the arc, is attributed to the effects of prevailing high-altitude wind direction, different submarine arc slope...
Effects of supplementation of maize silage diets with extruded linseed, vitamin E and plant extracts rich in polyphenols, and morning v. evening milking on milk fatty acid profiles in Holstein and Montbéliarde cows.
The aim of this study was to evaluate the effects on dairy performance and milk fatty acid (FA) composition of (i) supplementation with extruded linseed (EL), (ii) supplementation with synthetic or natural antioxidants, namely vitamin E and plant extracts rich in polyphenols (PERP), (iii) cow breed (Holstein v. Montbéliarde) and (iv) time of milking (morning v. evening). After a 3-week pre-experimental period, 24 lactating cows (12 Holstein and 12 Montbéliarde) were divided into four groups of six cows: the first group received a daily control diet (diet C) based on maize silage. The second group received the same diet supplemented with EL (diet EL, fat level approximately 5% of dietary dry matter (DM)). The third group received the EL diet plus 375 IU/kg diet DM of vitamin E (diet ELE). The fourth group received the ELE diet plus 10 g/kg diet DM of a PERP mixture (diet ELEP). Compared with diet C, feeding EL-rich diets led to lower concentrations of total saturated FA (SFA) and higher concentrations of stearic and oleic acids, each trans and cis isomer of 18:1 (except c12-18:1), non-conjugated isomers of 18:2, some isomers (c9t11-, c9c11- and t11t13-) of conjugated linoleic acid (CLA), and 18:3n-3. The vitamin E supplementation had no effect on milk yield, milk fat or protein percentage and only moderate effects on milk concentrations of FA (increase in 16:0, decreases in 18:0 and t6/7/8-18:1). The addition of PERP to vitamin E did not modify milk yield or composition and slightly altered milk FA composition (decrease in total SFA and increase in monounsaturated FA (MUFA)). The minor effects of vitamin E may be partly linked to the fact that no milk fat depression occurred with the EL diet. During both periods the Holstein cows had higher milk production, milk fat and protein yields, and milk percentages of 4:0 and 18:3n-3, and lower percentages of odd-branched chain FA (OBCFA) than the Montbéliarde cows. During the experimental period the Holstein cows had lower percentages of total cis 18:1 and c9,c11-CLA, and higher percentages of 6:0, 8:0, t12-, t16/c14- and t13/14-18:1, and 18:2n-6 than Montbéliarde cows, because of several significant interactions between breed and diet. Also, the total SFA percentage was higher for morning than for evening milkings, whereas those of MUFA, total cis 18:1, OBCFA and 18:2n-6 were lower. Extruded linseed supplementation had a greater effect on milk FA composition than antioxidants, breed or time of milking.
Oceans of Crenarchaeota: a Personal History Describing This Paradigm Shift: Studies of marine archaea during the past two decades unveil the distinctive biology of these diverse and abundant microorganisms
The notion that Archaea are abundant in most natural habitats, especially the oceans, marks a fundamental sea change from a generation ago, one that met considerable resistance at first. The seeds of the idea trace to 1977, when Carl Woese at the University of Illinois and George Fox, now at the University of Houston, proposed, on the basis of painstakingly analyzed ribosomal RNA (rRNA) oligonucleotide sequences, that life stems from “three aboriginal lines of descent.” Their findings are considered among the most important in modern biology.
Accurate Linear-Time Chinese Word Segmentation via Embedding Matching
This paper proposes an embedding matching approach to Chinese word segmentation, which generalizes the traditional sequence labeling framework and takes advantage of distributed representations. The training and prediction algorithms have linear-time complexity. Based on the proposed model, a greedy segmenter is developed and evaluated on benchmark corpora. Experiments show that our greedy segmenter achieves improved results over previous neural network-based word segmenters, and its performance is competitive with state-of-the-art methods, despite its simple feature set and the absence of external resources for training.
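A minimal caricature of the matching idea: at each character position, score every vocabulary word that matches the upcoming characters by the dot product between its embedding and a context embedding, and commit greedily. The vocabulary, embeddings, and hashed context function below are invented stand-ins for the trained model, but the linear-time greedy control flow mirrors the segmenter described above.

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)
d = 16
vocab = ["中国", "中", "国", "人民", "人", "民"]
word_emb = {w: rng.normal(size=d) for w in vocab}     # stand-in embeddings

def context_emb(text, pos):
    # Deterministic stand-in for the model's learned context representation.
    h = zlib.crc32(text[max(0, pos - 2):pos + 2].encode("utf-8"))
    return np.random.default_rng(h).normal(size=d)

def greedy_segment(text):
    out, i = [], 0
    while i < len(text):
        # Candidate words matching the upcoming characters (or the single
        # character as a fallback), scored by embedding match with context.
        cands = [w for w in vocab if text.startswith(w, i)] or [text[i]]
        c = context_emb(text, i)
        best = max(cands, key=lambda w: word_emb.get(w, np.zeros(d)) @ c)
        out.append(best)
        i += len(best)              # each position is consumed exactly once
    return out

print(greedy_segment("中国人民"))
```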
Venlafaxine extended release vs placebo and paroxetine in social anxiety disorder.
BACKGROUND Evidence indicates that venlafaxine hydrochloride extended release (ER) effectively ameliorates anxiety symptoms. OBJECTIVES To evaluate the efficacy, safety, and tolerability of flexible-dose venlafaxine ER compared with placebo in the short-term treatment of generalized social anxiety disorder and, secondarily, to compare paroxetine with venlafaxine ER and paroxetine with placebo. DESIGN Adult outpatients with DSM-IV generalized social anxiety disorder for 6 months or longer were randomly assigned to receive venlafaxine hydrochloride ER (75-225 mg/d), paroxetine (20-50 mg/d), or placebo for 12 weeks or less at 26 centers in the United States. The primary outcome measure was the total Liebowitz Social Anxiety Scale score. Secondary measures included response (Clinical Global Impression-Improvement score, 1 or 2) rates and Clinical Global Impression-Severity of Illness and Social Phobia Inventory scores. RESULTS Of 440 patients treated, 413 (93.9%) were included in the last-observation-carried-forward efficacy analysis; of the 429 patients in the safety population, 318 (74.1%) completed the study. Mean daily doses were 201.7 mg (SD, 38.1 mg) of venlafaxine hydrochloride ER and 46.0 mg (SD, 7.9 mg) of paroxetine. Venlafaxine ER treatment was significantly superior to placebo at weeks 1 through 12 on the Liebowitz Social Anxiety Scale and Social Phobia Inventory and at week 2 and weeks 6 through 12 for Clinical Global Impression-Severity of Illness and responder status, and was significantly superior to paroxetine treatment at weeks 1 and 2 for the Social Phobia Inventory (P < .05 for all). Paroxetine treatment was significantly superior to placebo at weeks 3 through 12 on the Liebowitz Social Anxiety Scale, the Clinical Global Impression-Severity of Illness scale, and the Social Phobia Inventory, and at weeks 4 through 12 for response (P < .05 for all). Week 12 response rates were significantly greater for the venlafaxine ER and paroxetine groups (58.6% and 62.5%, respectively) vs the placebo group (36.1%) (P < .001 for both). CONCLUSION Venlafaxine ER is effective in the short-term treatment of generalized social anxiety disorder, with efficacy and tolerability comparable to paroxetine.
Association of the HLA-DRB1 epitope LA(67, 74) with rheumatoid arthritis and citrullinated vimentin binding.
OBJECTIVE Although rheumatoid arthritis (RA) has long been associated with an HLA-DRB1 shared epitope, a systematic search for other epitopes has never been conducted. In addition, the relationship between these epitopes and the binding of citrullinated autoantigens has not been investigated. We developed a program that can analyze HLA data for all possible epitopes of up to 5 amino acids and used this program to assess the shared epitope hypothesis in RA. METHODS We analyzed high-resolution data from the International Histocompatibility Working Group, which included a group of 488 patients with RA and a group of 448 racially and ethnically balanced control subjects, for all combinations of up to 5 amino acids among polymorphic HLA-DRB1 positions 8-93. Statistical significance was determined by chi-square and Fisher's exact tests, with a false discovery rate correction. RESULTS Three residues (V(11), H(13), and L(67)) were found to have the highest degree of association with RA susceptibility (P < 10⁻¹¹), and D(70) was found to correlate best with RA resistance (P = 2 × 10⁻¹¹). Of >2 million epitopes examined, LA(67, 74) exhibited the highest correlation with RA susceptibility (P = 2 × 10⁻²⁰; odds ratio 4.07 [95% confidence interval 3.07-5.39]). HLA alleles containing the LA(67, 74) epitope exhibited significantly greater binding to citrullinated vimentin(65-77) than did alleles containing D(70). Only 1 allele (DRB1*16:02) contained both LA(67, 74) and D(70); it bound citrullinated vimentin weakly and was not associated with RA. CONCLUSION The findings of these studies confirm the importance of HLA-DRB1 amino acids in pocket 4 for the binding of citrullinated autoantigens and susceptibility to RA.
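The search procedure can be sketched as an enumeration of residue combinations with an association test per candidate epitope. The toy sketch below, using SciPy's Fisher exact test, invented "allele" profiles, and combinations of up to two positions rather than five, shows the shape of the computation; the real analysis also applies a false discovery rate correction.

```python
from itertools import combinations
from scipy.stats import fisher_exact

positions = [11, 13, 67, 70, 74]
# Invented residue profiles; the study used IHWG DRB1 genotypes.
cases = [{11: "V", 13: "H", 67: "L", 70: "Q", 74: "A"},
         {11: "V", 13: "H", 67: "L", 70: "Q", 74: "A"},
         {11: "S", 13: "S", 67: "L", 70: "D", 74: "A"}]
controls = [{11: "S", 13: "S", 67: "I", 70: "D", 74: "E"},
            {11: "S", 13: "S", 67: "L", 70: "D", 74: "E"},
            {11: "V", 13: "H", 67: "I", 70: "D", 74: "E"}]

def carriers(group, epitope):
    return sum(all(p[pos] == aa for pos, aa in epitope) for p in group)

results = []
for k in (1, 2):                 # the study went up to 5 positions
    for combo in combinations(positions, k):
        observed = {tuple(p[q] for q in combo) for p in cases + controls}
        for residues in observed:
            epi = tuple(zip(combo, residues))
            a, c = carriers(cases, epi), carriers(controls, epi)
            table = [[a, len(cases) - a], [c, len(controls) - c]]
            _, p = fisher_exact(table)
            results.append((p, epi))

print(sorted(results)[:3])       # most strongly associated epitopes
```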
A Modified Particle Swarm Optimizer Algorithm
This paper presents a modified particle swarm optimizer algorithm (MPSO). The aggregation degree of the particle swarm is introduced, and the particles' diversity is improved by periodically monitoring the aggregation degree of the swarm. In the later stages of the algorithm's run, a Gaussian mutation is applied to the best particle's position, which enhances the particles' capacity to jump out of local minima. Several typical benchmark functions with different dimensions have been used for testing. The simulation results show that the proposed method effectively improves the convergence precision and speed of the PSO algorithm.
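A compact sketch of the modification: standard PSO plus an aggregation-degree monitor that triggers a Gaussian mutation of the global best when the swarm clusters too tightly. The constants, the sphere test function, and the conservative accept-if-better mutation rule below are illustrative choices, not the paper's exact scheme.

```python
import math
import random

random.seed(0)

def sphere(x):                               # benchmark objective
    return sum(v * v for v in x)

def mpso(dim=10, n=30, iters=500, w=0.7, c1=1.5, c2=1.5):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                    # personal bests
    g = min(P, key=sphere)[:]                # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if sphere(X[i]) < sphere(P[i]):
                P[i] = X[i][:]
                if sphere(P[i]) < sphere(g):
                    g = P[i][:]
        # Aggregation degree: mean distance of the swarm to the global best.
        agg = sum(math.dist(x, g) for x in X) / n
        if agg < 1e-3:                       # swarm has collapsed
            cand = [v + random.gauss(0, 0.1) for v in g]
            if sphere(cand) < sphere(g):     # conservative variant: keep the
                g = cand                     # Gaussian mutation only if better
    return sphere(g)

print(mpso())
```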
Design of DC/DC Boost converter with FNN solar cell Maximum Power Point Tracking controller
This paper demonstrates a Maximum Power Point Tracking (MPPT) controller that uses a DC/DC Boost converter with a Fuzzy Neural Network (FNN) system. It simplifies the topology of the DC/DC boost converter model to state equations, which are easy to simulate with Matlab. Additionally, the FNN system integrates fuzzy logic with a Neural Network (NN), whose advantages include uncertainty information processing and neural network learning. After assigning a suitable structure, we adjust the membership functions and the algorithm weightings during the parameter learning process to track the maximum power point effectively. The simulation results verify that the system is efficient and rapid in tracking the MPP and in converting the power from solar cells into the battery bank.
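The simplified state-equation model amounts to two coupled ODEs for the inductor current and the capacitor voltage. The Euler-integration sketch below, with illustrative component values and the FNN controller replaced by a fixed duty cycle, shows the averaged model settling at its ideal steady state of Vin/(1 − D).

```python
L, C, R = 1e-3, 470e-6, 10.0   # inductance (H), capacitance (F), load (ohm)
Vin, D = 12.0, 0.5             # input voltage (V), duty cycle
iL, vC, dt = 0.0, 0.0, 1e-6    # state variables and Euler step (s)

for _ in range(200_000):       # 0.2 s of simulated time
    diL = (Vin - (1 - D) * vC) / L        # averaged inductor equation
    dvC = ((1 - D) * iL - vC / R) / C     # averaged capacitor equation
    iL += diL * dt
    vC += dvC * dt

print(round(vC, 2))            # ideal steady state: Vin / (1 - D) = 24.0 V
```

In the actual system, the FNN controller would adjust D on the fly to hold the solar array at its maximum power point rather than keeping it fixed.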
Dynamic Impact of Online Word-of-Mouth and Advertising on Supply Chain Performance
Cooperative (co-op) advertising investments benefit brand goodwill and further improve supply chain performance. Meanwhile, online word-of-mouth (OWOM) can also play an important role in supply chain performance. On the basis of co-op advertising, this paper considers a single supply chain structure led by a manufacturer and examines a fundamental issue concerning the impact of OWOM on supply chain performance. Firstly, by the method of differential games, this paper analyzes the dynamic impact of OWOM and advertising on supply chain performance (i.e., brand goodwill, sales, and profits) under three different supply chain decisions (i.e., advertising only, and manufacturers with and without sharing the cost of OWOM with retailers). We compare and analyze the optimal strategies of advertising and OWOM under the above supply chain decisions. Secondly, a system dynamics model is established to reflect the dynamic impact of OWOM and advertising on supply chain performance. Finally, the three supply chain decisions under two scenarios, strong brand and weak brand, are analyzed through system dynamics simulation. The results show that investment in OWOM can enhance brand goodwill and improve earnings. OWOM reputation and supply chain performance improve further if manufacturers share the cost of OWOM with retailers. Then, in order to deter retailers from word-of-mouth fraud and establish a fair competition mechanism, third parties (i.e., regulators or e-commerce platforms) should take appropriate punitive measures against retailers. Furthermore, the effect of OWOM on supply chain performance under a strong brand differs from that under a weak brand. Last but not least, if OWOM is improved, the performance gain is more pronounced for the weak brand than for the strong brand in the supply chain.
Variational Dropout and the Local Reparameterization Trick
We investigate a local reparameterization technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the minibatch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.
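The core of the trick fits in a few lines: for a factorized Gaussian posterior over the weights of a linear layer, the pre-activations are themselves Gaussian, so one can sample them directly, once per datapoint. The numpy sketch below uses placeholder shapes and posterior parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 32, 8, 4
X = rng.normal(size=(n, d_in))                 # a minibatch of inputs

mu = rng.normal(size=(d_in, d_out)) * 0.1      # posterior weight means
log_var = np.full((d_in, d_out), -4.0)         # posterior weight log-variances

# Pre-activations are Gaussian with mean X @ mu and variance X^2 @ exp(log_var),
# so sample them directly: the noise is then independent across datapoints.
act_mu = X @ mu
act_var = (X ** 2) @ np.exp(log_var)
activations = act_mu + np.sqrt(act_var) * rng.normal(size=act_mu.shape)

print(activations.shape)                       # (32, 4)
```

Sampling a single weight matrix for the whole minibatch would instead correlate the noise across all 32 datapoints, which is exactly the variance the local reparameterization removes.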
Fever phobia in Korean caregivers and its clinical implications.
Fever is the most common complaint among children brought into the emergency department (ED). 'Fever phobia' is a descriptive term for an unrealistic concern about the consequences of fever. 'Fever phobia' is prevalent among parents and even healthcare providers, worldwide. The aim of this study was to determine the implications of fever-phobic ideas in Korean caregivers. A prospective, multi-center survey was conducted on Korean caregivers who visited the EDs with febrile children. In total, 746 caregivers were enrolled. The mean age of the subjects was 34.7 yr (SD ± 5.0). Three hundred sixty respondents (48.3%) believed that the body temperature of febrile children can reach higher than 42.0°C. Unrealistic concerns about the improbable complications of fever, such as brain damage, unconsciousness, and loss of hearing/vision, were held by 295 (39.5%), 66 (8.8%), and 58 (7.8%) caregivers, respectively. Four hundred ninety-four (66.2%) guardians woke children to give antipyretics. These findings suggest that fever phobia is a substantial burden for Korean caregivers.
Enhanced User Security and Privacy Protection in 4G LTE Network
Although the Evolved Packet System Authentication and Key Agreement (EPS-AKA) provides security and privacy enhancements in the 3rd Generation Partnership Project (3GPP), the International Mobile Subscriber Identity (IMSI) is sent in clear text in order to obtain service. Various efforts to provide security mechanisms to protect this unique private identity have not resulted in methods implemented to protect against the disclosure of the IMSI. The exposure of the IMSI puts user privacy at risk, and knowledge of it can lead to several passive and active attacks targeted at specific IMSIs and their respective users. Further, the Temporary Mobile Subscriber Identity (TMSI) generated by the Authentication Center (AuC) has been found to be prone to rainbow and brute-force attacks, so an attacker who gets hold of the TMSI may be able to trace it to the corresponding IMSI of a User Equipment (UE). This paper proposes a change to the EPS-AKA authentication process in the 4G Long Term Evolution (LTE) Network by including the use of Public Key Infrastructure (PKI). With this change, the IMSI would never be released in the clear in an untrusted network.
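The essence of the proposal (encrypting the IMSI under a network public key obtained through the PKI so it never travels in the clear) can be illustrated with textbook RSA. The toy below is purely pedagogical: the key is insecure, and a real system would use a padded scheme such as RSA-OAEP or ECIES over a standardized profile.

```python
# Toy key; a real deployment would use 2048+ bit keys with proper padding.
p, q, e = 61, 53, 17
n_mod = p * q
d_priv = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

def encrypt_imsi(imsi: int) -> int:      # done by the UE before attach
    return pow(imsi, e, n_mod)

def decrypt_imsi(c: int) -> int:         # done by the home network / AuC
    return pow(c, d_priv, n_mod)

imsi = 1234                              # stand-in; real IMSIs have 15 digits
c = encrypt_imsi(imsi)
assert decrypt_imsi(c) == imsi
print(c)                                 # what an eavesdropper would see
```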
Measuring classifier performance: a coherent alternative to the area under the ROC curve
The area under the ROC curve (AUC) is a very widely used measure of performance for classification and diagnostic rules. It has the appealing property of being objective, requiring no subjective input from the user. On the other hand, the AUC has disadvantages, some of which are well known. For example, the AUC can give potentially misleading results if ROC curves cross. However, the AUC also has a much more serious deficiency, and one which appears not to have been previously recognised. This is that it is fundamentally incoherent in terms of misclassification costs: the AUC uses different misclassification cost distributions for different classifiers. This means that using the AUC is equivalent to using different metrics to evaluate different classification rules. It is equivalent to saying that, using one classifier, misclassifying a class 1 point is p times as serious as misclassifying a class 0 point, but, using another classifier, misclassifying a class 1 point is P times as serious, where p≠P. This is nonsensical because the relative severities of different kinds of misclassifications of individual points is a property of the problem, not the classifiers which happen to have been chosen. This property is explored in detail, and a simple valid alternative to the AUC is proposed.
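For concreteness, the quantity under criticism can be computed directly as the probability that a random positive outranks a random negative. The sketch below uses invented scores and labels; note that the single number is obtained without ever fixing a misclassification-cost distribution, which is exactly where the incoherence described above enters when two classifiers are compared.

```python
def auc(scores, labels):
    """Probability that a random positive outranks a random negative
    (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]   # invented classifier outputs
labels = [1, 1, 0, 1, 0, 0]
print(auc(scores, labels))                 # 8/9 = 0.888...
```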
Augmented Reinforcement Learning for Interaction with Non-expert Humans in Agent Domains
In application domains characterized by dynamic changes and non-deterministic action outcomes, it is frequently difficult for agents or robots to operate without any human supervision. Although human feedback can help an agent learn a rich representation of the task and domain, humans may not have the expertise or time to provide elaborate and accurate feedback in complex domains. Widespread deployment of intelligent agents hence requires that the agents operate autonomously using sensory inputs and limited high-level feedback from non-expert human participants. Towards this objective, this paper describes an augmented reinforcement learning framework that combines bootstrap learning and reinforcement learning principles. In the absence of human feedback, the agent learns by interacting with the environment. When high-level human feedback is available, the agent robustly merges it with environmental feedback by incrementally revising the relative contributions of the feedback mechanisms to the action choice policy. The framework is evaluated in two simulated domains: Tetris and Keepaway soccer.
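The merge step can be caricatured as a blend of two action-value tables whose mixing weight is revised according to how consistently the human signal agrees with the environmental one. The bandit-style sketch below uses an invented weighting rule and constants, not the paper's exact framework.

```python
import random

random.seed(0)
ACTIONS = ["left", "right", "drop"]
Q_env = {a: 0.0 for a in ACTIONS}    # learned from environmental reward
Q_hum = {a: 0.0 for a in ACTIONS}    # learned from human feedback
w_hum, alpha = 0.5, 0.1              # feedback weight, learning rate

def choose(eps=0.1):
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: (1 - w_hum) * Q_env[a] + w_hum * Q_hum[a])

def update(action, env_reward, human_reward=None):
    global w_hum
    Q_env[action] += alpha * (env_reward - Q_env[action])
    if human_reward is not None:
        Q_hum[action] += alpha * (human_reward - Q_hum[action])
        # Revise the relative contribution of the human signal: trust it
        # more when it agrees with the environment, less when it conflicts.
        agree = (human_reward > 0) == (env_reward > 0)
        w_hum = min(0.9, w_hum + 0.05) if agree else max(0.1, w_hum - 0.05)

for _ in range(200):                 # toy bandit where "right" is best
    a = choose()
    r = 1.0 if a == "right" else -0.1
    update(a, r, human_reward=1.0 if a == "right" else -1.0)

print(choose(eps=0.0))               # -> "right"
```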
DBpedia - A crystallization point for the Web of Data
The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity. Over the last year, an increasing number of data publishers have begun to set data-level links to DBpedia resources, making DBpedia a central interlinking hub for the emerging Web of data. Currently, the Web of interlinked data sources around DBpedia provides approximately 4.7 billion pieces of information and covers domains such as geographic information, people, companies, films, music, genes, drugs, books, and scientific publications. This article describes the extraction of the DBpedia knowledge base, the current status of interlinking DBpedia with other data sources on the Web, and gives an overview of applications that facilitate the Web of Data around DBpedia.
HIV lipodystrophy: prevalence, severity and correlates of risk in Australia.
OBJECTIVE To establish the prevalence, severity and factors associated with the HIV lipodystrophy syndrome. METHODS Cross-sectional study of lipodystrophy conducted in high HIV caseload primary care sites and HIV outpatient clinics. A subset of patients was examined using dual energy X-ray absorptiometry (DEXA) and single cut abdominal computerized tomography (CT) at the L4 vertebral level to quantify regional and total body fat. Factors associated with lipodystrophy, lipoatrophy and lipohypertrophy were assessed using multiple logistic regression based on assignment of cases and non-cases. RESULTS One thousand, three hundred and forty-eight patients (95% male) were surveyed, 20% had AIDS, the mean CD4 lymphocyte count was 486 cells/microL, and 55% had <500 HIV-1 RNA copies/mL. Most participants (87%) had previously received or were currently receiving combination antiretroviral therapy, 73% with at least one protease inhibitor (PI) and 14% a non-PI-containing regimen. Lipodystrophy prevalence was 53% and of these, 55% reported both peripheral lipoatrophy and central lipohypertrophy, 31% experienced peripheral lipoatrophy only and 14% had central lipohypertrophy only. The prevalence of any body habitus change was 62% in PI-experienced patients, 33% in PI-naive patients and 21% in antiretroviral-naive patients. Lipodystrophy severity was less in antiretroviral-naive patients and most severe in PI-experienced patients. Increasing severity of lipodystrophy was both positively and significantly correlated with elevated liver enzymes, decreased testosterone levels, decreased skin-fold thickness, lower levels of total and peripheral fat (DEXA) and higher levels of visceral fat (CT). Lipodystrophy was also significantly associated with increasing age, symptomatic HIV disease, effective viral suppression, and increasing duration of therapy with both nucleoside reverse transcriptase inhibitors and PIs. CONCLUSIONS The prevalence and severity of lipodystrophy reflects both length and type of treatment with antiretroviral therapy and is associated with decreased testosterone, increases in liver enzymes and greater suppression of HIV RNA. The reports of lipodystrophy in a small percentage of antiretroviral-naive patients suggest that factors other than antiretroviral therapy may be involved in the aetiology of this syndrome or that some conditions, such as wasting or age-associated obesity, may mimic lipoatrophy and lipohypertrophy, respectively.
Sleep Paralysis in Brazilian Folklore and Other Cultures: A Brief Review
Sleep paralysis (SP) is a dissociative state that occurs mainly during awakening. SP is characterized by altered motor, perceptual, emotional and cognitive functions, such as inability to perform voluntary movements, visual hallucinations, feelings of chest pressure, delusions about a frightening presence and, in some cases, fear of impending death. Most people experience SP rarely, typically when sleeping in the supine position; however, SP is considered a disease (parasomnia) when recurrent and/or associated with an emotional burden. Interestingly, throughout human history, different peoples have interpreted SP under a supernatural view. For example, Canadian Eskimos attribute SP to spells of shamans, who hinder the ability to move and provoke hallucinations of a shapeless presence. In the Japanese tradition, SP is due to a vengeful spirit who suffocates his enemies while they sleep. In Nigerian culture, a female demon attacks during dreaming and provokes paralysis. A modern manifestation of SP is the report of "alien abductions", experienced as an inability to move during awakening, associated with visual hallucinations of aliens. In all, SP is a significant example of how a specific biological phenomenon can be interpreted and shaped by different cultural contexts. In order to further explore the ethnopsychology of SP, in this review we present the "Pisadeira", a character of Brazilian folklore originating in the country's Southeast but also found in other regions under variant names. The Pisadeira is described as a crone with long fingernails who lurks on roofs at night and tramples on the chest of those who sleep on a full stomach with the belly up. This legend is mentioned in many anthropological accounts; however, we found no comprehensive reference on the Pisadeira from the perspective of sleep science. Here, we aim to fill this gap. We first review the neuropsychological aspects of SP, and then present the folk tale of the Pisadeira. Finally, we summarize the many historical and artistic manifestations of SP in different cultures, emphasizing the similarities and differences with the Pisadeira.
The Mario AI Championship 2009-2012
We give a brief overview of the Mario AI Championship, a series of competitions based on an open-source clone of the seminal platform game Super Mario Bros. The competition has four tracks. The Gameplay and Learning tracks resemble traditional reinforcement learning competitions, the Level Generation track focuses on the generation of entertaining game levels, and the Turing Test track focuses on human-like game-playing behaviour. We also outline some lessons learned from the competition and discuss its future. The paper is written by the four organisers of the competition.
Mitophagy: a balance regulator of NLRP3 inflammasome activation
The NLRP3 inflammasome is activated by a variety of external or host-derived stimuli, and its activation initiates an inflammatory response through caspase-1 activation, resulting in maturation and secretion of the inflammatory cytokine IL-1β. NLRP3 inflammasome activation is a kind of innate immune response, most likely mediated by myeloid cells, that acts as a host defense mechanism. However, if this activation is not properly regulated, excessive inflammation induced by an overactivated NLRP3 inflammasome can be detrimental to the host, causing tissue damage and organ dysfunction and eventually causing several diseases. Previous studies have suggested that mitochondrial damage may be a cause of NLRP3 inflammasome activation and that autophagy, a conserved self-degradation process, negatively regulates NLRP3 inflammasome activation. Recently, mitochondria-selective autophagy, termed mitophagy, has emerged as a central player in maintaining mitochondrial homeostasis through the elimination of damaged mitochondria, leading to the prevention of hyperinflammation triggered by NLRP3 inflammasome activation. In this review, we first focus on the molecular mechanisms of NLRP3 inflammasome activation and NLRP3 inflammasome-related diseases. We then discuss autophagy, especially mitophagy, as a negative regulator of NLRP3 inflammasome activation by examining recent advances in research.
Randomised study of long term outcome after epidural versus non-epidural analgesia during labour.
OBJECTIVE To determine whether epidural analgesia during labour is associated with long term backache. DESIGN Follow up after randomised controlled trial. Analysis by intention to treat. SETTING Department of obstetrics and gynaecology at one NHS trust. PARTICIPANTS 369 women: 184 randomised to epidural group (treatment as allocated received by 123) and 185 randomised to non-epidural group (treatment as allocated received by 133). In the follow up study 151 women were from the epidural group and 155 from the non-epidural group. MAIN OUTCOME MEASURES Self reported low back pain, disability, and limitation of movement assessed through one to one interviews with physiotherapist, questionnaire on back pain and disability, physical measurements of spinal mobility. RESULTS There were no significant differences between groups in demographic details or other key characteristics. The mean time interval from delivery to interview was 26 months. There were no significant differences in the onset or duration of low back pain, with nearly a third of women in each group reporting pain in the week before interview. There were no differences in self reported measures of disability in activities of daily living and no significant differences in measurements of spinal mobility. CONCLUSIONS After childbirth there are no differences in the incidence of long term low back pain, disability, or movement restriction between women who receive epidural pain relief and women who receive other forms of pain relief.
OpenDPI: A Toolkit for Developing Document-Centered Environments
Documents are ubiquitous in modern desktop environments, yet these environments are based on the notion of application rather than document. As a result, editing a document often requires juggling several applications to edit its different parts. This paper presents OpenDPI, an experimental user-interface toolkit designed to create document-centered environments, thereby getting rid of the concept of application. OpenDPI relies on the DPI (Document, Presentation, Instrument) model: documents are visualized through one or more presentations and manipulated with interaction instruments. The implementation is based on a component model that cleanly separates documents from their presentations and from the instruments that edit them. OpenDPI supports advanced visualization and interaction techniques such as magic lenses and bimanual interaction. Document sharing is also supported, with single-display groupware as well as remote shared editing. The paper describes the component model and illustrates the use of the toolkit through concrete examples, including multiple views and concurrent interaction.
COLOMBO: Investigating the Potential of V2X for Traffic Management Purposes assuming low penetration Rates
After the roll-out of Vehicular Communication (V2X) technology in 2015, the number of equipped vehicles is assumed to increase slowly. While many Day One V2X applications are related to traffic safety and require a high penetration rate and communication reliability, traffic management applications could benefit from even a few equipped vehicles. With local traffic surveillance based on a low penetration rate of V2X technology, traffic light control could dynamically adapt priorities depending on traffic flows and volumes. To mitigate the low penetration rate of V2X technology, already deployed solutions for wireless ad-hoc communication, such as WiFi-Direct, available on most smartphones (often on board regular vehicles), should be investigated and exploited as a complementary source of information, with full awareness of their strong reliability and performance limitations. The COLOMBO project, which is co-funded by the European Commission and presented in this report, focuses on using information either from a small subset of V2X-equipped vehicles only, or complementary to other wireless ad-hoc technologies, and tries to exploit this information for traffic surveillance and an adaptive, optimized control of traffic lights.
Flexible traffic management in broadband access networks using Software Defined Networking
Over the years, the demand for high-bandwidth services, such as live and on-demand video streaming, has steadily increased. The adequate provisioning of such services is challenging and requires complex network management mechanisms to be implemented by Internet service providers (ISPs). In current broadband network architectures, the traffic of subscribers is tunneled through a single aggregation point, independent of the different service types it belongs to. While having a single aggregation point eases the management of subscribers for the ISP, it implies huge bandwidth requirements for the aggregation point and potentially high end-to-end latency for subscribers. An alternative would be distributed subscriber management, which adds more complexity to the management itself. In this paper, a new traffic management architecture is proposed that uses the concept of Software Defined Networking (SDN) to extend the existing Ethernet-based broadband network architecture, enabling more efficient traffic management for an ISP. By using SDN-enabled home gateways, the ISP can configure traffic flows more dynamically, optimizing throughput in the network, especially for bandwidth-intensive services. Furthermore, a proof-of-concept implementation of the approach is presented to show its general feasibility and to study configuration tradeoffs. Analytic considerations and testbed measurements show that the approach scales well with an increasing number of subscriber sessions.
Optimizing Seed Selection for Fuzzing
Randomly mutating well-formed program inputs, or simply fuzzing, is a highly effective and widely used strategy to find bugs in software. Beyond showing that fuzzers find bugs, there has been little systematic effort in understanding the science of how to fuzz properly. In this paper, we focus on how to mathematically formulate and reason about one critical aspect of fuzzing: how best to pick seed files to maximize the total number of bugs found during a fuzz campaign. We design and evaluate six different algorithms using over 650 CPU days on Amazon Elastic Compute Cloud (EC2) to provide ground truth data. Overall, we find 240 bugs in 8 applications and show that the choice of algorithm can greatly increase the number of bugs found. We also show that current seed selection strategies as found in Peach may fare no better than picking seeds at random. We make our data set and code publicly available.
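The abstract does not name the six algorithms, but coverage-driven "minset"-style selection is a natural representative of the design space; the following is a hedged sketch of greedy set-cover seed selection over per-seed code-coverage sets (all names and data are invented).

```python
def greedy_minset(seed_coverage):
    """Greedy set-cover sketch: repeatedly pick the seed that adds the most
    new coverage. `seed_coverage` maps seed name -> set of covered blocks."""
    remaining = set().union(*seed_coverage.values())
    chosen, pool = [], dict(seed_coverage)
    while remaining and pool:
        best = max(pool, key=lambda s: len(pool[s] & remaining))
        gain = pool.pop(best) & remaining
        if not gain:
            break
        chosen.append(best)
        remaining -= gain
    return chosen

cov = {"a.pdf": {1, 2, 3}, "b.pdf": {3, 4}, "c.pdf": {4, 5, 6, 7}}
print(greedy_minset(cov))  # ['c.pdf', 'a.pdf']: 'b.pdf' adds nothing new
```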
101companies: A Community Project on Software Technologies and Software Languages
101companies is a community project in computer science (or software science) with the objective of developing a free, structured, wiki-accessible knowledge resource, including an open-source repository, for different stakeholders with interests in software technologies, software languages, and technological spaces; notably: teachers and learners in software engineering or software languages as well as software developers, software technologists, and ontologists. The present paper introduces the 101companies Project; it is effectively a call for contributions to the project and for applications of it in research and education.
The impact of blended teaching on knowledge, satisfaction, and self-directed learning in nursing undergraduates: a randomized, controlled trial.
AIM This study aimed to assess the effectiveness of a blended-teaching intervention using Internet-based tutorials coupled with traditional lectures in an introductory research course for nursing undergraduates. Effects of the intervention were compared with conventional, face-to-face classroom teaching on three outcomes: knowledge, satisfaction, and self-directed learning readiness. METHOD A two-group, randomized, controlled design was used, involving 112 participants. Descriptive statistics and analysis of covariance (ANCOVA) were performed. RESULTS The teaching method was found to have no direct impact on knowledge acquisition, satisfaction, or self-directed learning readiness. However, motivation and teaching method had an interaction effect on knowledge acquisition by students. Among less motivated students, those in the intervention group performed better than those who received traditional training. CONCLUSION These findings suggest that this blended-teaching method could better suit some students, depending on their degree of motivation and level of self-directed learning readiness.
Byzantine Attack and Defense in Cognitive Radio Networks: A Survey
The Byzantine attack in cooperative spectrum sensing (CSS), also known as the spectrum sensing data falsification (SSDF) attack in the literature, is one of the key adversaries to the success of cognitive radio networks (CRNs). Over the past couple of years, research on Byzantine attack and defense strategies has gained increasing worldwide attention. In this paper, we provide a comprehensive survey and tutorial on the recent advances in the Byzantine attack and defense for CSS in CRNs. Specifically, we first briefly present the preliminaries of CSS for general readers, including signal detection techniques, hypothesis testing, and data fusion. Second, we propose a taxonomy of the existing Byzantine attack behaviors and elaborate on the corresponding attack parameters, which determine where, who, how, and when to launch attacks. Then, from the perspectives of homogeneous or heterogeneous scenarios, we classify the existing defense algorithms and provide an in-depth tutorial on the state-of-the-art Byzantine defense schemes, commonly known as robust or secure CSS in the literature. Furthermore, we analyze the spear-and-shield relation between Byzantine attack and defense from an interactive game-theoretical perspective. Moreover, we highlight the unsolved research challenges and depict the future research directions.
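As a toy illustration of why falsified reports matter for data fusion (an invented example, not taken from the survey), the sketch below simulates hard-decision majority fusion in which Byzantine nodes always flip their local sensing decision:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_css(n_honest, n_byzantine, p_d=0.9, p_fa=0.1, trials=10_000):
    """Hard-decision majority fusion with 'always-flip' SSDF attackers."""
    channel_busy = rng.random(trials) < 0.5
    correct = 0
    for busy in channel_busy:
        p = p_d if busy else p_fa                  # local detection statistics
        honest = rng.random(n_honest) < p          # truthful local decisions
        byz = ~(rng.random(n_byzantine) < p)       # flipped (falsified) reports
        votes = np.concatenate([honest, byz])
        decision = votes.sum() > len(votes) / 2    # majority-vote fusion
        correct += decision == busy
    return correct / trials

print(simulate_css(10, 0))    # near-perfect fusion without attackers
print(simulate_css(10, 10))   # noticeably degraded once attackers approach half
```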
Comprehensible reasoning and automated reporting of medical examinations based on deep learning analysis
In the future, medical doctors will increasingly be assisted by deep learning neural networks for disease detection during examinations of patients. In order to make qualified decisions, the black box of deep learning must be opened to increase the understanding of the reasoning behind the decisions of the machine learning system. Furthermore, preparing reports after examinations is a significant part of a doctor's workday, but if we already have a system dissecting the neural network for understanding, the same tool can be used for automatic report generation. In this demo, we describe a system that analyses medical videos from the gastrointestinal tract. Our system dissects the TensorFlow-based neural network to provide insights into the analysis and uses the resulting classification, and the rationale behind it, to automatically generate an examination report for the patient's medical journal.
A novel approach to OFDM radar processing
In this paper, a novel approach for the calculation of the radar range profile in OFDM radar systems is presented. The proposed algorithm operates directly on the modulation symbols and overcomes the typical drawbacks of correlation-based baseband signal processing. An example system configuration as well as simulation results from a dedicated MATLAB model are presented and discussed.
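The abstract does not state the algorithm, but symbol-domain OFDM radar processing is commonly described as elementwise division of the received modulation symbols by the transmitted ones, followed by an IDFT across subcarriers to obtain the range profile; here is a minimal NumPy sketch under that assumption (targets, noise level, and sizes invented):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256                                                  # subcarriers
tx = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)   # QPSK modulation symbols

# Simulated channel: two point targets, i.e. two delays -> linear phase ramps.
k = np.arange(N)
h = 0.9 * np.exp(-2j * np.pi * k * 20 / N) + 0.4 * np.exp(-2j * np.pi * k * 57 / N)
rx = tx * h + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Symbol-domain processing: elementwise division removes the payload data, so
# no correlation with the transmit waveform is needed; an IDFT across the
# subcarriers then yields the range profile directly.
profile = np.abs(np.fft.ifft(rx / tx))
print(sorted(profile.argsort()[-2:]))                    # peaks at range bins [20, 57]
```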
MIMO for ATSC 3.0
This paper provides an overview of the optional multiple-input multiple-output (MIMO) antenna scheme adopted in ATSC 3.0 to improve robustness or increase capacity via additional spatial diversity and multiplexing by sending two data streams in a single radio frequency channel. Although it is not directly specified, cross-polarized 2×2 MIMO (i.e., horizontal and vertical polarization) is expected in practice, to retain multiplexing capabilities in line-of-sight conditions. MIMO allows overcoming the channel capacity limit of single-antenna wireless communications in a given channel bandwidth without any increase in the total transmission power. In the U.S., however, MIMO can provide an even larger comparative gain, because broadcasters would be allowed to increase the total transmit power by transmitting the nominal transmit power in each polarization. Hence, in addition to the classical MIMO gains (array, diversity, and spatial multiplexing), MIMO could exploit an additional 3 dB power gain. The MIMO scheme adopted in ATSC 3.0 re-uses the single-input single-output antenna baseline constellations, and hence it introduces the use of MIMO with non-uniform constellations.
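For reference, the capacity relation underlying these gains, in its standard textbook form rather than anything taken from the ATSC 3.0 specification, is

$$C = \log_2 \det\!\left(\mathbf{I}_2 + \frac{\rho}{2}\,\mathbf{H}\mathbf{H}^{H}\right) \ \text{bits/s/Hz},$$

where $\mathbf{H}$ is the 2×2 channel matrix and $\rho$ the total SNR split equally across the two polarizations; the U.S. regulatory case described above effectively doubles $\rho$, which is the additional 3 dB power gain.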
Analysis of metallogenic conditions of ISL-amenable sandstone-type uranium deposits in the Kelulun down-faulted basin, Inner Mongolia
The Kelulun down-faulted basin is a secondary tectonic unit of the Hailaer basin. It is located at the southwestern margin of the Hailaer basin and is a rectangular-shaped basin stretching in the NE direction with an area of about 2000 km². The northwestern margin of the basin is a tectonic slope with a complete and independent recharge-runoff-discharge system of groundwater. Basement rocks are characterized by high original uranium content; moreover, most uranium in the basement rocks may be reactivated and released during epigenetic reworking. The paleoclimate in the Kelulun basin evolved from a mild-humid to a hot-arid period, which is favorable for the metallogenesis of ISL-amenable sandstone-hosted uranium deposits. The upper member of the Damoguaihe Formation, Lower Cretaceous, is regarded as the prospecting target in the Kelulun basin because favorable sand bodies occur in this member. Two uranium mineralization types have been recognized, namely the interlayer oxidation zone type and the phreatic oxidation type. In short, favorable uranium metallogenic conditions exist in the Kelulun down-faulted basin, and it holds further prospecting potential.
Understanding the Odd Science of Aging
Evolutionary considerations suggest aging is caused not by active gene programming but by evolved limitations in somatic maintenance, resulting in a build-up of damage. Ecological factors such as hazard rates and food availability influence the trade-offs between investing in growth, reproduction, and somatic survival, explaining why species evolved different life spans and why aging rate can sometimes be altered, for example, by dietary restriction. To understand the cell and molecular basis of aging is to unravel the multiplicity of mechanisms causing damage to accumulate and the complex array of systems working to keep damage at bay.
A Comparison Between Single-Phase Quasi-$Z$-Source and Quasi-Switched Boost Inverters
The properties of a single-phase quasi-Z-source inverter (qZSI) and a single-phase quasi-switched boost inverter (qSBI), both of which are single-stage buck-boost inverters, are investigated and compared. For the same operating conditions, qSBI has the following advantages over qZSI: 1) three capacitors are saved; 2) the current rating on both its switches and diodes is lower; 3) its boost factor is higher with an equivalent parasitic effect; and 4) its efficiency is higher. However, qSBI has one more active switch and one more diode than Z-source/quasi-Z-source inverters. In addition, the capacitor voltage stress of qSBI is higher than that of qZSI. The dc and ac component circuit analysis, impedance design with low-frequency and high-frequency ripples, component stresses, and power loss calculation are presented. A prototype based on a TMS320F28335 DSP is built in order to compare the operating principles of qSBI and qZSI.
Flora-2: A Rule-Based Knowledge Representation and Inference Infrastructure for the Semantic Web
Flora-2 is a rule-based object-oriented knowledge base system designed for a variety of automated tasks on the Semantic Web, ranging from meta-data management to information integration to intelligent agents. The Flora-2 system integrates F-logic, HiLog, and Transaction Logic into a coherent knowledge representation and inference language. The result is a flexible and natural framework that combines rule-based and object-oriented paradigms. This paper discusses the principles underlying the design of the Flora-2 system and describes its salient features, including meta-programming, reification, logical database updates, encapsulation, and support for dynamic modules.
FlashGraph: Processing Billion-Node Graphs on an Array of Commodity SSDs
Graph analysis performs many random reads and writes; thus, these workloads are typically run in memory. Traditionally, analyzing large graphs requires a cluster of machines so that the aggregate memory exceeds the graph size. We demonstrate that a multicore server can process graphs with billions of vertices and hundreds of billions of edges, utilizing commodity SSDs with minimal performance loss. We do so by implementing a graph-processing engine on top of a user-space SSD file system designed for high IOPS and extreme parallelism. Our semi-external memory graph engine, called FlashGraph, stores vertex state in memory and edge lists on SSDs. It hides latency by overlapping computation with I/O. To save I/O bandwidth, FlashGraph only accesses edge lists requested by applications from SSDs; to increase I/O throughput and reduce CPU overhead, it conservatively merges I/O requests. These designs maximize performance for applications with different I/O characteristics. FlashGraph exposes a general and flexible vertex-centric programming interface that can express a wide variety of graph algorithms and their optimizations. We demonstrate that FlashGraph in semi-external memory performs many algorithms with performance up to 80% of its in-memory implementation and significantly outperforms PowerGraph, a popular distributed in-memory graph engine.
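FlashGraph itself exposes a C++ interface, so the following is only a hypothetical Python sketch of the semi-external-memory pattern described above: vertex state stays in memory while edge lists are fetched on demand from slower storage (a dict stands in for the SSD file system).

```python
# Hypothetical sketch of semi-external-memory graph traversal.
edge_lists_on_ssd = {0: [1, 2], 1: [3], 2: [3], 3: []}

def fetch_edges(v):
    # In FlashGraph this would be an asynchronous SSD read of one edge list.
    return edge_lists_on_ssd[v]

def bfs(source):
    level = {source: 0}            # in-memory vertex state only
    frontier = [source]
    while frontier:
        next_frontier = []
        for v in frontier:         # only active vertices trigger edge-list I/O
            for u in fetch_edges(v):
                if u not in level:
                    level[u] = level[v] + 1
                    next_frontier.append(u)
        frontier = next_frontier
    return level

print(bfs(0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```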
Declarative Support for Sensor Data Cleaning
Pervasive applications rely on data captured from the physical world through sensor devices. Data provided by these devices, however, tend to be unreliable. The data must, therefore, be cleaned before an application can make use of them, leading to additional complexity for application development and deployment. Here we present Extensible Sensor stream Processing (ESP), a framework for building sensor data cleaning infrastructures for use in pervasive applications. ESP is designed as a pipeline using declarative cleaning mechanisms based on spatial and temporal characteristics of sensor data. We demonstrate ESP’s effectiveness and ease of use through three real-world scenarios.
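As a flavour of what a declarative temporal-cleaning stage does (names, window size, and data are illustrative, not ESP's actual interface), the sketch below smooths dropped sensor readings by aggregating over temporal granules:

```python
import pandas as pd

# Raw RFID-style readings with dropped epochs (t = 2 and 5..8 are missing).
readings = pd.DataFrame({"t": [0, 1, 3, 4, 9], "tag": ["A"] * 5})

window = 5                                   # seconds per temporal granule
readings["granule"] = readings["t"] // window

# A tag counts as present in a granule if it was read at least once within it.
smoothed = readings.groupby(["tag", "granule"]).size().reset_index(name="reads")
print(smoothed)  # tag A is present in granules 0 and 1 despite the dropped epochs
```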
Knowledge Engineering and Knowledge Management Methods, Models, and Tools
Currently, computers are changing from single isolated devices into entry points into a worldwide network of information exchange and business transactions. Support in data, information, and knowledge exchange is becoming the key issue in current computer technology. Ontologies will play a major role in supporting information exchange processes in various areas. A prerequisite for such a role is the development of a joint standard for specifying and exchanging ontologies. This paper is concerned precisely with this necessity. We present OIL, a proposal for such a standard. It is based on existing proposals such as OKBC, XOL, and RDF Schema, enriching them with features necessary for expressing ontologies. The paper sketches the main ideas of OIL.
Visualization Criticism - The Missing Link Between Information Visualization and Art
Classifications of visualization are often based on technical criteria, and leave out artistic ways of visualizing information. Understanding the differences between information visualization and other forms of visual communication provides important insights into the way the field works, though, and also shows the path to new approaches. We propose a classification of several types of information visualization based on aesthetic criteria. The notions of artistic and pragmatic visualization are introduced, and their properties discussed. Finally, the idea of visualization criticism is proposed, and its rules are laid out. Visualization criticism bridges the gap between design, art, and technical/pragmatic information visualization. It guides the view away from implementation details and single mouse clicks to the meaning of a visualization.
Autoencoders trained with relevant information: Blending Shannon and Wiener's perspectives
It has been almost seventy years since the publication of Claude Shannon's “A Mathematical Theory of Communication” [1] and Norbert Wiener's “Extrapolation, Interpolation and Smoothing of Stationary Time Series” [2]. The pioneering works of Shannon and Wiener laid the foundation of communication, data storage, control, and other information technologies. This paper briefly reviews Shannon's and Wiener's perspectives on the problem of message transmission over a noisy channel and experimentally evaluates the feasibility of integrating these two perspectives to train autoencoders close to the information limit. To this end, the principle of relevant information (PRI) is used and validated to optimally encode input imagery in the presence of noise.
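In broad strokes (a hedged paraphrase of the information-theoretic-learning literature, not necessarily the paper's exact functional), the PRI seeks a compressed representation $T$ of the data $X$ by solving

$$\min_{p(t)} \; H(T) + \beta\, D(T \,\Vert\, X),$$

where $H$ is an entropy term enforcing compression, $D$ a divergence keeping $T$ faithful to $X$, and $\beta$ the parameter trading compression against fidelity.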
SHAPE: a computer program package for quantitative evaluation of biological shapes based on elliptic Fourier descriptors.
Quantitative evaluation of the shapes of biological organs is often required in various research fields, such as agronomy, medicine, genetics, ecology, and taxonomy. Elliptic Fourier descriptors (EFDs), proposed by Kuhl and Giardina (1982), can delineate any type of shape with a closed two-dimensional contour and have been effectively applied to the evaluation of various biological shapes in animals (Bierbaum and Ferson 1986; Diaz et al. 1989; Ferson et al. 1985; Rohlf and Archie 1984) and plants (Furuta et al. 1995; Iwata et al. 1998; McLellan 1993; Ohsawa et al. 1998; White et al. 1988). Quantization of shapes is a prerequisite for evaluating the inheritance of morphological traits in quantitative genetics. There are many reports showing that measurements based on EFDs are helpful for such quantization of the shapes of plant and animal organs. For instance, Iwata et al. (2000) conducted a diallel analysis of the shape of Japanese radish (Raphanus sativus L.) roots, using the principal component scores of the EFDs as shape characteristics. Quantitative trait loci (QTL) analysis has also been conducted using the principal component scores of EFDs concerning the shape of the male genitalia of Drosophila species (Laurie et al. 1997; Liu et al. 1996). The shape evaluation method based on EFDs can be a powerful tool for analyzing biological shapes, but it is not easy for a researcher to use this method because it involves several complex procedures, such as image processing, contour recording, derivation of the descriptors, and multivariate analysis of the descriptors. In this article we present SHAPE, a package of programs for evaluating biological contour shapes based on EFDs. This package contains programs for image processing, contour recording, derivation of EFDs, principal component analysis of EFDs, and visualization of shape variations estimated by the principal components. With the aid of this package, a researcher can easily analyze shapes on a personal computer without special knowledge about the procedures related to the method. The principal component scores obtained by the procedures can be used directly as observed values of shape characteristics for the subsequent analyses. SHAPE is characterized by the following features: (1) the packaged programs are easily operated with the aid of a graphical user interface (GUI); (2) no special computer devices for image processing are required; (3) a large number of samples (say 1,000) can be treated; (4) the scores of principal components are stored in tab-delimited text files and can easily be exported for analysis by other software; and (5) the variations in shape accounted for by the principal components can be visualized and printed out.
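Since the package leans on Kuhl and Giardina's (1982) descriptors, a compact NumPy implementation of their coefficient formulas may help; the ellipse test case is invented, and real use would follow SHAPE's full pipeline (image processing, contour extraction, normalization).

```python
import numpy as np

def efd(contour, n_harmonics=10):
    """Elliptic Fourier descriptors of a closed contour (Kuhl & Giardina 1982).
    `contour` is an (N, 2) array of x, y points; returns an (n_harmonics, 4)
    array of coefficients (a_n, b_n, c_n, d_n)."""
    d = np.diff(np.vstack([contour, contour[:1]]), axis=0)  # close the contour
    dt = np.hypot(d[:, 0], d[:, 1])                         # chord lengths
    t = np.concatenate([[0.0], np.cumsum(dt)])
    T = t[-1]
    coeffs = np.zeros((n_harmonics, 4))
    for n in range(1, n_harmonics + 1):
        c = np.cos(2 * np.pi * n * t / T)
        s = np.sin(2 * np.pi * n * t / T)
        k = T / (2 * n**2 * np.pi**2)
        coeffs[n - 1] = [
            k * np.sum(d[:, 0] / dt * np.diff(c)),   # a_n
            k * np.sum(d[:, 0] / dt * np.diff(s)),   # b_n
            k * np.sum(d[:, 1] / dt * np.diff(c)),   # c_n
            k * np.sum(d[:, 1] / dt * np.diff(s)),   # d_n
        ]
    return coeffs

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ellipse = np.column_stack([2 * np.cos(theta), np.sin(theta)])
print(efd(ellipse, 2).round(3))   # energy concentrates in the first harmonic
```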
Peculiar Velocities of Nonlinear Structure: Voids in McVittie Spacetime
As a study of peculiar velocities of nonlinear structure, we analyze the model of a relativistic thin-shell void in the expanding universe. First, adopting McVittie (MV) spacetime as a background universe, we investigate the dynamics of an uncompensated void with negative MV mass. Although the motion itself is quite different from that of a compensated void, as shown by Haines & Harris, the present peculiar velocities are not affected by MV mass. Second, we discuss how precisely the formula in the linear perturbation theory applies to nonlinear relativistic voids, using the results of our first investigation as well as previous results for the homogeneous background (Sakai, Maeda, & Sato). Third, we reexamine the effect of the cosmic microwave background radiation. Contrary to the results of Pim & Lake, we find that the effect is negligible. We show that their results are due to inappropriate initial conditions. Our results in the three parts of our study suggest that the formula in the linear perturbation theory is approximately valid even for nonlinear voids.
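For orientation, the linear-perturbation-theory expression usually meant by "the formula" here, quoted in its standard textbook form for a spherical perturbation (sign conventions and the paper's exact notation may differ), is

$$v_{\mathrm{pec}} = -\tfrac{1}{3}\, H_0\, r\, f(\Omega_0)\, \bar{\delta}(<r), \qquad f(\Omega_0) \simeq \Omega_0^{0.6},$$

where $\bar{\delta}(<r)$ is the mean density contrast interior to radius $r$; for a void, $\bar{\delta} < 0$ gives an outward peculiar velocity at the shell.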
Large-diameter Delta ceramic-on-ceramic versus common-sized ceramic-on-polyethylene bearings in THA.
The higher failure rate of total hip arthroplasty (THA) in young, active patients remains a challenge for surgeons. Recently, larger-diameter femoral heads combined with an alumina matrix composite ceramic (BIOLOX Delta; CeramTec AG, Plochingen, Germany) articulation was developed to improve implant longevity and meet patients' activity demands while reducing the risk of component-related complications. The purpose of this study was to determine whether this new device may provide advantages for young, active patients. A prospective, randomized, controlled trial was conducted on 93 patients (113 THAs) with more than 3 years of follow-up. Patients were randomly divided into a study group (51 THAs) with a 36-mm Delta ceramic-on-ceramic (COC) articulation and a control group (62 THAs) with a common-sized alumina ceramic head on polyethylene liner (COP) articulation. Clinical and radiographic results were collected to compare the outcomes and complications, including implant-related failures, osteolysis, and noises. The large-diameter Delta COC articulation provided greater range of motion improvement (6.1° more), similar Harris Hip Scores, and similar complication rates compared with the alumina COP articulation. This study suggests that in the short term, the large-diameter Delta COC articulation results in better range of motion with no higher complication rates; however, mid-term (8-10 years) or longer follow-up is necessary to determine its superiority in young, active patients.