diff --git "a/deduped/dedup_0961.jsonl" "b/deduped/dedup_0961.jsonl" new file mode 100644--- /dev/null +++ "b/deduped/dedup_0961.jsonl" @@ -0,0 +1,41 @@ +{"text": "Quantification of in-vivo biomolecule mass transport and reaction rate parameters from experimental data obtained by Fluorescence Recovery after Photobleaching (FRAP) is becoming more important.The Osborne-Mor\u00e9 extended version of the Levenberg-Marquardt optimization algorithm was coupled with the experimental data obtained by the Fluorescence Recovery after Photobleaching (FRAP) protocol, and the numerical solution of a set of two partial differential equations governing macromolecule mass transport and reaction in living cells, to inversely estimate optimized values of the molecular diffusion coefficient and binding rate parameters of GFP-tagged glucocorticoid receptor. The results indicate that the FRAP protocol provides enough information to estimate one parameter uniquely using a nonlinear optimization technique. Coupling FRAP experimental data with the inverse modeling strategy, one can also uniquely estimate the individual values of the binding rate coefficients if the molecular diffusion coefficient is known. One can also simultaneously estimate the dissociation rate parameter and molecular diffusion coefficient given the pseudo-association rate parameter is known. However, the protocol provides insufficient information for unique simultaneous estimation of three parameters (diffusion coefficient and binding rate parameters) owing to the high intercorrelation between the molecular diffusion coefficient and pseudo-association rate parameter. Attempts to estimate macromolecule mass transport and binding rate parameters simultaneously from FRAP data result in misleading conclusions regarding concentrations of free macromolecule and bound complex inside the cell, average binding time per vacant site, average time for diffusion of macromolecules from one site to the next, and slow or rapid mobility of biomolecules in cells.To obtain unique values for molecular diffusion coefficient and binding rate parameters from FRAP data, we propose conducting two FRAP experiments on the same class of macromolecule and cell. One experiment should be used to measure the molecular diffusion coefficient independently of binding in an effective diffusion regime and the other should be conducted in a reaction dominant or reaction-diffusion regime to quantify binding rate parameters. The method described in this paper is likely to be widely used to estimate in-vivo biomolecule mass transport and binding rate parameters. Transport of biomolecules in small systems such as living cells is a function of diffusion, reactions, catalytic activities, and advection. Innovative experimental protocols and mathematical modeling of the dynamics of intracellular biomolecules are key tools for understanding biological processes and identifying their relative importance. One of the most widely used techniques for studying in vitro and in vivo diffusion and binding reactions, nuclear protein mobility, solute and biomolecule transport through cell membranes, lateral diffusion of lipids in cell membranes, and biomolecule diffusion within the cytoplasm and nucleus, is Fluorescence Recovery after Photobleaching (FRAP). The technique was developed in the 1970s and was initially used to study lateral diffusion of lipids through the cell membrane -14. A deThe number and complexity of quantitative analyses of the FRAP protocol have increased over the years. 
Early analyses characterized diffusion alone [...,16-18]. One study estimated the free molecular diffusion coefficient (in \u03bcm2 s-1) of GFP-GR and fitted two binding rate parameters by curve fitting. On the basis of these parameters they concluded that GFP-GR diffuses from one binding site to the next with an average time of 2.5 ms; the average binding time per site is 12.7 ms. They also concluded that 14% of the GFP-GR is free and 86% is bound. There have been other theoretical investigations of full diffusion-reaction models in FRAP experiments for GFP-GR using the experimental FRAP data. Four optimization scenarios were considered. In scenario A, the developed inverse modeling strategy was used to identify three unknown parameters. To test the uniqueness of the model parameters, the optimization algorithm was carried out using different initial guesses for the parameter vector. In scenario B, two of the three parameters in the one-site-mobile-immobile model were kept constant and the third was estimated. The goal was to determine whether or not the FRAP protocol produces enough information to estimate one parameter uniquely. The optimization algorithm was used to estimate a single parameter for both noise-free and noisy data. In scenario C, pairs of model parameters were estimated under the assumption that the value of the third parameter is known. In the first attempt, the optimized values of the individual binding rate coefficients were quantified given a known value for the free molecular diffusion coefficient of the GFP-GR. Again the optimization algorithm was used for both noise-free and noisy data. Given the value of the pseudo-association rate, the optimized values of the molecular diffusion coefficient and dissociation rate coefficient were then estimated. Assuming that the \"true\" value of the dissociation rate coefficient is known, we tried to estimate the optimized values of the free molecular diffusion coefficient and the pseudo-association rate parameter. Again, the goal was to determine which pairs of parameters, if any, can be estimated uniquely using FRAP data. Finally, in scenario D, we investigated the possibility of simultaneous estimation of three parameters of the one-site-mobile-immobile model using noise-free FRAP data. The correlation matrix of the optimized parameters is given by

C_ij = V_ij / (V_ii V_jj)^(1/2) \u00a0\u00a0\u00a0(24)

where V is the parameter covariance matrix and the diagonal elements of the matrix are the correlations of each parameter with itself (i.e. unity). Equation (24) identifies the degree of linear correlation between the optimized parameters. In other words, the correlation matrix quantifies the nonorthogonality between two parameters. A value of \u00b1 1 reflects perfect linear correlation between two parameters whereas 0 indicates no correlation at all. The matrix may be used to identify which parameter, if any, should be kept constant in the parameter optimization process because of high intercorrelation. A negative correlation is expected between Df and Kd as well as between Ka and Kd; a positive correlation is expected between Ka and Df. The correlation between the molecular diffusion coefficient and the pseudo-association rate parameter is particularly high. This high intercorrelation makes it impossible to obtain a unique solution for the inverse problem using experimental data from the FRAP protocol. The common practice in these situations is to fix one parameter and estimate the other by parameter optimization algorithms.
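To make Eq. (24) concrete, the correlation matrix can be computed from the sensitivity (Jacobian) matrix of the fitted model. The following minimal Python sketch assumes uncorrelated, unit-variance measurement errors, so that the parameter covariance is approximated by the inverse of JTJ (the function name is illustrative, not from the original study):

import numpy as np

def parameter_correlation(J):
    # Correlation matrix of the optimized parameters from the Jacobian J
    # (n_observations x n_parameters). Measurement errors are assumed
    # uncorrelated with unit variance, so cov(beta) ~ inv(J^T J).
    cov = np.linalg.inv(J.T @ J)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)   # Eq. (24): C_ij = V_ij / sqrt(V_ii * V_jj)

An off-diagonal entry close to \u00b11, as found here for Df and Ka, signals a pair of parameters that cannot be estimated simultaneously.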
In this scenario we tried to estimate the optimized values of the mass transport and binding rate coefficients for noise-free data. The optimization scenarios considered above show a possible way of obtaining unique values for the diffusion coefficient and binding rate parameters of biomolecules inside living cells. A possible procedure for obtaining unique values for the molecular diffusion coefficient and reaction rate parameters of macromolecules is to conduct two FRAP experiments in different regimes on the same class of cell and biomolecule. One experiment should be conducted in an effective diffusion regime to estimate the diffusion coefficient independent of binding. The other should be performed in reaction dominant or diffusion-reaction dominant regimes to identify the binding rate parameters. Conducting two FRAP experiments in two different regimes is, however, beyond the scope of the present study. It will be pursued in future research. To study the non-uniqueness problem from another angle, we performed a posedness analysis of the inverse problem. A problem is ill-posed when it either has no solution at all, or no unique solution, or the solution is not stable. Instability occurs when the estimated parameters are excessively sensitive to the input data. Any small errors in measurements will then lead to significant error in the estimated values of the parameters. To perform the stability analysis, normally distributed noise was added to each measurement. The resulting noisy data were then used as input for the parameter optimization algorithm. The results are given in the tables. Non-uniqueness occurs when multiple parameter vectors can produce almost the same values of the objective function, thus making it impossible to obtain a unique solution. This problem can arise either from high intercorrelation between parameters or from insensitivity of the objective function to a parameter. Whereas the only solution for the first case is to fix one of the parameters and estimate the other, performing multi-objective optimization or conducting different experiments in which different state variables are measured may lead to a unique solution in the second case. To investigate the non-uniqueness of the inverse problem further, two-dimensional parameter response surfaces of the objective function \u03a6 were constructed and analyzed in the Df-Ka, Df-Kd, and Ka-Kd planes. The response surfaces were calculated using a rectangular grid. The domain of each parameter was discretized into 100 discrete points, resulting in 10000 grid points for each response surface plot. In the Df-Ka slice, Kd is fixed at the known value (0.1108 s-1). The dark blue area on the slice has the same error level (objective function), indicating that any combination of Df and Ka within this valley fits the data almost equally well. The contours of the objective function in the Df-Kd and Ka-Kd planes show an elongated valley in the Df direction. As Kd increases the objective function becomes sensitive to changes in the free molecular diffusion coefficient, which makes it possible to identify this parameter. For large values of Kd, the objective function becomes insensitive to the dissociation rate coefficient, which produces an elongated valley in the Kd direction. In a small region where the objective function is sensitive to both parameters, it is possible to identify both parameters. Parameter optimization in this zone will produce small estimation variance and narrow confidence intervals. The Ka-Kd plane was also used to construct response surfaces; the resulting contours converge toward the optimized values (Kd = 86.4 s-1) for these parameters.
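The grid evaluation described above can be sketched as follows. Here objective stands for the forward model combined with the sum-of-squares criterion, is supplied by the caller, and holds the third parameter fixed at its known value (all names are illustrative):

import numpy as np

def response_surface(objective, p1_range, p2_range, n=100):
    # Discretize each parameter domain into 100 points, giving the
    # 10000 grid points per response surface plot mentioned above.
    p1 = np.linspace(p1_range[0], p1_range[1], n)
    p2 = np.linspace(p2_range[0], p2_range[1], n)
    phi = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            phi[i, j] = objective(p1[i], p2[j])
    return p1, p2, phi

Contour plots of phi then reveal the elongated valleys discussed above: a flat valley along one axis means the objective function is insensitive to that parameter.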
Having determined the diffusion coefficient, one can determine the individual values of the reaction rate coefficients in another FRAP experiment conducted in reaction dominant or reaction-diffusion regimes. 4. One possible approach to estimating the mass transport and binding rate parameters uniquely from the FRAP protocol is to conduct two FRAP experiments on the same class of macromolecule and cell. One experiment may be used to measure the molecular diffusion coefficient of the biomolecule independent of binding in an effective diffusion regime. The model parameters (Df, Ka, Kd) were estimated by minimizing an appropriate objective function that represents the discrepancy between observed and predicted FRAP data. When the measurement errors asymptotically follow a multivariate normal distribution with zero mean and covariance matrix V, the likelihood function L(\u03b2) can be formulated as

L(\u03b2) = (2\u03c0)-N/2 |V|-1/2 exp[-(1/2)(U* - U(\u03b2))T V-1 (U* - U(\u03b2))] \u00a0\u00a0\u00a0 (A3)

where the covariance matrix is V = E[(U* - U(\u03b2))(U* - U(\u03b2))T] and E is the statistical expectation. The maximum of the likelihood function must satisfy the set of equations \u2202L/\u2202\u03b2 = 0. When the error covariance matrix is known, maximization of Eq. (A2) is equivalent to the minimization of the following weighted least square problem (the values of \u03b2 that maximize Eq. (A2) also minimize the equation below):

\u03c6(\u03b2) = [(U* - U(\u03b2))T V-1 (U* - U(\u03b2))] \u00a0\u00a0\u00a0 (A5)

Furthermore, if information is available about the values and distribution of the parameters being optimized, it can be incorporated in the objective function by modifying it to:

\u03c6(\u03b2) = [(U* - U(\u03b2))T V-1 (U* - U(\u03b2))] + [(\u03b2* - \u03b2)T V\u03b2-1 (\u03b2* - \u03b2)] \u00a0\u00a0\u00a0 (A6)

The second term, which is sometimes called the plausibility criterion or penalty function, ensures that the optimized values of the parameters remain in some feasible region around \u03b2*. Matrices V and V\u03b2, which are sometimes called weighting matrices, provide information about the measurement accuracy as well as any possible correlation between measurement errors and between parameters. An obvious limitation of Eq. (A6) is that the error covariance matrix is generally not known. A common approach to overcoming this problem is to make some a priori assumptions about the structure of the error covariance matrix. In the absence of any additional information regarding the accuracy of the input data, the simplest and most recommended way is to assume that the observation errors are uncorrelated, which implies setting V equal to the identity matrix and V\u03b2 to zero. Many techniques have been developed in the past to solve nonlinear optimization problems [...,38,44]. The Levenberg-Marquardt method computes the parameter update \u0394pk as

\u0394pk = -(J(pk)T J(pk) + \u03bbD(pk)T D(pk))-1 J(pk)T r(pk) \u00a0\u00a0\u00a0 (A7)

where \u03bb is a positive scalar known as Marquardt's parameter or Lagrange multiplier, J is the Jacobian or sensitivity matrix, and D is a scaling positive definite matrix that ensures the descent property of the algorithm even if the initial guess is not \"smart\". For non-zero values of \u03bb, the Hessian approximation is always a positive definite matrix, which ensures the descent property of the algorithm. If D is the identity matrix, the Levenberg-Marquardt algorithm interpolates between the steepest descent (\u03bb \u2192 +\u221e) and the Gauss-Newton (\u03bb \u2192 0) methods. The Gauss-Newton method assumes that J(pk)T J(pk) is a sufficient approximation for the Hessian. Equations (A7), which are the normal equations for Eq. (16), failed to converge to a solution in the optimization problem considered in this study. The reason for failure was computation of the ill-conditioned J(pk)T J(pk). To avoid this problem, we solved the linear least square problem (Eq. (16)) by QR decomposition.
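A minimal Python sketch of one update of the form (A7), computed as in this study without forming the ill-conditioned normal equations: the equivalent augmented least-squares problem is solved by QR decomposition (the function name and the default identity scaling are illustrative assumptions):

import numpy as np

def lm_step(J, r, lam, D=None):
    # Solves min || [J; sqrt(lam)*D] dp + [r; 0] ||, whose normal
    # equations are exactly Eq. (A7): dp = -(J^T J + lam D^T D)^-1 J^T r.
    m, n = J.shape
    if D is None:
        D = np.eye(n)   # identity scaling: interpolates between
                        # steepest descent and Gauss-Newton
    A = np.vstack([J, np.sqrt(lam) * D])
    b = np.concatenate([-r, np.zeros(n)])
    Q, R = np.linalg.qr(A)              # QR avoids squaring the condition number
    return np.linalg.solve(R, Q.T @ b)  # the update dp
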
"}
+{"text": "Simulation methods can assist in describing and understanding complex networks of interacting proteins, providing fresh insights into the function and regulation of biological systems. Recent studies have investigated such processes by explicitly modelling the diffusion and interactions of individual molecules. In these approaches, two entities are considered to have interacted if they come within a set cutoff distance of each other. In this study, a new model of bimolecular interactions is presented that uses a simple, probability-based description of the reaction process. This description is well-suited to simulations on timescales relevant to biological systems (from seconds to hours), and provides an alternative to the previous description given by Smoluchowski. In the present approach (TFB) the diffusion process is explicitly taken into account in generating the probability that two freely diffusing chemical entities will interact within a given time interval. It is compared to the Smoluchowski method, as modified by Andrews and Bray (AB). When implemented, the AB & TFB methods give equivalent results in a variety of situations relevant to biology. Overall, the Smoluchowski method as modified by Andrews and Bray emerges as the most simple, robust and efficient method for simulating biological diffusion-reaction processes currently available. Molecular biology is moving to an age where the amount of data and its complexity challenge our efforts to understand it. Many recent experimental studies have concentrated on obtaining accurate protein-protein interaction maps for genomes, ranging from unicellular organisms to human. Combining experimental data with modelling makes it possible to tackle this new wealth of information and study the way function emerges from protein interaction networks (for reviews of this field see references 1-3). Finally, the present and corrected Smoluchowski approaches were also compared in a situation containing a concentration gradient. The concentration gradient was produced by a point source of a molecule A, with the protein concentration quickly falling due to degradation (total concentration = 40 nM, with kE = 106 M-1s-1). The diffusion constants and timestep parameters were again varied as previously. The gradients generated were found to be identical (p > 0.9 on U-test). All statistical analyses were performed using the R package. The figure contrasts the three treatments of the reaction process. In the first approach (A), at each timestep \u03b4t in the diffusion process the distance between the chemical entities is checked and if they come into close proximity (distance d < \u03c3b) the two entities are said to have reacted together. The downside of the approach is that many diffusion steps need to be computed to simulate the reaction kinetics accurately. The second approach (B) is that of Andrews and Bray, in which the binding radius, \u03c3b, is adjusted so that the correct reaction kinetics are reproduced for timesteps \u0394t \u2265 100 \u00d7 \u03b4t. This approach produces an efficient algorithm that yields the correct reaction kinetics while using larger timesteps. Finally, (C) illustrates the present approach where the reaction radius is replaced by a smooth interaction probability. The two entities are considered to diffuse freely during the timestep \u0394t, thereby producing a probability PAB of interaction. We have presented a formal, theoretically sound framework that provides reliable and accurate simulations of the diffusion-reaction process for biological systems.
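For illustration, a minimal Python sketch of the cutoff-radius schemes (A) and (B): a free Gaussian diffusion step followed by a distance check against the binding radius \u03c3b; all parameter values below are arbitrary placeholders:

import numpy as np

rng = np.random.default_rng(0)

def brownian_step(pos, D, dt):
    # Free diffusion: Gaussian displacement with variance 2*D*dt per axis.
    return pos + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=pos.shape)

def reacted(pos_a, pos_b, sigma_b):
    # Cutoff rule: the pair reacts if it ends the step within sigma_b.
    return np.linalg.norm(pos_a - pos_b) < sigma_b

a = brownian_step(np.zeros(3), D=1e-12, dt=1e-6)       # placeholder units
b = brownian_step(np.full(3, 5e-9), D=1e-12, dt=1e-6)
if reacted(a, b, sigma_b=2e-9):
    pass  # record the reaction; in method (B) sigma_b is the adjusted radius

In the probability-based approach (C), the distance test is replaced by a draw against PAB, the interaction probability accumulated over the whole timestep \u0394t.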
We compared it with the Smoluchowski method as modified by Andrews and Bray. Although differences were expected to appear between the Andrews and Bray and the current approach in certain circumstances, the results indicate that the reaction rates produced by both methods converge. This is thought to be essentially due to the averaging that takes place as the number of interactions increases. Hence the two methods are for practical purposes equivalent (p > 0.55). It cannot be ruled out, however, that differences will appear for more complex systems. For example, in the context of reversible reactions, recombination effects might be best modelled using a probability based method. Overall, the Andrews and Bray method for simulating diffusion-reaction processes appears robust to low concentrations and to gradient effects. However, a possible improvement on this method would be the analytical derivation of the radius of reaction for long timesteps, in place of its present approximation. The Andrews and Bray method was consistently computationally more efficient, running up to ~15% faster depending upon the system being simulated. An in depth theoretical analysis of the diffusion-reaction approach in the context of event driven simulations has recently been published by Zon and Wolde. We have shown that the modified Smoluchowski method provides results that are indistinguishable from those produced using the much more elaborate and realistic model presented here, at a lower computational cost. The Andrews and Bray, radius-based, method thus appears to be the most simple, robust and efficient method for simulating diffusion-reaction processes currently available. The authors declare that they have no competing interests. ALT designed the new methodology and mathematics, PWF helped with implementation and in checking the derivations, PAB provided the initial impetus and supported the project through its different stages. All authors read and approved the final manuscript."}
+{"text": "Bacterial vaginosis (BV) is the most common vaginal infection worldwide and is associated with significant adverse sequelae. We have recently characterized vaginolysin (VLY), the human-specific cytotoxin produced by Gardnerella vaginalis and believed to play a critical role in the pathogenesis of BV and its associated morbidities. We hypothesize that novel antibody-based strategies may be useful for detection of VLY and for inhibition of its toxic effects on human cells. Using purified toxin as an immunogen, we generated polyclonal rabbit immune serum (IS) against VLY. A western blot of G. vaginalis lysate was probed with IS and a single band (57 kD) identified. Immunofluorescence techniques using IS detected VLY production by G. vaginalis. In addition, we have developed a sandwich ELISA assay capable of VLY quantification at ng/ml concentrations in the supernatant of growing G. vaginalis. To investigate the potential inhibitory role of IS on VLY-mediated cell lysis, we exposed human erythrocytes to VLY or VLY pretreated with IS and determined the percent hemolysis. Pretreatment with IS resulted in a significant reduction in VLY-mediated lysis. Similarly, both human cervical carcinoma cells and vaginal epithelial cells exhibited reduced cytolysis following exposure to VLY with IS compared to VLY alone. These results confirm that antibody-based techniques are an effective means of VLY detection. Furthermore, VLY antiserum functions as an inhibitor of VLY\u2013CD59 interaction, mitigating cell lysis. These strategies may have a potential role in the diagnosis and treatment of BV.
Bacterial vaginosis (BV) is the most common vaginal infection worldwide and is associated with significant adverse consequences including preterm labor and delivery. The pathogenesis of BV remains poorly understood. It is most commonly defined as a pathological state characterized by the loss of normal vaginal flora, particularly Lactobacillus species, and overgrowth of other microbes including Gardnerella vaginalis, Bacteroides species, Mobiluncus species, and Mycoplasma hominis. Recent data, however, suggest a primary role for G. vaginalis as a specific and sexually transmitted etiological agent in BV, as was initially postulated by Gardner and Dukes in 1955. Our laboratory has recently sequenced and characterized the human-specific, pore-forming toxin produced by G. vaginalis known as vaginolysin (VLY). We hypothesize that novel antibody-based techniques may be useful for detection and quantification of VLY production. These strategies may represent a substantial improvement in existing methods of BV diagnosis. Furthermore, antibodies generated against VLY may disrupt VLY-CD59 binding, thereby reducing its toxic effects on human cells. The use of human erythrocytes from healthy adult volunteers following verbal informed consent was approved by the Columbia University Institutional Review Board (Protocol IRB-AAAC5641). G. vaginalis strains 14018, 14019 and 49145 were purchased from ATCC. ARG3 is a clinical isolate of G. vaginalis kindly provided by Susan Whittier. All G. vaginalis strains were grown in brain heart infusion supplemented with 10% fetal bovine serum (HyClone), 5% Fildes enrichment (Remel) and 4 ng/ml of amphotericin. Cultures were incubated at 37\u00b0C and 5% CO2. Human cell lines were purchased from ATCC. Human cervical epithelial cells (HeLa) were grown at 37\u00b0C and 5% CO2 in minimal essential medium (Invitrogen) supplemented with 10% fetal bovine serum and 10 \u00b5g/ml ciprofloxacin. Human vaginal epithelial cells (VK2) were grown in serum free keratinocyte growth media (Invitrogen) with 0.1 ng/ml EGF, 0.05 mg/ml bovine pituitary extract and 0.4 mM calcium chloride. The genomic region encoding VLY was amplified from G. vaginalis 14018 as described, using a forward primer (5\u2032-GCCGCCCATATGTCGTTGAATAATTATTTGTGG-3\u2032) along with the previously described V6 primer, and the product was transformed into E. coli BL21-AI competent cells (Invitrogen) for expression and purification as described. Purified VLY toxin was generated and submitted to Cocalico Biologicals. According to their protocol, adult rabbits were injected with a minimum of 100 \u00b5g antigen mixed with Complete Freund's Adjuvant subcutaneously and/or intramuscularly at multiple sites. Booster doses containing a minimum of 50 \u00b5g antigen mixed with Incomplete Freund's Adjuvant were administered on days 14, 21 and 49. A test bleed was performed on day 56. Prior to the first immunization, serum was collected from each rabbit to serve as negative control. G. vaginalis 14018 was grown in culture media and bacterial cells were fixed on a glass chamber slide using 4% paraformaldehyde. Non-specific binding sites were blocked using 5% normal donkey serum and 0.2% triton X-100. Pre-immune or immune serum was added to each slide (1\u2236500 dilution) for 1 h at room temperature. Following serial washes with PBS and 0.2% triton X-100, donkey anti-rabbit conjugated to Alexa Fluor (AF)-488 was added for 30 min in the dark with gentle shaking.
After washing, chambers were removed from the slide and cover slips were mounted with ProLong Gold antifade with DAPI (Invitrogen). Slides to which no primary antibody was added served as negative controls. G. vaginalis 14018 was grown on an HBT plate and fresh colonies were resuspended in lysis buffer with benzonase nuclease. The lysate was boiled and separated on a 10% polyacrylamide gel. Proteins were transferred to polyvinylidene difluoride membranes, blocked with 5% milk and probed using rabbit polyclonal anti-VLY antiserum. Detection was with HRP-conjugated anti-rabbit IgG (Santa Cruz Biotechnology) and ECL. Membranes probed with pre-immune serum served as a negative control. Four strains of G. vaginalis were grown on HBT plates, and colonies were scraped and inoculated into 30 ml of liquid media. A 500 \u00b5l aliquot of each culture was obtained every 6 hours for determination of OD600. An additional 1 ml sample from each was pelleted by centrifugation and supernatant stored at \u221220\u00b0C prior to ELISA. Immuno-96 MicroWell plates (Nunc) were coated with anti-pneumolysin antibody diluted 1\u2236500 in coating buffer and incubated at 4\u00b0C overnight. Wells were washed with PBS and 0.05% Tween 20. Non-specific binding sites were blocked using PBS with 10% fetal bovine serum for 1 h. Supernatants (100 \u00b5l) were added to each well and plates were incubated at room temperature for 2 h. Known concentrations of recombinant VLY toxin diluted in G. vaginalis culture media were used as standards. Rabbit polyclonal anti-VLY antiserum (diluted 1\u22361000 in blocking solution) was added to each well for 30 min at room temperature. After washing, goat anti-rabbit HRP antibody was added for 30 min. Wells were thoroughly washed and 100 \u00b5l of TMB substrate (Thermo Scientific) was added to each well and the plate was incubated in the dark for 15 min. 50 \u00b5l of stop solution (2N sulfuric acid) was added to each well and OD450 determined. Human blood was obtained by venipuncture, and erythrocytes were immediately isolated by centrifugation and repeated washing in sterile HBSS. A 1% solution of packed erythrocytes in sterile PBS was prepared and added to a 96-well polystyrene V-bottomed plate (100 \u00b5l/well). The hemolysis assay was performed as described. 24-well plates were seeded with VK2 or HeLa human epithelial cells in appropriate media and grown to >90% confluence. 12 hours prior to use, HeLa cells were weaned from serum. Recombinant VLY toxin diluted in media (10 \u00b5g/ml) or vehicle control was added to each well. Where indicated, toxin was preincubated with pre-immune or immune sera for 30 min at 4\u00b0C prior to use in the assay. The plates were incubated for 45 min at 37\u00b0C and 5% CO2. Supernatant was removed and the concentration of lactate dehydrogenase was determined using a commercial kit (Roche) according to the manufacturer's instructions. Data were expressed as mean\u00b1SEM and compared using one-way analysis of variance (ANOVA) with Tukey post-test for comparison of individual groups. A western blot of G. vaginalis 14018 lysate revealed a single band using polyclonal immune serum as the primary antibody. The four strains of G. vaginalis were inoculated into liquid media and bacterial growth curves were generated as determined by optical density (600 nm). All strains of G. vaginalis grew at similar rates in liquid media. VLY-mediated lysis of both HeLa cells and vaginal epithelial cells (VK2) was significantly reduced by pretreatment with immune serum.
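The hemolysis readout itself follows the cited protocol; the percent-hemolysis arithmetic below is the standard normalization to a fully lysed control and is an assumption on our part, shown with made-up absorbance readings:

import numpy as np

def percent_hemolysis(sample, blank, total):
    # Released hemoglobin absorbance, blank-subtracted and normalized
    # to a detergent-lysed (100%) control. Assumed form of the
    # calculation; the cited protocol may differ in detail.
    return 100.0 * (sample - blank) / (total - blank)

vly = np.array([0.82, 0.79, 0.85])      # hypothetical readings, VLY alone
vly_is = np.array([0.21, 0.25, 0.19])   # hypothetical readings, VLY + immune serum
print(percent_hemolysis(vly.mean(), 0.05, 0.90))
print(percent_hemolysis(vly_is.mean(), 0.05, 0.90))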
A potential role for novel, molecular based techniques for the diagnosis of BV has recently emerged. Importantly, preliminary studies evaluating these PCR-based strategies have provided additional evidence for a primary role of G. vaginalis in BV, and these assays had both high negative and positive predictive values for the diagnosis of BV. While these molecular based diagnostic strategies are promising, the required expertise, laboratory resources and expense limit their use in the primary care setting. We demonstrate here that antibody-based techniques are an effective means of identifying G. vaginalis through the detection of its pore-forming toxin VLY. The ELISA based assay in particular is sensitive, robust and directly correlates with the concentration of G. vaginalis, reported to be an independent predictor of BV and subsequent preterm delivery. The human-restricted activity of VLY represents a barrier to the study of pathogenesis and candidate therapeutic strategies. Disruption of the interaction of VLY with its host cell receptor, human CD59, may represent a novel approach to the treatment of BV. We demonstrated that polyclonal immune serum functions to inhibit the VLY-CD59 interaction, thereby reducing its toxic effects on a variety of human cell lines. These findings may serve as a preliminary basis for in vivo studies investigating a potential role for immunotherapy in the management of women with BV and the development of vaccine based strategies for disease prevention."}
+{"text": "From May through June 2001, an outbreak of acute gastroenteritis that affected at least 200 persons occurred in a combined activity camp and conference center in Stockholm County. The source of illness was contaminated drinking water obtained from private wells. The outbreak appears to have started with sewage pipeline problems near the kitchen, which caused overflow of the sewage system and contaminated the environment. While no pathogenic bacteria were found in water or stool specimens, norovirus was detected in 8 of 11 stool specimens and 2 of 3 water samples by polymerase chain reaction. Nucleotide sequencing of amplicons from two patients and two water samples identified an emerging genotype designated GGIIb, which was circulating throughout several European countries during 2000 and 2001. This investigation documents the first waterborne outbreak of viral gastroenteritis in Sweden, where nucleotide sequencing showed a direct link between contaminated water and illness. Norovirus, family Caliciviridae, includes a large number of genetically related strains associated with acute gastroenteritis. Longitudinal surveys have shown that caliciviruses and especially noroviruses are common causes of nosocomial and community-associated outbreaks of acute gastroenteritis worldwide. In the present outbreak, noroviruses were identified in both stool and water samples. An outbreak of acute gastroenteritis occurred in a combined activity camp and conference center in Stockholm County from May to the end of June 2001. During the summer, the center caters to both overnight guests and daytime visiting groups. A separate cafe for outside visitors to the nearby beach is also on the premises. Environmental and microbiologic investigations were conducted to determine the source of the outbreak and implement control measures to stop the outbreak and prevent similar situations in the future. The municipal environmental health unit was first contacted on June 12. The facilities were inspected, and water and food samples were collected.
On June 15, the Stockholm County Council Department of Communicable Disease Control and Prevention was contacted, and the premises were reinspected on June 25 and July 3. Additional water samples were taken on several occasions during June and July. A total of 11 stool specimens were collected (2 from staff and 9 from visiting guests) and cultured for bacterial enteropathogens, including Salmonella, Shigella, Campylobacter, and Yersinia. Ten water samples were examined for fecal coliforms, total coliforms, fecal streptococci, and sulphite-reducing clostridia. Seven food products were examined for aerobic microorganisms, enterobacteriaceae, enterococci, fecal coliforms, Salmonella, Bacillus cereus, Clostridium perfringens, coagulase-positive staphylococci, yeast, and mold. Approved standard laboratory methods were used for all bacteriologic investigations. Stool samples were examined for norovirus by electron microscopy and reverse transcription\u2013polymerase chain reaction (RT-PCR), as previously described. Water samples were examined with primers n12 and n13 (5\u2032-CTT CAG ANA GNG CAC ANA GAG T-3\u2032); these primers yield a 234-bp product. The PCR products from two human and two water samples were sequenced. The samples were sequenced from both directions by using primer pair n12/n13 (water samples) and primer pair JV12/JV13 (patient samples) by ABI Prism BigDye Terminator Cycle Sequencing Ready Reaction kit on an ABI 310 automated sequencer. Sequences from prototype strains of caliciviruses from the GenBank database were aligned with the sequences from patient and water samples. Programs from the PHYLIP program package were used to construct the phylogenetic trees. SEQBOOT (NIH) was used for bootstrap resampling to produce 100 different datasets from the aligned sequences. From these datasets, phylogenies were estimated by DNAMLK (NIH). CONSENCE (NIH) was used to construct a consensus tree from the obtained data and to obtain bootstrap values. The tree was drawn with Treeview. The nucleotide sequence accession number assigned by GenBank is AY240939. The activity camp, conference center, and nearby cafe were supplied with ground water from their own private wells, located at the premises. Six months before the outbreak, they had started to use water from two newly drilled wells located within 20 m of each other. Only chemical parameters had been analyzed before the new wells were put in use. The water from both wells was held in a common reservoir and was not disinfected before distribution. According to personnel at the camp, the wells were approximately 80 m deep, and the soil layer was 18 m at the location of the wells. A third well was drilled at the same time and located close to the other two but was not put in use. Previously, water had been obtained from an old well located further away from the facilities. Since this old well had limited capacity, and sometimes its water was not potable, new wells with enough capacity to fulfill increased demands had been drilled. For practical and economical reasons, the new wells had been placed closer to the center facilities. Sewage from the camp was connected to the community system and was transported to the nearest sewage treatment facility. The sewage pipes were old, and personnel reported that on several occasions problems with the capacity of the system had occurred. In April 2001, a blockage of the overflow in the low-pressure-system well, located near the kitchen facilities, occurred, and sewage had spilled out on the ground.
On this site, located approximately 100 m from the ground water wells, the rock was covered by only 1\u20132 m of soil. Sewage had also overflowed on the ground near the kitchen in the autumn of 2000 because of a stoppage in the sewage pipeline connection to the community system. Approximately 200 people contracted gastroenteritis after consuming tap water. They had clinical symptoms of vomiting, diarrhea, abdominal pain, and fever (mostly a combination of these symptoms). Duration of symptoms varied from several hours to 2 to 3 days. The first known cases of illness occurred in a group of adults participating in a 1-day conference on May 31. Of 16 persons, 8 became ill (attack rate 50%) with gastrointestinal symptoms. Nearly 2 weeks later (June 9\u201310), a school class with 28 pupils (8\u201313 years of age) arrived for an overnight stay; approximately half became ill (attack rate 50%) with similar symptoms. The following day (June 10), the first participants of a sport-training camp arrived. The camp lasted for 10 days, during which a total of 150 children (9\u201312 years of age) and 20 adults stayed at the facilities in three overlapping periods. The first cases of illness in this group occurred the day after arrival; approximately 100 persons became ill (attack rate 58%). During the next 2 weeks, several more guests and visiting groups reported illness after visiting the center; some of these persons had not eaten but had just drunk the center\u2019s tap water. Two of these groups were children (8\u201313 years of age); the attack rate in both groups was 40%. The outbreak was not controlled until the facilities closed for >1 week at the end of June. Some of the personnel working at the center also reported gastrointestinal symptoms, including one of the kitchen personnel, who became ill on June 13 and was taken off duty. On the first visit, general recommendations regarding kitchen hygiene and cleaning of the environment were given. When the results of the first water samples were ready, additional recommendations on boiling all water used for drinking and food preparation were given. At the same time, the environment was thoroughly sanitized. In spite of these measures, new cases continued to occur, so the facilities were closed for >1 week at the end of June to interrupt possible continuous transmission among guests. After this measure, no new cases occurred. Different alternatives to prevent similar situations in the future were discussed, and the decision was made to close the wells and connect to the municipal water supply. None of the stool samples collected from the two staff or nine visitors were positive for Salmonella, Shigella, Campylobacter, or Yersinia, nor were any viruses other than calicivirus found by electron microscopy. Of the 11 samples examined by norovirus-specific PCR, 8 had an amplified PCR product of the expected size. No foodborne pathogens were found in any of the food items investigated. The first samples, collected from tap water in the kitchen on June 12 and from the water works on June 18, showed strong indication of fecal contamination, and later samples confirmed the fecal contamination. Water samples were also examined for norovirus: two of the three samples tested were positive and had nucleotide sequences identical to those obtained from the patient samples. We describe an epidemiologic and microbiologic investigation of a waterborne outbreak in which at least 200 persons became ill after staying at a combined activity camp and conference center in the Stockholm area.
A large number of daytime visitors to the beach and nearby cafe may also have become ill, so the actual number of cases has likely been underestimated. The visitors in different groups did not eat the same food items, and some visitors did not eat any food. Several of the short-stay visitors consumed only camp tap water, which was fecally contaminated. The source of illness was drinking water obtained from ground water wells that had been contaminated by sewage. Person-to-person transmission and transmission through contaminated surfaces probably contributed to the rapid spread among the overnight visitors. While no pathogenic bacteria were found in water or stool samples, norovirus belonging to genogroup II with identical nucleotide sequence in the polymerase region was obtained from both stool and water samples. The strain was identical to strain Gothenburg, previously identified in Sweden and belonging to the emerging genotype cluster GGIIb. These strains have circulated in several European countries during 2000 and 2001. The drinking water was obtained from deep ground wells close to the cafe. Before the outbreak, this cafe had had problems with low pressure in its well, which caused blockage of the sewage system. As a consequence sewage spilled out and led to contamination of the environment. At the contamination site, the soil was only 1\u20132 m deep, and cracks in the rock may have facilitated migration of microorganisms from the sewage to the ground water. Norovirus can migrate through soil and contaminate well water and cause gastroenteritis outbreaks. One possible explanation for the protracted duration of the outbreak could be a continuous leak from the sewage system, which would have caused persistent contamination of the environment. The ill persons staying at the facilities might have contributed to increased viral load in the sewage, and problems with the sewage collection system would then have further aggravated contamination of the water supply. Another possibility was that the water initially caused the outbreak, but person-to-person spread contributed to the continuous transmission. The low infectious dose of norovirus readily allows transmission through environmental contamination and aerosols. Boiling the water used for drinking and food preparation was recommended. Since the risk for transmission through aerosols generated when showering with possibly contaminated water is not well established, no recommendations were made in this regard. Another problem was how to decontaminate bed linen and other fabrics. Washing at high temperatures is the recommended procedure to eliminate viral contamination. However, if the water used for washing is contaminated, the rinsing process may lead to recontamination of the fabrics. We recommended boiling or heating water for washing to >90\u00b0C in the presence of detergents. This outbreak illustrates some problems related to private water supply. In Sweden, approximately 15% of the population has a private water supply, and the extent of gastrointestinal illness related to water is not clearly identified. Problems with person-to-person transmission of noroviruses are well known; however, risks related to exposure through contact with contaminated water and environment through vomit and aerosols are not well established. In summary, detecting identical virus in both drinking water and stool specimens from ill persons strongly indicated that norovirus was the principal pathogen of this outbreak.
Nucleotide sequence analysis identified a norovirus designated GGIIb."}
+{"text": "As wireless sensor networks are usually deployed in unattended areas, security policies cannot be updated in a timely fashion upon identification of new attacks. This gives enough time for attackers to cause significant damage. Thus, it is of great importance to provide protection from unknown attacks. However, existing solutions are mostly concentrated on known attacks. On the other hand, mobility can make the sensor network more resilient to failures, reactive to events, and able to support disparate missions with a common set of sensors, yet the problem of security becomes more complicated. In order to address the issue of security in networks with mobile nodes, we propose a machine learning solution for anomaly detection along with the feature extraction process that tries to detect temporal and spatial inconsistencies in the sequences of sensed values and the routing paths used to forward these values to the base station. We also propose a special way to treat mobile nodes, which is the main novelty of this work. The data produced in the presence of an attacker are treated as outliers, and detected using clustering techniques. These techniques are further coupled with a reputation system, in this way isolating compromised nodes in a timely fashion. The proposal exhibits good performance at detecting and confining previously unseen attacks, including the cases when mobile nodes are compromised. The development of Wireless Sensor Networks (WSNs) was mainly motivated by military applications, such as control and surveillance in battlefields, but over time their deployment has spread to other areas. In all of these applications, it is mandatory to maintain the integrity and the correct operation of the deployed network. Furthermore, WSNs are often deployed in unattended or even hostile environments, making their securing even more challenging. In addition, the trend in the recent past has been to include mobile nodes, since this can make the WSN more resilient to failures, reactive to events, provide better coverage of the monitored area, and able to support disparate missions with a common set of sensors. However, mobility additionally complicates the security issue. WSNs consist of huge numbers of sensor nodes; because so many are needed, the nodes have to be very cheap. This further implies that they possess very limited power and computation resources, small memory size and limited bandwidth. Furthermore, the incorporation of any tamper-resistant hardware would entail unacceptable costs. All of this makes the security of these networks very challenging, as the resource limited devices cannot support the execution of complicated algorithms. Moreover, WSNs use a radio band that is license-free, so anybody with appropriate equipment can listen to the communication. Finally, their deployment in areas that are difficult to reach makes them prone to node failures and adversaries. On the other hand, profound analysis of the state of the art has allowed us to identify the main issues of the existing solutions: their limited scope of detection, as the majority of them can detect only previously seen attacks, and the fact that any adjustment has to be done by humans, which cannot be done in a timely fashion due to the deployment of the nodes in hard to reach areas.
In order to overcome these issues, we have proposed an approach based on anomaly detection that is able to detect a wide range of attacks, including previously unseen ones, without the necessity of any previous knowledge about the attacks and their way of operating. Attacks are treated as data outliers, and since outliers are defined as something different from the normal, we can classify our approach as an anomaly detection one. Thus, the basic premise of this approach is that attacks are deviations from normality. However, not all deviations from normality are attacks, but we believe that they have to be reported and examined further. For this reason, in this work we only provide a first reaction to anomalies, which is their isolation, but it is assumed that the base station has an additional technique to decide whether an anomaly can be attributed to an attack or not. However, this is out of the scope of this work. The existing anomaly detection solutions mainly look for deviations in the values of the parameters that capture the properties of known attacks, which means that they use so-called numerical features. Hence, their possibilities to detect unknown attacks are limited, since it is hard to define the numerical features of unknown attacks. In order to overcome this issue, we have proposed a machine learning solution for anomaly detection along with a feature extraction process that does not capture the properties of the attacks, but rather relies on the existing temporal and spatial redundancy in sensor networks and tries to detect temporal and spatial inconsistencies in the sequences of sensed values and the routing paths used to forward these values to the base station. In this work we further propose a special way to integrate mobile nodes into this approach, given that mobility is a big issue in anomaly detection, as it can lead to observation data that have long range dependency and in this way increase its difficulty. Moreover, it is a greatly unexplored and untreated subject in the current state of the art. The data produced in the presence of an attacker are treated as outliers, and they are detected using clustering techniques. The techniques are further coupled with a reputation system, which provides an implicit response to the attackers, as the compromised nodes get isolated from the network. The proposal has been tested in the presence of attacks that were unknown during the training, exhibiting good performance at detecting and confining these attacks. To summarize, the objective of this work is to detect unknown attacks in WSNs that contain both static and mobile sensor nodes, providing special treatment for the latter given their dynamic nature, and to provide an initial response to malicious nodes. The main contribution of this work is the proposal for special treatment of mobile nodes, which provides their efficient incorporation into the existing approach, while maintaining a high performance level. The rest of the work is organized as follows: Section 2 gives more details of the state of the art solutions. Section 3 details the proposed solution, while Section 4 provides its evaluation. Finally, conclusions are drawn in Section 5. 2. A number of custom IDSs for sensor networks have been proposed. Some representative solutions are given in [...,2]. However, they can detect only previously characterized behavior, i.e., known attacks and their variations. In order to detect new attacks, they need to be adjusted by humans. Recently a few solutions that deploy machine learning techniques have appeared [...,4].
Among the existing solutions, the features used are mostly numerical ones, i.e., those that are known to change under the influence of an attacker, or are known to be weak spots. This is their major deficiency, as, relying on these features, only known attacks or their variations can be detected. Furthermore, it assumes that an attacker can exploit only the known vulnerabilities, but general experience is that vulnerabilities are detected after being exploited by an adversary. Some of them assume that the feature sets can be expanded. Now we will see how the changes introduced by the attacker affect the feature values. Bearing in mind that each sensed value or routing hop participates in n features, where n is the size of the n-gram, if the attacker changes one value, the occurrences of up to 2n n-grams change. For example, the third element in the sequence \u20261 0 0 1 1\u2026 for n = 3 participates in three n-grams: 100, 001 and 011. However, if the attacker changes this value into 1, the sequence becomes \u20261 0 1 1 1\u2026, in which case the third element participates in these n-grams: 101, 011 and 111. This results in decreased occurrences of the n-grams 100 and 001, while the occurrences of 101 and 111 become increased. In total, the occurrence of four n-grams is changed. For these reasons, if the attacker introduces Nerr changes in a sample of size Nsample, the value of \u0394D will range between 0 and a maximal value \u0394Dmax that corresponds to the case when the effects of each change are completely uncorrelated, so they sum together. Thus, having in mind the correlation of the n-grams, in order to model this change we introduce a factor F(\u03c1) that ranges from 0 to 1, where \u03b1 = 1 - 1/\u03c1, \u03b2 and k are constants defined in the design process (the specific meaning of both will be explained later in this section) and \u03c1 is the coefficient of total correlation between the n-grams. The value of F(\u03c1) is \u03b2 for \u03c1 = 0 (the reason for this will be explained in the following), and 1 for \u03c1 = 1. For a set of random variables X1, X2, \u2026, Xk, the total correlation C is given by the following formula:

C = H(X1) + H(X2) + \u2026 + H(Xk) - H

where H(Xi) is the information entropy of variable Xi, while H is the joint entropy of the variable set {X1, X2, \u2026, Xk}. In our case, the variables are the extracted n-grams. For the sake of calculating the above formula, their distribution can be approximated either as a common distribution depending on the purpose of the deployed sensor network, or using the historical data sensed by the network. The coefficient of total correlation expresses the amount of redundancy present among the variables. Regarding the value of \u03b2, we have to take into account that the higher the value of \u03b2 is, the closer the function becomes to its asymptotic function F(\u03c1) = 1. Thus, the effect of \u03c1 becomes smaller. Similar stands for the value of k. As k \u2192 0, the function becomes closer to the same asymptotic function. In the opposite case, as k \u2192 \u221e, the function reaches its asymptote: F(\u03c1) = 0 for \u03c1 < 1, F(\u03c1) = 1 for \u03c1 = 1. In both cases the effect of \u03c1 becomes less significant. Finally, we obtain an expression for Nerrmin, the minimal number of changes that can be detected, in which we have the following degrees of freedom: Nsample, n and fth. Lower characterization periods (Nsample) and threshold (fth) on one side, and higher n on the other, give us the opportunity to detect the attacker even if he introduces very few changes. However, this can also result in a higher false positive rate, so a tradeoff between higher detection and lower false positive rate has to be established. This depends on many factors, such as the application of the deployed WSN or the existing redundancy.
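A minimal Python sketch (names ours) of the n-gram feature extraction described above, reproducing the worked example:

from collections import Counter

def ngram_counts(seq, n=3):
    # Occurrence counts of overlapping n-grams in a sequence of
    # sensed values (or routing hops).
    return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))

normal = [1, 0, 0, 1, 1]
attacked = [1, 0, 1, 1, 1]   # the attacker flips the third element

# 100 and 001 disappear, 101 and 111 appear, 011 is unchanged:
print(ngram_counts(normal) - ngram_counts(attacked))   # Counter({(1,0,0): 1, (0,0,1): 1})
print(ngram_counts(attacked) - ngram_counts(normal))   # Counter({(1,0,1): 1, (1,1,1): 1})

In total the occurrence of four n-grams changes here, consistent with the 2n upper bound used in the derivation of \u0394Dmax.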
Also, the values of both \u03b2 and k indirectly affect this value through F(\u03c1). As the value of \u03b2 increases or the value of k decreases, the value of F(\u03c1) for the same \u03c1 increases, which further decreases the value of Nerrmin. In the opposite cases, as the value of \u03b2 decreases or the value of k increases, the value of Nerrmin will increase. The previous expression also helps us to define the minimal value of \u03b2. It derives from the constraint that the maximal possible value of Nerrmin is equal to Nsample. For the same reason, F(\u03c1) has to be different than 0 for \u03c1 = 0, which results in a lower bound on \u03b2. 3.6. Every sensor node is being examined by agents that execute one of the algorithms for detecting attacks, which reside on nodes in its vicinity and listen to its communication. The agents are trained separately. The system of agents is coupled with a reputation system where each node's reputation grows as long as it behaves correctly and vice versa. We further advocate avoiding any kind of interaction with the low-reputation nodes: to discard any data or request coming from these nodes or to avoid taking them as a routing hop. In this way, compromised nodes remain isolated from the network and have no role in its further performance. After this, additional actions can be performed by the base station, e.g., it can revoke the keys from the compromised nodes, reprogram them, etc. The threshold fth is taken to be 1 for the following reasons. Considering that the attacks will often result in the creation of new n-grams, it is reasonable to assume that the extracted vector in the presence of attackers will not be a subset of any vector extracted in a normal situation, thus the distance will never be lower than 1. We further define two reputation values, repQE and repMD, based on the previously defined QE and MD values, and a joint reputation rep used for updating the overall reputation based on these two values. In this work the reputation is calculated in the following way:

if (QE < 1) { repQE = 1; }
else { repQE = 1 - QE/2; }

if (MD < 1) { repMD = 1; }
else { repMD = 1 - MD/2; }

if (QE > 1) { rep = repQE; }
else { rep = repMD; }

There are two functions for updating the overall reputation of the node, depending on whether the current reputation is below or above the established threshold that distinguishes normal and anomalous behavior. If the current reputation is above the threshold and the node starts behaving suspiciously, its reputation will fall quickly. On the other hand, if the reputation is lower than the established threshold and the node starts behaving properly, it will need to behave properly for some time until it reaches the threshold in order to \u201credeem\u201d itself. The first objective is provided by the function x + log(1.2 * x). The second objective is provided by the coefficient c_limit, which takes values lower than 1 and whose purpose is to limit selective behavior of a node by decreasing the reputation growth if the reputation value is below the threshold. Very low values of this coefficient obligate nodes to behave properly most of the time. Finally, the overall reputation is updated in the following way:

if (last_rep[node] > threshold) {
new_rep[node] = last_rep[node] + rep + log(1.2 * rep); }
else {
new_rep[node] = last_rep[node] + c_limit * (rep + log(1.2 * rep)); }

If the final reputation value falls out of the valid range, it is rounded to the minimum if it is lower, or to the maximum in the opposite case. The threshold value can be set to the middle of the reputation value range (50 in our case) at the starting point. However, this value depends on many different factors. One of the most important factors is risk, and the threshold value is proportional to it: if the operation in the network (or in some of its parts) is critical, the threshold value should be higher, and vice versa. Thus, a process that evaluates risk should be able to update the threshold value.
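A runnable Python transcription of the reputation pseudocode above; the logarithm base, the c_limit value and the clamping range are assumptions (the text only requires c_limit < 1 and a mid-range starting threshold):

import math

THRESHOLD = 50.0   # mid-range starting value, as discussed above
C_LIMIT = 0.3      # assumed; any value < 1 damps growth below the threshold

def step_reputation(qe, md):
    # Per-observation reputation from the QE and MD values defined above,
    # mirroring the pseudocode.
    rep_qe = 1.0 if qe < 1 else 1.0 - qe / 2.0
    rep_md = 1.0 if md < 1 else 1.0 - md / 2.0
    return rep_qe if qe > 1 else rep_md

def update_reputation(last_rep, rep):
    # rep is assumed positive here; the pseudocode above does not
    # define log(1.2 * rep) for non-positive rep.
    delta = rep + math.log(1.2 * rep)
    if last_rep <= THRESHOLD:
        delta *= C_LIMIT   # slow redemption below the threshold
    return min(max(last_rep + delta, 0.0), 100.0)  # clamp to assumed 0..100 range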
The threshold value can be set to the middle of the reputation value range (50 in our case) at the starting point. However, this value depends on many different factors. One of the most important factors is risk, and the threshold value is proportional to it: if the operation in the network (or in some of its parts) is critical, the threshold value should be higher, and vice versa. Thus, a process that evaluates risk should be able to update the threshold value.The second objective is provided by the coefficient QE value as well. On the other hand, the spatial coherence should not detect any anomalies. Thus, the final reputation will fall only if both spatial and temporal algorithms detect anomalies. In the opposite case, its reputation will not change significantly. This is implemented in the following way:if {if (space_rep < threshold) {\u2003result = value_rep;\u2003\u2003} else { result = 1 - value_rep; }\u2003} else {result = value_rep; }\u2003where value rep is the reputation assigned by the algorithms for temporal characterization and space rep is the reputation assigned by the algorithms for spatial characterization. On the other hand, as mentioned in the previous text, in the situations such as the data coming from a node exhibits large variations, temporal inconsistencies are not likely to be detected. However, spatial inconsistencies are very likely to be detected. Thus, spatial inconsistence is sufficient in order to raise an alarm.However, if during the testing of temporal coherence, we get normal data different from those that the clustering algorithms saw during the training, it is possible to get high nBad and the number of its appearances in good routes be nGood. Finally, if nGood is greater than nBad, the node keeps its reputation value, and in the opposite case, it is assigned the following reputation value:Concerning the detection of routing protocol anomalies, the explained approach can tell us if there is something suspicious in routing paths of a certain node. Yet, in order to find out the nodes that are the origin of the attack, we need to add one more step. In this second step, if the reputation of the routes calculated in the previous step is lower than the established threshold, the hops that participated in the bad routes will be added to the global list of bad nodes, or if they already exist, the number of their appearance in bad routes is increased. The similar principle is performed for the correct nodes. For each node, let the number of its appearances in bad routes be In this way, as the bad node spreads its malicious behavior, its reputation will gradually decrease.3.7.etc. It is not easy to guess the optimal parameters a priori, and in our case an additional problem is the impossibility of human interaction. Moreover, in the case where an agent resides on a compromised node, it is possible for the attacker to compromise the agent as well. We consider that additional security measures that protect the agent from the host (and vice versa) are taken, such as those proposed in [Given the distributed nature of WSNs, the detection should be organized in a distributed manner as well. In our approach detectors are implemented as software agents and they reside on physical nodes. 
It is important to notice that machine learning techniques have many parameters that should be set from the start, e.g., duration of training, size of the lattice in the case of SOM, crossover and mutation probabilities in the case of GA, posed in , so agen\u03b1 stands for the number of correct decisions made by the detector, while \u03b2 stands for the number of incorrect ones. The voting system decides whether a response is right or wrong based on majority voting.In order to overcome these issues, we introduce agent redundancy, where more than one agent monitors the same node. Each physical node may contain more than one agent. In the beginning we have a group of agents that implement one of the proposed algorithms with different parameter settings. Every node is being examined by an agent that resides on another node in its vicinity and which promiscuously listens to its communication. Each of the agents is trained separately. Final decision can be made either applying majority voting, or a weighted sum, where each weight depends on the \u201cquality\u201d of each agent. A simple and efficient way of calculating this quality could be to introduce agent reputation . This reputation can be calculated using beta function , which h3.8.i.e., they should be able to sense the environment, continue sending their own sensed values and be used as a routing hop in forwarding sensed values of other nodes as well. Due to their special nature, these nodes require special treatment in the proposed detection system.Mobile nodes introduce additional alterations in the system, making the detection process more complicating. However, the presence of mobile nodes should not affect the proper functioning of the WSN, i.e., it has not changed its position significantly, which can be concluded by the base station if it still uses similar group of nodes to route its data, or if it starts using completely new routes, the old model has to be discarded and a new model has to be established and trained for the same node. This goes for both models: the one based on sensed values, i.e., if a node has changed its position significantly, it is very probable that sensed values will be different, thus a model has to be established by performing the training with new data, and also the one based on routing information.In order to minimize the disruption of the detection system, we propose the following principle: if the nodes encounter with a new node in their area, they will ask the base station about its reputation and continue the interaction assuming the existing reputation value. Concerning a mobile node that has changed the position, we distinguish two possible situations: the node remains in the same area, Regarding the nodes whose routes will be changed with the introduction of a new node that behaves properly, according to the presented model and the adopted distance function, it is not possible to introduce significant change so as to raise doubts, thus it is not necessary to perform the re-training. However, if more new nodes appear in the routes, it is advisable to perform the re-training (with both old and new data) in order to avoid false positives.4.4.1.The proposed approach has been tested on a simulator of sensor networks developed by our research group and designed using the C++ programming language. We have decided to design a simulator mainly because there is no available testbed for security applications in sensor networks. 
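Before turning to the evaluation, here is a small sketch of the agent-quality weighting mentioned above. The text defines α as the number of correct decisions and β as the number of incorrect ones and says the reputation "can be calculated using a beta function"; the concrete formula below, the expectation of a Beta(α + 1, β + 1) distribution, is the standard beta-reputation score and is our assumption:
    def agent_weight(alpha, beta):
        # Expected value of Beta(alpha + 1, beta + 1): agents with more
        # correct decisions get more say in the final verdict.
        return (alpha + 1.0) / (alpha + beta + 2.0)

    def weighted_decision(votes):
        # votes: list of (decision in {True, False}, alpha, beta), one entry
        # per redundant agent monitoring the same node.
        score = sum((1 if d else -1) * agent_weight(a, b) for d, a, b in votes)
        return score > 0
With equal weights this degenerates to the plain majority voting also described in the text.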
Attacking recorded data from a testbed does not significantly differ from the simulation. What's more, the available testbeds for wireless sensor networks contain a relatively small number of sensors (100 at most), in which case the data obtained from our simulator are more complex (simply because there are more sensors). For these reasons, we believe that until a testbed for security applications in WSNs appears, a simulator is a better choice for testing security applications. We further evaluated two well-known WSN simulators, ns-2 and Casti.e., nodes can move randomly in any direction towards a random destination, with the restriction of the maximal distance between the current position and the destination. The maximal distance in our case is 20% of the distance between the current node position and the base station in order to avoid the situation where the node ends up at the position of the base station.The network is organized as clusters of close sensors where each group has its cluster head, as is often done in real networks in order to reduce computational overhead and energy consumption. Cluster heads are the only sensors that can participate in the communication between different clusters and also in routing. The mobility model is random-based, i.e., fabricated, or impersonated from other legitimate nodes, i.e., stolen IDs. Added malicious nodes send random values that may or may not coincide with the values sent by the original good nodes. Since in this work we are dealing with unknown attacks, clustering algorithms are trained with data that have no trace of attacks. The performance of the approach when attacks are present during the training can be found in our previous work [In this work we will present the results based on the Sybil attacks , where tous work ,14,16. TThe proposed algorithm has been tested on the presented simulated sensor network that contains 40 sensor nodes that can be placed in 100 different positions. The network simulates a sensor network for detecting presence in the area of application. The groups for spatial characterization are formed in the following way: close sensors that should give the same output are placed in the same group.The duration of the experiment is 1000 time ticks. One time tick in simulator is the period of time required to perform the necessary operations in the network, and it is equivalent to a sampling period, or time epoch in sensor networks. In the following we will present results in different scenarios varying the attack strength.4.2.In order to illustrate the performance of the algorithm, in the first experiment the Sybil node is static (it is added at the position 26) and it has the valid ID of a mobile node. In In the following experiments we will gradually increase the number of compromised nodes, increasing as well the number of attacked mobile nodes. In i.e., achieve 100% detection rate) if up to 52% of the nodes are malicious. It is also important to mention than in all the experiments the false positive rate was 0%.As we can observe, with the proposed approach we can detect the attack if up to 80% of the nodes are malicious, and completely confine it (In 5.In this work we have proposed a machine learning based anomaly detection approach for detecting unknown attacks in wireless sensor networks. We have also proposed a way to integrate mobile nodes in the approach, which is the main novelty of this work. The attacks are treated as data outliers, and we have designed clustering algorithms for outlier detection. 
The algorithms are further coupled with a reputation system, which provides implicit response to attackers, as low reputation nodes remain isolated from the network. Our experiments confirm that the approach is capable of detecting and completely confining attacks that were unknown to the algorithms during the training, with no false positives, and even in the cases mobile nodes have been attacked. We were able to achieve 100% detection rate of up to 52% of the nodes were malicious, and detect the presence of the attack if up to 80% of the nodes are malicious.In the future we plan on broadening the scope of attacks our approach can detect by addressing attacks that compromise mobility patterns, which can make the approach helpful in detecting attacks in cellular networks or in detection of mobile intruders in the monitored area in surveillance applications ."} +{"text": "Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios. These semantic enhancements can be used by other applications running on top of our system to make decisions.Security concerns are key issues in ambient intelligence (AmI) since its earliest inception . Many reFor example, Brumley and Boneh developeThree factors contribute to make security in wireless sensor networks a very difficult problem: 1) many nodes in the network have very limited resources; 2) pervasiveness implies that some nodes will be in non-controlled areas and are accessible to potential intruders; 3) all these computers are globally interconnected, allowing attacks to be propagated step by step from the more resource-constrained devices to the more secure servers with lots of private data.Usually, security issues are addressed, in a similar way to services in a network of general-purpose computers, by adding an authentication system and encrypted communications. First, the resource limitations make the embedded computers especially vulnerable to common attacks.In previous work , we demoApplications built on wireless sensor networks have to live with the fact that privacy and integrity cannot be preserved in every node of the network. This poses restrictions on the information a single node can manage, and also in the way the applications are designed and distributed in the network.Of course, the inherent insecurity of embedded systems should not lead us to not try hard to avoid compromises. We should guarantee that a massive attack can not be fast enough to avoid the detection and recovery measures to be effective. Therefore we should design the nodes as secure as the available resources allow.Redundancy. A wireless sensor network usually has a high degree of spatial redundancy (many sensors that should provide coherent data), and temporal redundancy , and both can be used to detect and isolate faulty or compromised nodes in a very effective manner.Continuous adaptation. 
Wireless sensor networks are evolving continuously, there are continuous changes of functional requirements , nodes appear and disappear continuously and therefore routing schemes change, low batteries force some functionality to be migrated to other nodes, etc.In spite of the disadvantages of wireless sensor networks from the security point of view, they provide two advantages for fighting against attacks:In this article we propose a more secure approach to the design of applications built on a wireless sensor network by exploiting these two properties. In Section 2 we review some of the most relevant previous approaches. Section 3 describes our approach in detail. In Section 4 we review some relevant attacks, the countermeasures that have been proposed previously, the requirements that these threats impose to our design strategy and demonstrates how this approach can detect and confine them. In Section 5, some experimental data is shown and discussed. Finally, in Section 6, we draw some conclusions.2.Many research works have dealt with the problem of security in wireless sensor networks. Butty\u00e1n and Hubaux , 11 propet al. [Marti et al. proposedet al. , this scMany approaches to secure wireless sensor networks use encryption keys, or need user authentication and/or authorization. Stajano and Anderson accept eThe CONFIDANT solution, proposed by Buchegger and Le Boudec is a gooi should give more weight to the direct observations made by it than the evidence obtained from other nodes. Furthermore, the evidence from different nodes should be weighted on the basis of their respective reputations. The beta reputation system [Intuitively, a node n system and recen system base on n system , based oEither because they hold encryption keys or for other reasons, many approaches demand that the nodes be tamper-proof secure . But thiand that the communications between nodes with limited resources are not secure. Our approach compensates those drawbacks by taking advantage of redundancy, temporal and spatial.In an early design stage we decided not to depend on the hardware being tamper-proof. In fact, it is our assumption that it isn't In general, most of the studied architectures provide security (by just preventing attacks or by simultaneously detecting attacks and providing countermeasures) in the routing protocol, at network level. Our security infrastructure is designed for intelligent environments, so it takes advantage of the environment and uses information from the application layer.3.3.1.We focus on the development of secure applications in future wireless sensor networks, where many sensors provide data about observable magnitudes from the environment, and many actuators let the system act on the state of the environment.Data: Symbols. It simply exists and has no significance beyond its existence (in and of itself).Information: Data that is processed to be useful; provides answers to \u201cwho\u201d, \u201cwhat\u201d, \u201cwhere\u201d, and \u201cwhen\u201d questions.Knowledge: Application of data and information; answers \u201chow\u201d questions.Intelligence : Appreciation of \u201cwhy\u201d. 
It is the process by which new knowledge is synthesized from the previously held knowledge.Following the Ackoff taxonomy for the content of the human mind, we classify the content of the \u201cambient mind\u201d into four categories:The main characteristic of an intelligent ambient is the semantic enrichment of environment based on the processing of data obtained from the environment using a sensor network. This \u201cambient mind\u201d enhances the semantics of the environment by adding meaning to the objects. The objects are conscious of the \u201cwho\u201d, \u201cwhat\u201d, \u201cwhere\u201d, \u201cwhen\u201d, \u201chow\u201d, and \u201cwhy\u201d.Data is obtained by sensor nodes, but as they are not trusted, most of the remaining processing should be done in secure servers so that confidentiality attacks do not succeed (note that data has no meaning by itself). Data is sent to servers where it is processed to generate information, and then knowledge, and then understanding, and then new meaning, which is returned back to the environment. Individual nodes may be insecure, but the system should always continue its function of semantic enhancement. Moreover, attacks of individual nodes should not affect the decisions based on data from the environment. These requirements are achieved by perusing redundancy to discard data from the compromised nodes, and by changing the network structure and behavior at a speed that is fast enough to prevent a chained attack to spread.3.2.We consider the network composed of two kinds of nodes: wireless nodes and servers.Wireless nodes. They provide data to the network to enable decisions to be made. In our model, decisions are made primarily in secure servers, and therefore the main task of these wireless nodes is sending data to the servers. The more data is sent to the servers, the more redundancy can be used to discard bad data and to detect failures or intrusions. But also, the more data is sent, the more bandwidth is used and the more energy is consumed, so we have to reach a compromise. There are many wireless nodes in an intelligent ambient, so they have to be inexpensive, what usually means very limited resources, battery-powered, not maintained and hence insecure; an intruder may have physical access to them.Servers. They receive data from sensors and make decisions in order to reach the applications objectives. These decisions may imply to act in the environment and therefore they have to be secure. Servers are usually well maintained, wire-connected and their resources are not usually constrained at all.3.3.at.We assume that servers are secure and reliable. The number of wireless nodes is assumed to be huge compared to the number of servers. Due to being physically accessible and resource-constrained, wireless nodes are considered to be vulnerable. We assume an intruder can seize control of any wireless node in a minimum time There is a working service location system in the network, and it is secure and reliable. This article will not address the problems of deployment and operation of this service. We assume that every node in the network knows how to reach any particular service.As redundancy is good for detecting and isolating attacks, any device providing useful information should be welcomed. Therefore, we assume that new wireless nodes can be added dynamically to our network without any restriction. 
Our architecture should assure that a continuous addition of bad nodes will not affect to the global behavior.3.4.Our approach to the previously described threats is based on leveraging the two weapons that we have to detect and resist to attacks and failures: redundancy , and continuous adaptation. Also, we know that individual wireless nodes are vulnerable to attacks, and therefore no important decision should be made by a single node and no significant information should be stored in a single node.We propose a software architecture based on many independent agents with simple and clear responsibilities. The term agent is heavily overloaded and should be defined more precisely. An agent in our system is an independent piece of software that is able to act on your behalf while you are doing other things (they are proactive), and it does this based on its knowledge of your preferences and the context. This knowledge is stored in servers and it is available to the network nodes through the use of passive services.Individual sensor nodes are not trusted by default, and therefore the notion of trust is built dynamically by comparing a sensor with its neighbourhood. For this reason, every agent that needs to take into account data coming from sensor nodes or any derived information uses a trust-based decision framework that is further described below.Sensor agents are the simplest ones. They usually run on wireless nodes and provide measured data of external variables to the network, by sending messages to their routing agents. The message rate depends on the variation rate of the variable being monitored. This message rate should be enough to ensure that data items do not change too fast and therefore temporal redundancy can be used to detect failures or attacks.Each sensor agent is associated to a sensor device and generates a sequence of measurements:As previously stated, there is not a single routing agent for each sensor agent, and this agent decides randomly what routing agent to use for every message.Although they do not consume data from other sensors, they need to maintain a trust table for their routing elements, that will only evolve with reputation information coming from the servers. Unlike in routing elements, the initial trust value for a routing element is positive, and the distribution of messages is uniform between all the routing nodes with positive trust.Actuator agents operate physically on the environment. They are especially critical because: 1) they are usually not redundant, and 2) any operation on them causes a physical effect on the environment. Therefore the nodes running actuator agents should be at least as tamper-resistant as the physical element they control. To ensure that an intruder cannot operate remotely on an actuator, only servers can send operation requests to these agents and they should use robust asymmetric encryption algorithms. As security and processing requirements are higher, these nodes are usually main powered.The data flow goes from sensors to servers and from servers to actuators. There is no feedback from actuators to servers. So if an actuator is attacked, the assailant will not be able to access the other entities in the network.Logically, an actuator works as a passive service, but it also develops a trust model of its environment, which is fed to the servers.Aggregation agents reduce the redundancy by combining several data items using a known aggregation function. 
The only reason to apply these aggregations is to reduce the amount of data sent to the servers, allowing the system to scale. Trust computation implies also an aggregation of spatial and temporal redundant data that is held in a node.Services are passive elements that can be used by other nodes in the network. They usually run in servers. Some of the services that have important roles for security reasons are: object tracking system, user tracking system, user modeling system, and common sense database.3.5.et al. in [We follow the definitions and beliefs of Boukerch t al. in concerniTo consider a data item to be valid we use two consistency tests. The data item is said to be s-consistent or consistent with the spatial redundancy if it is consistent with the data provided by the majority of sensors that provide measurements of the same variable. For example, for a presence event from a PIR detector to be valid, the majority of nodes monitoring the same area should also detect presence. In this evaluation every sensor is weighted with the trust value the receiving node has about the source node.A second way to discard bad data is to evaluate each data item against temporal data redundancy. Each routing element stores a limited set of previous values for each variable directly routed through itself. The data item is said to be t-consistent if the variation against previous data is normal for that variable. For example, if a temperature value changes drastically and it is not maintained during some time, maybe a routing element has been attacked.vd(t) is the value of the variable v built from neighbor measures and the previous trust value (i\u03c4(t - 1)). i\u03c4(t) symbolize the trust value of the evaluating node on node i. v provided by node i that are stored in this node . vA is an aggregation function that depends on the variable being measured, and it does not take into account data coming from a node with negative or zero trust value. T is also an aggregation function with these properties:i\u03c4(t - 1) is negative, the data item is discarded and no further processing is done for this message .If vid(t) is s-inconsistent and t-inconsistent, it is stored in the local history , but it is not taken into account for trust recalculation.If the new data element If it is s-inconsistent with other sensors' data but t-consistent with previous values of the same sensor, trust on sensor i decreases.If it is s-consistent and t-consistent and current trust is positive, trust increases.Both properties, s-consistency and t-consistency, are dependent on the variable being measured. To model trust and reputation in our agent system, every node in the network maintains a trust table with entries for every relevant neighbor node. When a new node is discovered, the initial trust value is 0. Whenever a new message containing a new measurement of the external variable v arrives, trust on node i is recalculated as follows:As can be seen, trust computation condenses historical information, and therefore it is bad, as we lose redundancy. On the other hand, resources are tightly constrained and we have to reduce storage requirements to a minimum.To avoid some attacks, temporal disappearance means loss of positive trust (not negative). Whenever it appears again, it will get a 0 trust value. 
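A minimal sketch of this per-message trust update, following the four rules just listed; the increment size and the consistency tests themselves are placeholders, since the paper leaves the aggregation functions A and T abstract:
    def update_trust(trust, s_consistent, t_consistent, delta=0.05):
        # trust: current trust value held about the sending node, in [-1, 1]
        if trust < 0:
            return trust, False           # message discarded, no further processing
        if not s_consistent and t_consistent:
            trust -= delta                # disagrees with neighbours, not with itself
        elif s_consistent and t_consistent and trust > 0:
            trust += delta                # fully consistent: trust increases
        # s-inconsistent and t-inconsistent: kept in the local history only,
        # not taken into account for trust recalculation
        return max(-1.0, min(1.0, trust)), True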
There is a second method to feed trust values back from redundancy analysis: reputation messages from the servers.viH represents all the history of data values of the variable v provided by sensor i, and R is another aggregation function. Well-behaved nodes increase their reputation; the reputation of bad-behaved ones decreases. Multiple agents can be running on the trust servers to look for attack evidences in the message history, and proactively reduce reputation values of suspect nodes.From time to time, nodes communicate their trust tables to the servers. This is done at the routing level by adding this trust information to messages that are being sent to the same destination. Servers are not resource constrained by assumption, and therefore they can store all the historical information for future analysis. The adequate combination of all the trust data of a zone generates the global reputation data:Whenever a server decides that it has to act in the environment by modifying trust values for ill-behaved nodes, it broadcasts the reputation information of all the nodes in that zone. This message is repeated from time to time until the data the server receives from that zone is consistent with the global reputation information.A wireless node will never take into account this reputation information unless it has been received from different routers (cluster heads). Thus, redundancy in routing paths and trust merging in secure servers allows us to feed good and bad reputation back to the network without being vulnerable to bad mouthing attacks.The trust data sent to the servers is enough to detect most, if not all, common attacks. However, it is not enough to find the concrete faulty or compromised node, and therefore the servers would not be able to confine the attack. The solution we propose is to include the routing path in some of the messages. This way, by analyzing the paths of messages with t-consistent and s-consistent data it is easy to discard well-behaved nodes. Note that routing paths coming from a compromised node could have been faked. The confinement agents act directly by decreasing the reputation values of the suspect nodes.A number of parameters see can be d3.6.In order to improve network scalability and throughput, we use a clustering technique based on Random Competition based Clustering (RCC) to constn nodes capable of transmitting at Wbits/s, according to [T, for each node under optimal conditions is:For a wireless network with rding to , the thrn and the number of clusters is m, the throughput in the lower level becomes:Thanks to the clustering approach, in a two-level mobile backbone network where the number of nodes is Node clustering, however, reduces redundancy and introduces single points of failure, as an intruder could control a whole zone by attacking its cluster head. The solution we propose is to introduce redundancy again. Every node in the network will have several cluster heads and will distribute messages randomly between them. This additional redundancy does not reduce the maximum throughput because at any given time the network structure is exactly the same as in the pure RCC scheme.Of course, no node will ever select an untrusted cluster head. On the other hand, the s-consistency check required by the mechanism of reputation sharing would not be feasible without the non-determinism in the message paths introduced by the routing protocol. 
Therefore, the trust framework and the routing protocol cooperate in order to minimize the threats.It may be argued that for every node to have two cluster heads, we need to double the backbone nodes so that there are twice as much backbone nodes in the coverage area. While it is true that more nodes have to belong to the backbone, this does not imply any reduction of the attainable throughput, as at any given time half the backbone nodes will not be used as such, and therefore the network structure remains exactly the same as in the pure RCC case. On the contrary, the burden of routing backbone messages is more distributed and therefore the penalty in energy consumption of being a cluster head is significantly reduced.4.Nodes of a sensor network need to access, store, manipulate and communicate information. In AmI, nodes make decisions based on received data. Therefore, the system must guarantee data reliability. Some applications will require the use of sensitive information. In that case, measures to ensure data confidentiality should be taken into account. In this section, we will analyze the different kinds of attack that a sensor network is exposed to. The next sections classify the different threats attending to their primary focus.4.1.Attacks on the confidentiality of communications.Attacks on the confidentiality of node information .Confidentiality attacks attempt to access to the information stored in the sensor network. They can be further classified attending to the target of the attack:Nodes have very limited resources.Potential intruders may physically access to them.Wireless communications.In a closed system with high-resources devices, information can be protected using cipher algorithm and physical access control. However, sensor networks are more vulnerable due to their characteristics:The network can use well-suited cipher algorithms to proviDue to the characteristics of the sensor nodes, it is not possible to secure its data against attacks. Even if we cipher the information in the devices, an attacker could use an approach based on logical and physical attacks that could break the ciphering. Since attackers have physical access to the nodes and nodes have limited resources, confidentiality should be based in the main characteristics of sensor networks: distribution and redundancy.In this kind of attack, the intruder accesses to the information stored in a sensor. If the attack succeeds, the attacker will obtain the information stored in it, but it is only raw data, not significant by itself. In addition, mapping that information with a concrete user is impossible because mapping information is stored in servers or distributed among a very large number of nodes. While the number of nodes holding some particular information remains much higher than the number of attacked nodes, attackers will not be able to obtain meaningful information.These agents do not store other information than the status of the physical device they control and the trust table for its routers.By attacking an aggregation agent or a node that runs an aggregation agent, an intruder may gain access to redundant local raw data, but anything else. Redundant data is useful to discard bad data, but it gives no extra information.They run in servers, which are not physically accessible, and have enough resources to keep the information secure.vi(t). 
By definition, that set will not represent any meaningful information, so the attack will fail.In this attack, an intruder listens to the channel trying to obtain some information. Due to sensor redundancy and information distribution, the attacker should break all communications between sensors and routers to obtain some significant information. The use of some ciphering algorithms will help protecting the system. Since the network is big enough, an attacker that listens to the channel will obtain only a set of d4.2.Jamming, collision and flooding: These attacks consist in interfering in communication by sending messages through several protocol layers. The immediate effect of these attacks is the loss of part of the messages from the nodes of the affected area. The affected area depends on the layer in which it occurs. The upper the attack occurs on the protocol stack, the more it spreads. So the scope of these attacks could be zone or global depending on their dimension and the layer where they occur . Wood and Stankovic [tankovic explain Neglect and greed: This simple form of DoS attack focus on a router vulnerability by arbitrarily ignoring all or some messages. It is especially dangerous in environments using hierarchical routes and static routing protocols. A possible solution would be a routing protocol with several paths available [vailable .Misdirection, blackholes and wormholes [ormholes : These aormholes , howeverA Denial of Service (DoS) attack is an attempt to interrupt, disrupt, or destroy services and operations in a system, which includes:Now, we will show how our system can detect and confine the denial of service attacks.Whether it is jamming, collision or flooding, the effects in the network are similar: loss of messages and node disappearance. The seriousness and extension of the attack depends on the number of nodes, the stack layer where it takes place and several other parameters. Nevertheless, it leads some nodes to disappear. As no new value from these nodes arrives to the routers, as trust tables are sent to the servers, the global trust service will soon discover that the latest values coming from these nodes are obsolete and it will mark them as lost.The detection of the attack can be performed when a group of nodes in the same area disappears suddenly. If a node with positive reputation disappears temporally its reputation will be decreased. This measure will also affect directly to the routers in the area. Therefore, a message will not be sent through an affected router, avoiding the zone.Flooding attacks could be more dangerous if messages are scattered and the whole network is affected. But if the reputation of a faked node is decreased, its forwarded messages will not be routed and, therefore, harm will not spread.A router may neglect to route all or some messages, but every node has two or more routers that are used randomly, and so eventually the messages will arrive to the destination.Some of the messages include their own route, and the servers analyze the routes of consistent messages to find out the routers which do not route properly. 
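A sketch of that server-side route analysis, assuming some messages carry their full routing path (the helper names are ours): routers seen on the path of at least one s- and t-consistent message are exonerated, and the remainder become suspects.
    def suspect_routers(all_routers, consistent_paths):
        # consistent_paths: routing paths of messages whose data passed both
        # the s-consistency and t-consistency checks.
        exonerated = set()
        for path in consistent_paths:
            exonerated.update(path)
        return set(all_routers) - exonerated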
A feedback of negative reputation for these routers will cause messages to follow other routes avoiding these malicious routers.Local attacks can get worse if the compromised node stops routing properly, changes the values notified by some sensors, or teleports messages to other area of the network.A combined use of localization information (object tracking system), and route analysis for messages coming from the same area (redundancy in routing elements will ensure that not every message will go through the wormhole), allows to discover easily the bad routers. There are some proposals similar to this one, like in , 38 wherAgain, once the malicious routers have been detected, it is possible to confine the compromised nodes by decreasing their reputation. If a router has a low reputation it will be probably not chosen for routing messages. And redundancy in routing elements ensures that the new reputation table will eventually arrive to any node in the network.Trust tables going from the sensor nodes to the servers and reputation tables coming back from the servers can also be altered by a compromised node, but redundancy again allows discarding bad messages.4.3.Integrity attacks try to alter the normal behavior of the system by modifying the data stored in nodes. Although DoS attacks can be considered as integrity attacks as service interruption is one kind of bad behavior, we prefer to treat them separately because here the focus is on the data, instead of the communications.These attacks are very difficult to avoid due to the weakness of wireless nodes. But these are clear cases of local attacks. Local or node attacks are not relevant for our network model, since redundancy allows losing nodes without any impact in the behavior. Negative reputation can be used from the servers in order to confine these attacks. Even if integrity of individual nodes is difficult to achieve, the use of redundancy can reduce or eliminate the impact on the global system.4.4.Malicious nodes can pretend to be other nodes in order to implement one of the attacks mentioned above. We will consider four different types: clone, thief, mole and sybil.clone attack consists in duplicating an operating node. Both nodes, simultaneously, communicate with the same identity.The thief attack, a malicious node steals an operating node its identity and replaces it in the network. The malicious node stops original node's operation and takes advantage of its reputation and trust levels.In the mole is a malicious node that behaves as a well-operating node, with a fabricated identification, to achieve high levels of trust and reputation. Once inside, it can attack the system from a privileged position. A variation is the on-off attack, where the malicious node behaves well and badly alternatively, in order to maintain a high average level of trust.A sybil attack occurs when a malicious device presents multiple identities, as if it were multiple nodes, in order to control a substantial fraction of the system. This attack reduces the effect of the system's redundancy without the need of numerous physical nodes. The attacks can be performed at any layer of the protocol stack, but they are more profitable in the upper layers, like network or application.The The first three attacks are carried out by individual malicious nodes, and they can be considered special cases of the sybil attack. The sybil attack was first introduced in . NewsomeResource testing solutions assume that devices are limited in some resource . 
The solCryptography schemes base their efficiency in secure communications, and the different solutions differ in how to establish the keys: the key agreement process. They can have a key server with the public key of all nodes, and only establish a key through the key server. Another scheme uses the self-enforcing scheme approach, based on asymmetric cryptography with public key. Efficient implementations of Elliptic Curve Cryptography (ECC) Cipher Suites can be used in sensor networks to establish secure links, but it is not enough to avoid the sybil attack, because a malicious device may have more resources than the normal nodes. The third key agreement mechanism is key pre-distribution scheme \u201344. In tLocation based solutions , 46, cheClone, thief and mole attacks use only one identity, so their effect is the same as compromising a sole node. It is proved, as shown in previous sections, that the system adapts to individual attacks. If the node's behavior is consistent with the other nodes, the attack is undetectable, but the information obtained is not significant. In the clone attack the system can detect that the same identity is being used in two different locations, so the server would reduce the reputation of both nodes.On the other hand, the sybil attack can be dangerous to the system because it reduces the effect of the system's redundancy. Our architecture solves the sybil attack problem by reducing its attack rate. When an aggregation agent receives information from an unknown node, the trust level default value is zero. This is enough to send data from this node to the servers to collect behavior history, but not enough to be taken into account in any decision or aggregation. If the node behaves correctly, its reputation will grow eventually, but always at a controlled rate. If many sensors are appearing in a short time in the same area, the required time to have positive reputation will increase.5.The proposed architecture has been simulated extensively to evaluate its behaviour in presence of attacks of very different nature.rN.The most common attacks are detected and confined immediately with no other effect in the surroundings. Ill-behaved nodes will never get high reputation, but even for mole attacks, an attacker would need to add at least as many nodes as there are in the attacked area in order to have any influence in the decision. But even in that case, it would be easily detected by software agents analyzing the servers data. In our case, we use self-organizing maps (SOM) and genetic algorithms to detect anomalies in the system behavior. These agents can immediately confine the attack by changing the global reputation of the misbehaving nodes and correcting the affected neighbours'. As routes are non-deterministic, attacks to routing elements only delay the response time of the system by 1/One of the most significant results is the behavior of the system when a compromised node tries to impersonate many existing sensors (a sybil attack). It is noteworthy that our algorithm allows very fast confinement of the attack, by reducing immediately the reputation of the neighbor elements. In our system, trust information is not shared directly between the sensor nodes, it is sent to the reputation server. Therefore, our trust framework is not vulnerable to attacks based on an inconsistent behavior in the time domain , or the user domain (conflicting behavior attack). 
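The controlled reputation growth for newly appearing identities described above could be realised, for instance, by scaling the reputation increment by the number of recent arrivals in the same area; the scaling rule in this sketch is our assumption, not spelled out in the text:
    def reputation_increment(base_inc, new_nodes_in_area):
        # Unknown identities start at trust 0 and earn reputation at a rate
        # that shrinks as more new identities appear in the same area within
        # a short time, so a sybil flood lengthens the time each fake
        # identity needs to reach a positive, usable reputation.
        return base_inc / (1.0 + new_nodes_in_area)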
The attacked node can not influence directly on its neighbors unless there is a majority of badly-behaved nodes with high trust levels.The node reputation is the only trust-related information that is shared by the network. This reputation is elaborated by the reputation server, which has more information than any individual node, and also has more resources to avoid local attacks. Attacks to the reputation messages, or even the generation of these messages, is avoided by the multiple routing paths, and because no information is trusted unless it is t-consistent and s-consistent.6.Wireless Sensor Networks are based on many wireless, low-cost, low-power, and low-resources nodes. These characteristics and the possibility to access physically to the nodes make them highly vulnerable to attacks. Cryptography appears as clearly insufficient to maintain data confidentiality and integrity in the network.We have proposed a holistic solution that assumes this node vulnerability to address security issues in an intelligent ambient based on massive wireless sensor networks.Redundancy and fast continuous adaptation have been identified as the key weapons to defend the system against attacks, and they are used consistently to cope with security issues at different levels.The proposed architecture is based on an agent system with supporting services. Data flows from the sensors to the servers, where it is processed returning relevant semantic enhancements back to the environment. Agents running in insecure wireless nodes never hold a significant information unit, what preserves global confidentiality, and decisions are made in servers, what preserves integrity if redundancy is used adequately.Most attacks are detected by the analysis of the redundant data available locally in every routing element and globally collected in the servers. Decisions at different levels are supported by a trust-based framework where trust data only flows from the sensors to the servers and reputation only from the servers to the sensors. Non-deterministic routes allows to detect and confine misbehaving routers.The resulting approach takes into account practical issues, such as resource limitation, bandwidth optimization, and scalability. Based on these results we claim that our approach provides a practical solution for developing secure applications on top of wireless sensor networks."} +{"text": "The reliable operation of modern infrastructures depends on computerized systems and Supervisory Control and Data Acquisition (SCADA) systems, which are also based on the data obtained from sensor networks. The inherent limitations of the sensor devices make them extremely vulnerable to cyberwarfare/cyberterrorism attacks. In this paper, we propose a reputation system enhanced with distributed agents, based on unsupervised learning algorithms (self-organizing maps), in order to achieve fault tolerance and enhanced resistance to previously unknown attacks. This approach has been extensively simulated and compared with previous proposals. 
The use of current SCADA (Supervisory Control And Data Acquisition) systems has already come into question as they are increasingly seen as extremely vulnerable to cyberwarfare/cyberterrorism attacks , 2.In particular, security researchers are concerned about: (1) the lack of concern about security and authentication in the design, deployment and operation of existing SCADA networks; (2) the mistaken belief that SCADA systems have the benefit of security through obscurity through the use of specialized protocols and proprietary interfaces; (3) the mistaken belief that SCADA networks are secure because they are purportedly physically secured; and (4) the mistaken belief that SCADA networks are secure because they are supposedly disconnected from the Internet.The security of these SCADA systems is important because compromise or destruction of these systems would impact multiple areas of society. For example, a blackout caused by a compromised electrical SCADA system would cause financial losses to all the customers that received electricity from that source.There are two distinct threats to a modern SCADA system. The first is the threat of unauthorized access to the control software, be it human access or changes induced intentionally or accidentally by virus infections and other software threats residing on the control host machine. The second is the threat of packet access to the network segments hosting SCADA devices. In many cases, there is rudimentary or no security on the actual packet control protocol, so anyone who can send packets to the SCADA device can control it. In many cases SCADA users assume that a VPN is of sufficient protection and are unaware that physical access to SCADA-related network jacks and switches provides the ability to totally bypass all security on the control software and fully control those SCADA networks. These kinds of physical access attacks bypass firewall and VPN security and are best addressed by endpoint-to-endpoint authentication and authorization commonly provided in the non-SCADA world by in-device SSL or other cryptographic techniques. But encryption is not enough for sensor devices because resource restrictions would prevent strong encryption schemes, and low-cost side-channel attacks would stThe increased interest in SCADA vulnerabilities has resulted in vulnerability researchers discovering vulnerabilities in commercial SCADA software and more general offensive SCADA techniques presented to the general security community , 5. In eBut most of the risks come from the limitations of the sensor nodes: (1) many nodes in the network have very limited resources; (2) pervasiveness implies that some nodes will be in non-controlled areas and are accessible to potential intruders; (3) all these sensor nodes and controlling computers are globally interconnected, allowing attacks to be propagated step by step from the more resource-constrained devices to the more secure servers with lots of private data.Ciphers and countermeasures often imply a need for more resources , but usually this is not affordable for this kind of applications. Even if we impose strong requirements for any individual node to be connected to our network, it is virtually impossible to update hardware and software whenever a security flaw is found. 
The need to consider security as a new dimension during the whole design process of embedded systems has already been stressed , 6, and Applications built on sensor networks\u2014SCADA systems being no exception\u2014have to live with the fact that privacy and integrity cannot be preserved in every node of the network. This poses restrictions on the information a single node can manage, and also in the way the applications are designed and distributed in the network.Of course, the inherent insecurity of embedded systems should not prevent us from striving to avoid compromises. We should guarantee that a massive attack can not be fast enough to escape the detection, isolation, and recovery measures. Therefore we should design the nodes as secure as the available resources would allow.In spite of the disadvantages of sensor networks from the security point of view, they provide one important advantage for fighting against attacks: redundancy. A sensor network usually has a high degree of spatial redundancy (many sensors that should provide coherent data), and temporal redundancy , and both can be used to detect and isolate faulty or compromised nodes in a very effective manner.In previous work , we propIn Section 2. we review some of the most relevant previous approaches. Section 3. describes our approach in detail. In Section 4., some experimental data and algorithms description is shown and discussed. Finally, in section 5., we draw some conclusions.2.The problem of security in sensor networks has been widely dealt with by researchers. The classic approach to security in these networks consists in adding an authentication system and encrypting the communications. However, in our opinion this approach cannot be considered secure. Almost every node in sensor networks has very limited resource, so the authentication or encryption algorithms that it uses cannot be complex. Another issue to consider is that updating these algorithms is very difficult in case security failures arise. Finally, nodes in these networks are usually within reach of the attacker, so a large number of side channel attacks can be carried out , 11 to oet al. , y \u2190 Y[j]\u2003\u2003if((y = NIL)\u2016(word[x] = word[y])) :\u2003\u2003s \u2190 s + d\u2003\u2003\u2003i \u2190 (i + 1)\u2003\u2003\u2003elseif((x = NIL)or(word[x] > word[y])) :\u2003\u2003s \u2190 )\u2003\u2003\u2003j \u2190 (j + 1)\u2003\u2003\u2003else :\u2003\u2003s \u2190 )\u2003\u2003\u2003i \u2190 (i + 1), j \u2190 (j + 1)\u2003\u2003\u2003returns\u20033.4.As mentioned before, we treat attacks as data outliers. There are two possible approaches for detecting outliers using SOM algorithm depending on the following two possibilities: detecting outlying nodes or detecting outlying data that belong to non-outlying nodes. For the first case, we calculate the average distance of each node to the rest of the nodes (or its closest neighborhood) (MD). The nodes whose MD values are significantly bigger than the rest are declared to be outlying nodes. In the later case, we calculate quantization error (QE) of each input as the distance from its group centre.i.e., from the medium QE of the node established during the training it is considered to be the proof of the anomaly of the current input.Hence, a node whose average distance is greater than those of the rest of the nodes is considered to be outlying node and all the inputs that belong to it are considered to be anomalies. 
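A compact sketch of the two outlier tests described above, for a trained SOM given as an array of unit weight vectors (NumPy only; the thresholding factors are illustrative, not taken from the paper):
    import numpy as np

    def quantization_error(x, units):
        # QE: distance from input x to its best matching unit (group centre)
        d = np.linalg.norm(units - x, axis=1)
        return d.min(), int(d.argmin())

    def mean_unit_distance(units):
        # MD: average distance of each unit to all the others; units far
        # from the rest indicate outlying nodes
        diff = units[:, None, :] - units[None, :, :]
        return np.linalg.norm(diff, axis=2).mean(axis=1)

    def is_outlier(x, units, median_qe, md_factor=2.0, qe_factor=2.0):
        qe, bmu = quantization_error(x, units)
        md = mean_unit_distance(units)
        if md[bmu] > md_factor * np.median(md):
            return True    # the input maps to an outlying unit
        if qe > qe_factor * median_qe[bmu]:
            return True    # unusually far even from a normal unit
        return False
Here median_qe would hold the per-unit median QE established during training, as the text describes next.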
On the other hand, even if the node to which the current input belongs is not outlying, e.g., because the outlying data are too sparse for the formation of outlying node(s) to be possible, a QE value significantly greater than the rest of the QE values of the same node (i.e., than the medium QE of the node established during the training) is considered proof of the anomaly of the current input.
This proposal offers several important advantages. On one hand, we avoid the time-consuming and error-prone process of collecting only normal data for SOM training, which would otherwise be needed to establish a normal behavior and detect attacks as deviations from it. On the other hand, we could train the SOM with all the collected data and then label the resulting clusters in a certain way: for example, declare the cluster to which most of the data belong the "normal" one and the rest intrusive, or measure intrusiveness as the distance from the normal cluster. But that approach has many flaws, as it can happen that a new intrusion is more similar to the normal data than any of the intrusive ones, so it would falsely be labelled as normal, while our QE measurement is more likely to work in this case.
Every node is examined by an agent that resides on a node in its vicinity and listens to its communication in a promiscuous manner; the agent executes the SOM algorithm in order to detect traces of an attack. The system of SOM agents is coupled with a reputation system where each node has a reputation value that reflects the level of confidence others have in it, based on its previous behavior. In our proposal, the output of the SOM agent affects the reputation system by assigning lower reputation to the nodes where it detects adversarial activities and vice versa. We further advocate coupling the information provided by the reputation system with the routing protocol, so that nodes with low reputation are not considered as a routing hop and all information coming from them is discarded. In this way, a compromised node remains isolated from the network and has no role in its further functioning. Considering that an attacker who has taken over a node can disable or compromise the SOM agent, we introduce agent redundancy: at least two SOM agents examine the behavior of each node and both affect its reputation.
This approach has two further advantages. First, the SOM algorithm does not use reputation values from the neighborhood, which makes it robust to the badmouthing attack, the main problem of reputation systems. Second, it can build the model of the data for training and testing on the fly, so it is not necessary to provide storage for great amounts of data.
3.5.
We limit the reputation values to the range [0, 1], where 0 is the lowest possible value, meaning that there is no confidence in the node, and 1 the highest possible, meaning absolute confidence in the node.
We define two reputation values, repQE and repMD, based on the previously defined QE and MD values. For MD, repMD = 1 - anoScMed/maxMD, where maxMD is the maximum median distance for the current lattice and anoScMed is the MD value of the best matching unit of the current input; in this way, repMD takes values between 0 and 1, and the nodes that are close to the rest have higher reputation and vice versa. Regarding the QE value, during the training we calculate the median QE for all the nodes in the corresponding SOM lattice. In the testing process, we calculate the QE of the current input and compute repQE as the ratio of the current QE to the median QE of its best matching unit node. Finally, following the intuitive reasoning above, the current reputation is calculated in the following way:
    If (repMD < 0.5) : rep = repMD ;
    Else : rep = repQE ;
where we take 0.5 as the threshold because it is the middle value between 0 and 1. The reputation of the node is then updated using the function x + log(0.99x), applied to the cumulative value cumQE (cumulative QE). If the final value is greater than 1, we truncate it to 1, and in a similar fashion, if it is lower than 0, we truncate it to 0.
4.
The proposed architecture has been simulated extensively to evaluate its behavior in the presence of attacks of very different natures. Several algorithms have been implemented in the reputation server to calculate the reputation of the sensors and compare their performance: a linear algorithm, a beta-function algorithm, and SOM. The comparison uses four metrics:
Detection time. The elapsed time from the start of the attack until it is detected, i.e., until the reputation of ill-behaved nodes starts decreasing.
Isolation time. The elapsed time from detection until the reputation of every attacker node gets below a threshold; the nodes with reputations below this threshold are not considered for decision making.
Isolation capacity. The portion of ill-behaved nodes that are detected as attackers.
System degradation. The portion of well-behaved nodes detected as attackers.
One of the most representative identity attacks is the sybil attack, due to its aggressiveness and elusiveness; by varying its behavior it also subsumes most other identity attacks. The results below have been obtained by simulating a scenario of 2,000 nodes with a sybil node that attacks with 800 identifiers. The sybil node is located in the center of the scenario, while the reputation server is located at the origin. The system works normally until the 1000th iteration, when the sybil attack is launched.
The experiments have revealed that the detection time is extremely similar for the three algorithms, since the attacks are detected almost immediately. With the linear algorithms, the reputations of the impersonated nodes and of the attacker's neighbors are also degraded; it is noteworthy that the SOM algorithm allows very fast confinement of the attack while affecting those nodes much less. The attack penetrates the system during the isolation time and, after that period, it is confined; the height of the false-positive curve during this interval indicates the depth of the attack in the system. The capacity of confinement can be measured by the number of false positives in the stationary period, which is zero in every case.
From the number of false positives we can determine the impact of the attack, a magnitude that takes into account both aspects of the attack, its duration and its depth: the impact I corresponds to the area under the false-positive curve, i.e., the sum over the simulation of fptP, the percentage of false positives at instant t, for t = 1, ..., sT, where sT is the simulation time. The system degradation D is given by a ratio of attsTN, the number of ill-behaved nodes at the end of the simulation (sT), and fpsTN, the number of well-behaved nodes considered as ill-behaved ones.
The system degradation is optimal when its value is 1. Since node trusts have no influence on the SOM algorithm, it should be more resistant to reputation attacks like badmouthing, and this experiment confirms that this is fulfilled. The experiment entails a badmouthing sybil node that attacks the system. The attack decreases the reputation of a group of nodes surrounding the attacker; when their reputation is low enough, the information from the sybil identifiers will be used for the decisions. The trust of the neighbors about the attackers will not be taken into account due to their low reputation and, hence, a purely trust-based scheme would not confine the sybil. However, the SOM agents do not rely on those trust values, so the attack is still detected and confined. The scenario of the experiment consisted of 180 badmouthing nodes in a small region which, after a while, turn into a sybil node that adopts the identities of the badmouthing nodes.

5.

Many critical infrastructures are monitored with SCADA systems that process data obtained from a heterogeneous sensor network. SCADA sensor networks are usually composed of many embedded systems with severe resource limitations; together with the possibility of physical access to the nodes, this makes them highly vulnerable to cyberwarfare/cyberterrorism attacks. Cryptography alone is clearly insufficient to maintain data confidentiality and integrity in the network. We have proposed a holistic solution that assumes this node vulnerability and addresses security issues in sensor networks by exploiting redundancy at different levels. The proposed architecture is based on a reputation system that supports decisions at different levels. It is a trust-based framework where trust data flows only from the sensors to the servers and reputation only from the servers to the sensors. The reputation is also affected by independent agents with a broader view of the global network, which use unsupervised learning algorithms. We have demonstrated the effectiveness of this approach with the implementation of anomaly detectors based on self-organizing maps and immune systems. We have compared the behavior in the presence of common attacks with more traditional reputation algorithms, obtaining similar detection time, similar isolation capacity, faster confinement, and highly reduced attack impact even for low redundancy in the sensor network. More importantly, this is achieved with no previous knowledge about the attacks being performed, and the approach is more resistant to attack variations, as shown in the badmouthing-sybil combined attack. The resulting approach takes into account practical issues, such as resource limitation, bandwidth optimization and scalability, and it is also well suited to scenarios with low redundancy. Based on these results we claim that our approach provides a practical solution for developing more secure SCADA applications."} +{"text": "Roseofilum). Here, we optimise existing protocols for the isolation and cultivation of Roseofilum cyanobacteria using a new strain from the central Great Barrier Reef. We demonstrate that the isolation of this bacterium via inoculation onto agar plates was highly effective with a low percentage agar of 0.6% and that growth monitoring was most sensitive with fluorescence measurements of chlorophyll-a (440/685 nm). Cell growth curves in liquid and solid media were generated for the first time for this cyanobacterium and showed the best growth rates for the previously untested L1-medium. Our results suggest that the trace metals contained in L1-medium maximise biomass increase over time for this cyanobacterium.
Since the newly isolated Roseofilum strain is genetically closest to Pseudoscillatoria coralii, but in terms of pigmentation and cell size closer to Roseofilum reptotaenium, we formally merge the two species into a single taxon by providing an emended species description, Roseofilum reptotaenium (Rasoulouniriana) Casamatta emend. Following this optimized protocol is recommended for fast isolation and cultivation of Roseofilum cyanobacteria, for growth curve generation in strain comparisons and for maximisation of biomass in genetic studies.

Black band disease (BBD) is a common disease of reef-building corals with a worldwide distribution that causes tissue loss at a rate of up to 3 cm/day. Critical for a mechanistic understanding of the disease's aetiology is the cultivation of its proposed pathogen, filamentous cyanobacteria (genus Roseofilum). Coral diseases contribute to coral mortality and to the decline of reefs worldwide. Besides the dominant cyanobacteria, the BBD microbial mat harbours Desulfovibrio bacteria, Cytophaga, Alphaproteobacteria and a range of other heterotrophic microbes.

Black band diseased coral colonies were collected in 3 m seawater depth at Orpheus Island (S 18-34.609/E 146-29.793) in June 2013 (GBRMPA permit G14/36788.1), transported to the Australian Institute of Marine Science and maintained in outdoor aquarium systems at 27 °C with shaded, natural sunlight and flow-through seawater supply. The isolation of the BBD associated cyanobacteria started during the days immediately following collection. BBD mat material was suspended in autoclaved seawater by pipetting the slurry up and down with a sterile 1 mL plastic transfer pipette and centrifuged at 3,000 g for 3 min to select and clean the BBD cyanobacteria. The supernatant, containing the majority of other mat associated bacteria, was discarded and the cyanobacterial pellet resuspended in autoclaved seawater. The cyanobacterial pellet was inoculated onto an agar plate to separate cyanobacterial filaments from other contaminating microbes and incubated under sideways unidirectional light (50–80 µE m−2 s−1 light intensity) for 6 h at 30 °C. Motile filaments that had glided away from the point of inoculation were excised together with the surrounding agar using a sterile scalpel blade in a biosafety cabinet before being transferred to a fresh, solid agar plate. This cleaning step was repeated twice under the previously described incubation conditions. Subsequently, a liquid culture was established by transferring an approx. 2 cm2 agar piece containing a high density of cyanobacteria into freshly prepared medium. After genetic identification of the culture (details below), single cyanobacterial filaments were selected for the establishment of a monoculture as follows: (a) from liquid medium under an inverted microscope and with a micro-pipette; (b) from agar with a stereo-microscope and sterile scalpel. These cultures were grown in liquid L1 medium under a 12 h light and dark cycle at 30 °C with 50–80 µE m−2 s−1 light intensity and subcultured as required.

The V1–V9 region of the 16S rRNA marker gene was amplified with the primers 27f and 1492r, and the resulting sequences were compared with entries in GenBank (www.ncbi.nlm.nih.gov). Sequence variations and the close association with other cyanobacteria of the clade Roseofilum were visualised in a maximum likelihood tree generated in MEGA5.

Approximately 10 mg of exponentially growing cyanobacterial filaments were taken from a liquid culture and centrifuged at 10,000 g for 5 min at 4 °C to pellet the cells (Eppendorf Centrifuge 5430R). The pellet was subsequently resuspended in 1 mL phosphate buffer (0.1 M) and disrupted by freeze and thaw cycles. Cell debris was pelleted at 10,000 g for 10 min at 4 °C and the supernatant (filtered 0.45 µm) analysed for phycobiliprotein absorbance spectra on a spectrophotometer.

Cyanobacterial growth on solid medium was compared among three agar concentrations. In brief, 250 mL of fresh seawater was mixed with bacteriological agar, autoclaved in a 500 mL Schott bottle, cooled to approx. 40 °C and enriched with L1-medium. Filament numbers per plate were estimated by averaging filament counts along six radial, equally spaced line transects and extrapolating the numbers to the overall petri dish area.

Growth in liquid culture was followed via time series measurements in 24-well plates (2 mL in each well) using three approaches: (1) optical density (OD) at 750 nm as a pigment-independent measurement; (2) fluorescence of chlorophyll-a (440/685 nm); and (3) percent coverage of the well bottom. Cyanobacteria at different growth stages were pelleted at 5,000 g for 5 min, dried overnight at 60 °C and weighed to establish the correlations between biomass and OD 750 and between biomass and fluorescence. Percent coverage values were calculated from images taken of the well bottom with an inverted microscope at standardised settings. The pixel count of cyanobacterial filaments was averaged from five images per well and expressed as percent coverage. Although cyanobacterial filaments were growing on the well bottom and in suspension, the coverage of the well bottom was taken as a proxy for the overall growth.

Cyanobacterial growth was compared among four different media for culture optimisation: ASNIII, L1, F/2, and IMK (detailed recipes in the supplemental information). A dense stock culture was divided into four equal parts and the filaments pelleted at 3,000 g for 3 min. The supernatant was discarded and each pellet was resuspended in 10 mL of freshly prepared growth medium: ASNIII, L1, F/2 and IMK, respectively. The fresh cyanobacterium stocks were distributed randomly into two 24-well plates and incubated at 30 °C in a 12 h light cycle at 50–80 µE m−2 s−1 (PAR) with shaking at 30 rpm. Cyanobacterial growth curves were assessed by converting fluorescence measurements over time (440/685 nm) into dry weight and by calculating growth rates, k = (log10(Xi) − log10(X0)) / (log10(2) × t), as well as doubling times, tgen = 1/k, for the exponential phases. Differences in growth curves of cyanobacteria on agar and in liquid media were statistically analysed by comparing regression slopes from log phases with a one-way analysis of variance (ANOVA) and a Tukey post-hoc comparison, all assumptions met (Supplemental Information S4 and S5).
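The growth-rate and doubling-time formulas above are simple enough to verify numerically. A minimal sketch with hypothetical biomass values (the names X0, Xi and t follow the text):

```python
import math

def growth_rate(x0, xi, t):
    """Growth rate k (doublings per unit time) between biomass readings
    x0 and xi taken t time units apart:
    k = (log10(xi) - log10(x0)) / (log10(2) * t)."""
    return (math.log10(xi) - math.log10(x0)) / (math.log10(2.0) * t)

def doubling_time(k):
    """Generation (doubling) time tgen = 1/k."""
    return 1.0 / k

# Hypothetical example: dry weight (converted from 440/685 nm
# fluorescence) rising from 0.5 to 4.0 mg over 6 days.
k = growth_rate(0.5, 4.0, 6.0)
print(k, doubling_time(k))  # 0.5 doublings per day, tgen = 2.0 days
```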
We add a BBD-associated Roseofilum cyanobacterium, isolated via phototaxis on agar, to the publicly available culture resources and provide a cultivation method which results in healthy, fast growing and viable filaments. Although P. coralii and R. reptotaenium share >97% of their 16S rRNA gene sequence, they were not considered the same species; both taxa were maintained in Roseofilum because of differences in trichome dimensions and associated pigments. The newly isolated cyanobacterium of the present study showed characteristics of both species. Our results show that the characteristics previously used to distinguish the BBD-associated Roseofilum species do not reliably separate P. coralii, R. reptotaenium and the cyanobacterium of the present study; the phenotype of strain BgP10_4S is clearly an exception compared to the other reported Roseofilum strains.

Given this overlap between P. coralii, R. reptotaenium and the cyanobacterium of the present study, the most parsimonious solution is the integration of the species into a single taxon, as already practised for other cyanobacterial taxa. We therefore provide an emended description of R. reptotaenium below and unite the two taxa, according to the principle of priority, into Roseofilum reptotaenium (Rasoulouniriana) ex Casamatta, while Pseudoscillatoria coralii nom. inval. Rasoulouniriana becomes its synonym. As a consequence, the newly isolated cyanobacterium of the present study has been classified as Roseofilum reptotaenium strain AO1.

Emended description: Gram-negative, motile cyanobacterium growing epizoic on corals in black microbial mats that move over the coral surface and kill the underlying tissue. In culture, filaments can appear dark-green to blackish-brown and reach up to 1 mm in length. Unbranched trichomes with thin sheath, no heterocysts, tapered cell tips, cells of 3.0–4.5 µm length. High levels of phenotypic plasticity, with variants in terms of cell width and pigmentation. Isolation is achieved via phototaxis on an agar surface (0.6% w/v) towards a unidirectional light source. For further details, access the full formal description of the genus Roseofilum and the species R. reptotaenium in the supplemental information.

Cyanobacteria on 0.6% agar spread within the entire agar plate, with three times as many filaments after 7 days compared to the next successful treatment of 1%. Two additional cyanobacteria were isolated with the low percentage agar of 0.6% and deposited in GenBank, the sequence reference database of the National Center for Biotechnology Information. Based on the top 5 BLAST hits (97–99% identity), cyanobacterium species 1 (KU720412) was closely related to Leptolyngbya sp. (KJ206339.1), Oscillatoria limnetica (AF410934.1) and Phormidium sp. (JF837333.1), while the second cyanobacterium species (KU720413) was closest to Limnothrix sp. (DQ889938.1). These two species were only able to move through a 0.6% agar and could potentially be missed during the isolation process if a higher percentage agar were used.

To date, growth of BBD associated cyanobacteria has only been qualitatively assessed. The combination of fluorescence (440/685 nm), OD 750 and well coverage measurements (%) allowed the conversion of measured fluorescence values into biomass for the calculation of growth rates (k) and doubling times (tgen) for subsequent experiments and comparison with other strains. Clumped filaments were capable of homogenising overnight once returned to the constant incubator environment; ring formations and continuous clumping behaviour have previously been reported for members of the Roseofilum clade. Filaments in L1 medium reached twice the amount of biomass after 20 days compared to the ASNIII cultures if L1 nutrients were re-supplied by inoculation of N, P, trace metals and vitamins at a final ×1 concentration every 3 days. Interestingly, the only difference between the chemical components of L1-medium (best growth) and F/2-medium was the presence of selenous acid (H2SeO3), nickel(II) sulfate hexahydrate (NiSO4·6H2O), sodium orthovanadate (Na3VO4) and potassium chromate (K2CrO4) in the former.
Due to the differences in growth of cyanobacteria in L1 and F/2 media, it is likely that the presence of one or more of these trace metals is essential for maximising the growth potential of R. reptotaenium AO1. These trace metals are likewise absent from ASNIII medium, which may contribute to its comparatively poor performance.

We present an optimised cultivation protocol for the main BBD pathogen, Roseofilum reptotaenium (Rasoulouniriana) Casamatta emend. Healthy, fast growing and viable R. reptotaenium AO1 cultures were established on a low percentage 0.6% L1 agar (by transferring a dense cyanobacteria agar pellet onto a new plate every 7–10 days) and in L1 liquid medium. The species isolation with a low percentage agar (0.6%) resulted in faster and easier gliding of cyanobacteria filaments and enabled us to recover two additional cyanobacteria species from BBD samples. The homogeneous growth of Roseofilum filaments in smaller volumes of <5 mL, if undisturbed, allowed the generation of growth curves for the first time for black band disease associated cyanobacteria. Our media comparison showed that the commonly used growth medium ASNIII did not result in optimal growth conditions, while L1 maximised biomass for the tested Roseofilum species. Maximising biomass of the cultured cyanobacteria is essential for any downstream genomics, infection experiments, and other culture-based experiments that require replication and a large amount of biomass. Therefore, a standardised culturing method, such as the one provided here, can be critical for ensuring reliable comparisons of morphological, genomic and physiological differences among the isolated black band disease Roseofilum cyanobacterial strains.

Supplemental Information 1 (10.7717/peerj.2110/supp-1)."} +{"text": "A framework was developed to estimate MBNL concentration using splicing responses alone, validated in the cell-based model, and applied to myotonic dystrophy patient muscle. Using this framework, we evaluated the ability of individual and combinations of splicing events to predict functional MBNL concentration in human biopsies, as well as their performance as biomarkers to assay mild, moderate, and severe cases of DM.

Alternative splicing is a regulated process that results in expression of specific mRNA and protein isoforms. Alternative splicing factors determine the relative abundance of each isoform. Here we focus on MBNL1, a splicing factor misregulated in the disease myotonic dystrophy. By altering the concentration of MBNL1 in cells across a broad dynamic range, we show that different splicing events require different amounts of MBNL1 for half-maximal response, and respond more or less steeply to MBNL1. Motifs around MBNL1 exon 5 were studied to assess how the organisation of cis-elements shapes these dose-response curves.

Our studies provide insight into the mechanisms of myotonic dystrophy, the most common adult form of muscular dystrophy. In this disease, a family of RNA binding proteins is sequestered by toxic RNA, which leads to mis-regulation and disease symptoms. We have created a cellular model with one of these family members to study how these RNA binding proteins function in the absence of the toxic RNA. In parallel, we analyzed transcriptomic data from over 50 individuals (44 affected by myotonic dystrophy) with a range of disease severity. The results from the transcriptomic data provide a rational approach to select biomarkers for clinical research and therapeutic trials.

Alternative splicing increases the coding potential of a gene and, importantly, allows for regulation of expression of specific isoforms in a developmental and tissue-specific manner.
Regulation of alternative splicing is integral to a variety of biological processes including erythropoiesis, neuronal differentiation, and embryonic stem cell programming, and its misregulation contributes to human disease. To address these questions, we focused on alternative splicing regulation by MBNL1, an RNA binding protein involved in muscle, heart, and CNS development.

Tunable systems can be used to control expression of specific genes; they can be used to produce a range of mRNA and protein isoforms, and phenotypes that change gradually or sharply, in response to stimuli. Here, we used such a system to titrate MBNL1. A tetracycline-inducible Flp-In T-REx system (Invitrogen) was utilized to express HA-tagged MBNL1 in HEK293 cells, allowing MBNL1 levels to be varied across a broad dynamic range. Expression of MBNL1 was achieved at lower concentrations of doxycycline (dox) than typically used for tet-on experiments (>5 ng/ml). A sigmoidal-shaped MBNL1 concentration curve was observed when steady-state MBNL1 levels were plotted against the log of the dox concentration, with expression plateauing at the highest dox concentrations.

A schematic representation of a typical dose-response curve used to study a hypothetical MBNL1-regulated cassette exon, where MBNL1 promotes skipping, is shown in the corresponding figure. Dose-response curves were measured for several MBNL1-regulated events, including MBNL1 exon 5, MBNL2 exon 5, ATP2A1 exon 22, FN1 exon 25, INSR exon 10, and NFIX exon 7. The shape of each response plausibly depends on the number and placement of YGCYs relative to the regulated exon, the trans-factor environment and the organization of other cis-elements. However, by studying the splicing behavior of sequence variants of a single event, we could limit the impact of these variables. We mutated cis-elements in the intron upstream of MBNL1 exon 5 to evaluate how putative MBNL binding sites affect dose-response behavior; the mutations altered YGCY organization and the spacing of splicing signals, including a distant branch-site and the 3' splice site. Dox was titrated as before to generate dose-response curves for each variant. The cis-element organization of this region, in particular the sequence around the central YGCY, may mediate specific dose-response characteristics. Most alterations to binding motifs in this sequence space led to changes in dose-dependent behavior, including reduced slope and increased EC50, potentially through reducing cooperativity or changing RNA structure. In contrast, del4, a deletion mutant lacking the YGCY 3' of del3, exhibited no significant changes in dose-dependency parameters. Another mutant, 4M, in which the del4 YGCY was mutated, also exhibited dose-dependent behavior similar to that of WT. These results indicate that not all YGCY motifs contribute equally to the response. While cis-element organization plays an important role in dictating the shape of each dose-response curve, the trans-factor environment also likely plays a role, and therefore dose curve parameters will vary across tissues.

In the HEK293 system, we observed that Ψ of each splicing event exhibits a characteristic sigmoid shape with respect to MBNL1 concentration. This well-controlled system allowed us to derive these relationships, and directly measure functional MBNL1 levels by Western blot. However, a major goal in the DM field is to estimate the functional, non-sequestered concentration of MBNL in tissue of DM patients. This metric is impossible to obtain from tissue using current technologies, as free versus sequestered pools of MBNL are dynamic. However, the dose-dependent curves we characterized suggested that Ψ could be used to infer the concentration of functional MBNL in cells. We therefore framed the estimation of [MBNL], together with Ψmin, Ψmax, EC50, and slope for each event, as a Bayesian estimation problem.
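The four parameters named here (Ψmin, Ψmax, EC50, slope) define a standard sigmoid. As a minimal illustration of fitting such a curve, the sketch below uses a common four-parameter logistic form (the paper does not print its exact equation) and entirely hypothetical (Ψ, [MBNL1]) data:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, psi0, psi_inf, ec50, slope):
    """Psi as a function of [MBNL]: psi0 at zero MBNL, psi_inf at
    saturation, half-maximal response at ec50, steepness = slope."""
    return psi0 + (psi_inf - psi0) * x**slope / (ec50**slope + x**slope)

# Hypothetical measurements for one exon whose inclusion falls as
# MBNL1 rises (MBNL1 in arbitrary Western-blot units).
mbnl = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])
psi  = np.array([0.82, 0.80, 0.71, 0.52, 0.33, 0.24, 0.21])

popt, _ = curve_fit(sigmoid, mbnl, psi, p0=[0.85, 0.20, 0.5, 2.0])
print(popt)  # fitted [psi0, psi_inf, EC50, slope]
```

For an exon promoted by MBNL1 the same form applies with psi_inf > psi0; the fitted asymptotes play the roles of Ψmax and Ψmin.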
Since we can compute the likelihood of observing Ψ for all seven splicing events from HEK293, given any set of values for [MBNL], Ψmin, Ψmax, EC50, and slope for each splicing event, Bayes' Rule allows us to invert the problem to obtain the posterior probability distribution of each of those parameters, including the underlying MBNL concentration. Indeed, when estimated using this approach, inferred [MBNL] correlated extremely well (R2 = 0.993) with measured MBNL levels relative to GAPDH, as assessed by Western blot.

We next applied the framework to RNA-seq data from tibialis biopsies of DM1 patients and 11 healthy controls (sample cohort described in the methods). MISO was used to estimate Ψ, and events with substantial splicing changes (|ΔΨ| > 0.2) were catalogued (S1 and S2 Tables); many of these events have also been studied in the HSA-LR mouse model of DM1.

We sought to assess the suitability of each splicing event as a potential biomarker for levels of functional, non-sequestered MBNL, a key metric likely correlated to clinical outcomes in DM1. To simulate a hypothetical future clinical trial scenario in which we have estimated Ψmin, Ψmax, EC50, and slope for a new cohort of patients, we divided our samples into two groups to perform traditional cross-validation. We used 70% of the individuals to estimate Ψmin, Ψmax, EC50, and slope for every splicing event (training); these trained parameters could be used to plot sigmoid curves for each event (NFIX and CLASP1 shown in the corresponding figure). For the held-out samples we computed p([MBNL] | Ψ) ∝ p(Ψ | [MBNL]) p([MBNL]). Posterior distributions for [MBNL] inferred from NFIX or CLASP1 are displayed in blue, green, and orange shading for Ψ values observed in 3 distinct biopsy samples; the posterior probability peaks near the "true" [MBNL] value. Analyses of sigmoid curves for the best performers were carried out for severe ([MBNL] < 0.33), moderate (0.33 < [MBNL] < 0.66), and mild ([MBNL] > 0.66) DM1, as well as across the entire patient cohort. Splicing events best suited to predict [MBNL] in mild DM1 are distinct from those best suited to predict moderate or severe DM1. Interestingly, there is a range of [MBNL] for CLASP1, but not NFIX, for which that biomarker exhibits the greatest predictive power. Additional events, including CACNA1S, were also examined in human tibialis.

The relationship between MBNL1 levels and Ψ was previously investigated in myoblasts and mouse muscle. We estimated Ψmin, Ψmax, EC50, and slope values for each splicing event in both HEK293 cells and human tibialis, and also observed that these parameters differ between HEK293 cells and human tibialis. These observations suggest that proper selection of splicing biomarkers for a given cell type requires characterization of biomarkers in that tissue, or a basic understanding of how Ψ is modulated by the interaction of multiple trans-factors with pre-mRNA cis-elements.

The full length MBNL1 (isoform 41) with an N-terminal HA tag was cloned into the supplied vector (pcDNA5) and transfected into the HEK293 T-REx FLP cell line (Life Technologies) to create the inducible line following the manufacturer's protocol. HEK cell pellets were lysed in RIPA buffer supplemented with 1x protease inhibitor cocktail by light agitation for 20 min via vortex, and the concentration of protein was normalized using bicinchoninic acid (BCA) reagent (Pierce) prior to resolution on 10% SDS-PAGE gels. MBNL1 proteins were probed with the MB1a (4A8) antibody. Mbnl1 deletion constructs are described previously. Reads were aligned to the hg19 human reference genome with GSNAP, allowing for novel splicing (http://research-pub.gene.com/gmap/). GSNAP was run with the following options set: –s [splice sites map file] –N 1 –A sam –o FR —pairexpect 300 —pairdev 100. A splice sites map file was generated from the hg19 gene models (GRCh37 release 75).
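Because the held-out-sample computation is a one-dimensional posterior over [MBNL], it can be approximated on a grid. Below is a minimal sketch under the normal-likelihood assumption described in the methods that follow; the event names, curve parameters and σ values are hypothetical, and the sigmoid parameterisation is the same assumed form as in the fitting sketch above:

```python
import numpy as np

def sigmoid(x, psi0, psi_inf, ec50, slope):
    return psi0 + (psi_inf - psi0) * x**slope / (ec50**slope + x**slope)

def mbnl_posterior(psi_obs, params, sigma, grid=None):
    """Grid approximation of p([MBNL] | Psi) ∝ p(Psi | [MBNL]) p([MBNL])
    with a flat prior on [MBNL].

    psi_obs -- {event: observed Psi in one biopsy}
    params  -- {event: (psi0, psi_inf, ec50, slope)} trained per event
    sigma   -- {event: residual s.d. of Psi around the fitted curve}
    """
    if grid is None:
        grid = np.linspace(0.01, 1.0, 200)
    log_post = np.zeros_like(grid)
    for event, psi in psi_obs.items():
        mu = sigmoid(grid, *params[event])                  # modeled Psi
        log_post += -0.5 * ((psi - mu) / sigma[event])**2   # Gaussian term
    post = np.exp(log_post - log_post.max())
    return grid, post / post.sum()

# Hypothetical two-event example:
grid, post = mbnl_posterior(
    psi_obs={"NFIX": 0.45, "CLASP1": 0.60},
    params={"NFIX": (0.8, 0.2, 0.4, 2.0), "CLASP1": (0.9, 0.3, 0.5, 1.5)},
    sigma={"NFIX": 0.05, "CLASP1": 0.07})
print(grid[np.argmax(post)])  # posterior mode for [MBNL]
```

The paper's own implementation uses PyMC3 to sample all parameters jointly; this grid version fixes the trained curve parameters, mirroring the held-out evaluation step.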
Isoform abundances were estimated and each of the 44 DM1 samples was compared to each of the 11 control samples using MISO. The model parameters comprise Ψmin, Ψmax, EC50 and slope for each event, the underlying [MBNL], and a parameter σ. We assume that observed Ψ values are drawn from a normal distribution centered around the modeled Ψ value, with standard deviation σ. Priors for each estimated parameter were as follows: Ψmin ~ Uniform, Ψmax ~ Uniform, log(EC50) ~ Normal, slope ~ Normal, [MBNL] ~ Uniform, σ ~ Uniform. The python package PyMC3 was used to implement Bayesian inference. Ψmin, Ψmax, EC50, slope and σ were estimated using 70% of the samples, as described above. Then, for each splicing event and each sample in the remaining 30% of samples, we calculated a posterior distribution for [MBNL]. We essentially performed Bayesian inference again, framing the problem as computing the probability of [MBNL] across all possible values of [MBNL]. That is, we computed p([MBNL] | Ψ, ω) ∝ p(Ψ | [MBNL], ω) p([MBNL]). Here, ω describes Ψmin, Ψmax, log(EC50), slope and σ, the parameter describing the standard deviation of the normal distribution from which observed Ψ values are drawn around the modeled Ψ value (similar to above). In this case, however, σ is directly computed from the training data, so that events with observed Ψ values that closely match predicted Ψ are more highly favored as biomarkers.

S1 Fig. Severity ranges used in the analyses: [MBNL] > 0.66, 0.33 < [MBNL] < 0.66, and [MBNL] < 0.33.
S2 Fig. 200 nucleotides upstream and downstream of the regulated exon are depicted with YGCY motifs marked. Schematic element spacing is drawn to scale. (TIF)
S3 Fig. MBNL1 mini-gene reporter assayed in triplicate; experimental details as in the main experiments. MBNL1 intron 4 is generally unstructured: intron 4 mapped with previously determined structural data (right panel). (TIF)
S5 Fig. (A) Ψ plotted against [MBNL1] as determined by Western blot in HEK293 for each event; the x-axes are log10 scale. Individual points and error for each event are shown. (TIF)
S6 Fig. Ψ estimates are plotted against the inferred MBNL1 for each splicing event. Inferred dose-response curves are shown. (PDF)
S7 Fig. (A) CTG repeat length was not correlated with inferred [MBNL] in the tibialis muscle samples (R2 = 0.0841). (B) The maximal isometric force of ankle dorsiflexion (ADF) moderately correlates with inferred [MBNL] (R2 = 0.358). (TIF)
S8 Fig. 200 nucleotides upstream and downstream of the regulated exon are depicted with YGCY motifs marked. Schematic element spacing is drawn to scale. (TIF)
S1 Table. Splicing event coordinates from MISO from hg19, gene symbol, mean Ψ for control and DM1 samples, standard deviation, mean ΔΨ, number of samples used to calculate the mean for control and DM1, number of samples for each event with a Bayes Factor greater than 5, and % patients with dysregulated splicing for each event. (XLSX)
S2 Table. Ψ estimates from MISO for genes (indicated by gene symbol and hg19 coordinates) for all samples in this study. NA is used for samples that did not have sufficient coverage to obtain an estimate for that splicing event. (XLSX)
S3 Table. Bayesian posterior mean estimates for parameters Ψmin, Ψmax, EC50, slope, respective 5% and 95% confidence intervals, and σ for each splicing event (event_parameters tab). Inferred [MBNL], 5% and 95% confidence intervals, and mean ΔΨ for each sample (MBNL_inferred tab). Biomarker predictive power for all events is also provided. (XLSX)"} +{"text": "Development of the meniscus during the foetal period has been reported in different studies. Aims: evaluation of lateral and medial meniscus development, typing and the relationship with the tibia during the foetal period. Study design: anatomical dissection. We evaluated 210 knee menisci obtained from 105 human foetuses ranging in age from 9 to 40 weeks' gestation. Foetuses were divided into four groups, and the intra-articular structure was exposed. We subsequently acquired images of the intra-articular structures with the aid of a millimetric ruler. The images were digitized for morphometric analyses and analysed by using Netcad 5.1 Software. The lateral and medial meniscal areas as well as the lateral and the medial articular surface areas of the tibia increased throughout gestation. We found that the medial articular surface areas were larger than the lateral articular surface areas, and the difference was statistically significant. The ratios of the mean lateral and medial meniscal areas to the lateral and medial articular surface areas, respectively, of the tibia decreased gradually from the first trimester to full term. The most common shape of the medial meniscus was crescentic (50%), and that of the lateral meniscus was C-shaped (61%). This study reveals the development of morphological changes and morphometric measurements of the menisci.

The menisci are two cartilaginous structures that deepen the proximal articular surface of the tibia. In adults, the morphology of the tibia differs from that of the femur; therefore, the morphologies of the lateral meniscus (LM) and medial meniscus (MM) are also expected to differ. Secondary to the increase in the use of CT and MRI in research, there has been a surge in studies on the anatomy of and variations in the menisci in adults. However, to clarify the probable developmental origin of the aetiology, investigation and quantitative examination of the normal development of knee joint morphology in the prenatal period are essential, and such studies related to human foetuses are limited. In the current study, we examined the morphological changes in the meniscus and the tibial plateau quantitatively in human foetuses. The aim of the study was to examine the areas of the menisci, the superior lateral and medial articular surface areas of the tibia and the ratios of the menisci to the corresponding areas of the articular surfaces of the tibia. We analysed the distances between the anterior and posterior horns of the lateral and MM, the distance between the anterior and posterior horns of each meniscus and the shapes of the menisci during the foetal period.

This study was carried out on 210 knees of 105 human foetuses, aged between 9 and 40 weeks of gestation, obtained with consent from the families from a Maternity and Children's Hospital between 1996 and 2011. Only foetuses without any external pathology or anomaly were used. The data collection procedure was approved by the Ethics Board of the Faculty of Medicine of the University. The post-mortem dissection procedures were ethically approved by the Turkish Ministry of Health and thus were in accordance with statutory regulations. Gestational ages of the foetuses were determined by using crown-rump length, bi-parietal width, head circumference, femur length and foot length.
Foetuses were divided into four groups by trimester and full term. Initially, the knee region was anatomically dissected to expose the intra-articular contents. This was followed by image acquisition of the intra-articular structures with the aid of a millimetric ruler. The images were digitized for morphometric analyses and analysed by Netcad 5.1 Software. Areas of the lateral and MM and the superior lateral and medial articular surface areas of the tibia were measured, and the ratios of the meniscal areas to the corresponding tibial articular surface areas were calculated.

We morphologically classified the lateral and MM based on previous classifications in the literature. In the C-shaped type, the widths of the horns and body are similar, and the tips of the horns are rounded and close to each other. In the U-shaped type, the widths of the horns and body are similar, the tips of the horns are rounded and the gap between the horns is wide. The V-shaped type has a shape resembling the letter V. The incomplete discoid type has a deficiency at the centre and between the horns of the menisci. The complete discoid type has a defect at the centre of the meniscus and no gap between the horns.

The SPSS (17.0) statistical package was used to compute the arithmetic means and standard deviations of the parameters. The level of significance was set at α = 0.05. Parametric variables were expressed as mean ± standard deviation. A Student's t-test was used to compare the parametric variables between sexes and sides. One-way ANOVA and Bonferroni's post-test were used for comparisons between groups. For nonparametric data, a chi-square (χ2) test was used to compare percent distributions among groups, and P and χ2 values are presented in the relevant tables of the Results section.

Mean LM areas were 6.44 mm², 17.52 mm², 42.71 mm² and 59.36 mm² in the first, second and third trimester and full-term groups, respectively. Mean MM areas were 5.79 mm², 17.04 mm², 40.44 mm² and 60.38 mm² in the first, second and third trimester and full-term groups, respectively. Mean lateral and medial meniscal areas increased throughout gestation, and there were significant differences between groups (p<0.05).

Mean lateral articular surface areas of the tibia were 7.90 mm², 22.41 mm², 54.07 mm² and 78.76 mm² in the first, second and third trimester and full-term groups, respectively. Mean medial articular surface areas of the tibia were 9.19 mm², 27.32 mm², 65.43 mm² and 101.76 mm² in the first, second and third trimester and full-term groups, respectively. There was a significant increase in the areas of the lateral and medial articular surfaces of the tibia throughout gestation (p<0.05). When the two sides were compared, the medial articular surface areas were larger than the lateral ones, and the difference was statistically significant.

The ratio of the mean lateral meniscal area to the lateral articular surface area of the tibia was computed: 0.81 in the first trimester, 0.78 in the second trimester, 0.78 in the third trimester and 0.75 at full term. The ratio of the mean medial meniscal area to the medial articular surface area of the tibia was 0.63 in the first trimester, 0.62 in the second trimester, 0.61 in the third trimester and 0.59 at full term (these ratios are verified in the sketch below).

The distance between the anterior and posterior horns of the menisci was measured. The distance between the anterior and the posterior horns of the LM increased from 0.81 mm in the first trimester to 1.84 mm in the second trimester, 2.60 mm in the third trimester and 3.60 mm at full term.
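The reported ratios follow directly from the mean areas; a short arithmetic check (values transcribed from the results above):

```python
# Mean areas (mm^2) for first, second, third trimester and full term.
lm_area   = [6.44, 17.52, 42.71, 59.36]    # lateral meniscus
lat_tibia = [7.90, 22.41, 54.07, 78.76]    # lateral articular surface
mm_area   = [5.79, 17.04, 40.44, 60.38]    # medial meniscus
med_tibia = [9.19, 27.32, 65.43, 101.76]   # medial articular surface

for label, num, den in (("lateral", lm_area, lat_tibia),
                        ("medial", mm_area, med_tibia)):
    print(label, [round(a / b, 2) for a, b in zip(num, den)])
# lateral [0.82, 0.78, 0.79, 0.75]  (text reports 0.81/0.78/0.78/0.75)
# medial  [0.63, 0.62, 0.62, 0.59]  (text reports 0.63/0.62/0.61/0.59)
# Small differences reflect rounding of the published means; both series
# reproduce the gradual decline described in the text.
```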
The distance between the anterior and posterior horns of the MM increased from 1.92 mm in the first trimester to 3.55 mm in the second trimester, 5.57 mm in the third trimester and 7.23 mm at full term. We also measured the distance between the anterior horns of the two menisci as well as the distance between the posterior horns. The mean distance between the anterior horns was 1.97 mm in the first trimester, 3.71 mm in the second trimester, 5.97 mm in the third trimester and 8.02 mm at full term; the mean distance between the posterior horns was consistently smaller. Lateral and MM were classified morphologically. Male and female comparisons were carried out for all parameters, and there were no significant differences between the right- and the left-sided parameters.

Menisci develop by differentiation of the mesenchymal tissue at the lower limb bud. They appear during the fourth week of human development, become evident at 9 weeks and assume the adult form at 14 weeks of gestation. Studies employing different methods have been carried out on foetuses or adults to establish the type, area and location of the menisci; for example, Fukazawa et al. measured the menisci in a study carried out on 41 foetuses.

We measured the distances between the anterior and posterior horns of the lateral and MM separately. The distance between the anterior and posterior horns of the MM was larger than that of the LM; in other words, the tips of the horns of the MM were farther apart. Therefore, we did not observe any V-shaped or U-shaped LM, nor did we observe complete or incomplete discoid type MM. We also measured the respective distances between the anterior and posterior horns of the two menisci. The mean distance between the posterior horns was 2.66 mm, whereas the mean distance between the anterior horns was 4.71 mm, showing that the posterior horns are closer to each other than the anterior horns. A literature review did not reveal any studies in which the distance between the horns on both sides was measured; as such, this is the first study of its kind in the literature and may serve as a database for future studies.

The variations observed in menisci can be explained by their patterns of development in the embryologic and foetal periods. When all these studies are reviewed together, the MM and LM were found to have five and four different shapes, respectively. Discoid meniscus was first defined in 1889, and it has been argued that it is observed only in the LM. Complete and incomplete discoid types have been reported only in the LM in previous studies, and our observations are consistent with this. In conclusion, we believe that the results of the present study enable us to better understand the pathologies and anomalies of the menisci and contribute to the diagnosis and treatment of these conditions as well as to future scientific studies."} +{"text": "Elucidating arbuscular mycorrhizal (AM) fungal responses to elevation changes is critical to improve understanding of microbial function in ecosystems under global asymmetrical climate change scenarios. Here we examined the AM fungal community in a two-year reciprocal translocation of vegetation-intact soil blocks along an altitudinal gradient in an alpine meadow on the Qinghai-Tibet Plateau. AM fungal spore density was significantly higher at lower elevation than at higher elevation regardless of translocation, except that this parameter was significantly increased by upward translocation from original 3,200 m to 3,400 m and 3,600 m.
Seventy-three operational taxonomic units (OTUs) of AM fungi were recovered using 454-pyrosequencing of 18S rDNA sequences at a 97% sequence similarity. Original elevation, downward translocation and upward translocation did not significantly affect AM fungal OTU richness. However, with increasing altitude the OTU richness of Acaulosporaceae and Ambisporaceae increased, but the OTU richness of Gigasporaceae and Glomeraceae decreased generally. The AM fungal community composition was significantly structured by original elevation but not by downward translocation and upward translocation. Our findings highlight that, compared with the short-term reciprocal translocation, original elevation is a stronger determinant in shaping the AM fungal community in the Qinghai-Tibet alpine meadow.

Elucidating the biodiversity patterns along altitudinal gradients is fundamental to understanding community assembly and ecosystem functioning. Global climate change is one of the greatest challenges facing our society, and global surface temperature is predicted to increase by 1.8–3.6 °C over the next century. Although many environmental factors covary with elevation (e.g. cloudiness, atmospheric density, absolute O2 or CO2 concentration, ultraviolet radiation, and soil moisture), temperature is considered to be the key driver of variation in ecological processes. Reciprocal translocation experiments that exchange the whole soil-plant system along altitudinal gradients provide relatively natural gradient warming and cooling processes, without need for a high energy supply.

To better understand the effects of original elevation and of the warming and cooling resulting from elevation changes on the AM fungal community, we measured AM fungal spore density in a two-year reciprocal translocation of vegetation-intact soil blocks along an altitudinal gradient in an alpine meadow ecosystem on the Qinghai-Tibet Plateau. The AM fungal community composition in soil was examined using 454 pyrosequencing of 18S rDNA sequences. The aim of this study was to investigate how the spore density, diversity and community composition of AM fungi change under reciprocal translocation along an altitudinal gradient in this ecosystem. The expected results may provide new insights into how the AM fungal community would respond to global climate change scenarios in the alpine meadow ecosystem.

AM fungal spore density was not significantly changed by translocation to the same elevation (the procedural control), and was only marginally affected by translocation between elevations. For example, the AM fungal spore density was significantly higher at original 3,200 m and 3,400 m than at original 3,600 m and 3,800 m. A total of 305 OTUs were recovered based on a 97% sequence similarity; of these 305 OTUs, 73 belonged to AM fungi. As the number of AM fungal reads ranged from 144 to 3,249 among the samples, the read numbers were normalized to 144, resulting in a normalized dataset containing 73 AM fungal OTUs. Of the AM fungal OTUs, 33 belonged to Glomeraceae (14 Glomus, four Funneliformis, two Sclerocystis, two Rhizophagus, and 11 of unidentified genus), nine to Gigasporaceae (nine Scutellospora), seven to Acaulosporaceae (seven Acaulospora), six to Ambisporaceae (six Ambispora), five to Claroideoglomeraceae (five Claroideoglomus), five to Diversisporaceae, four to Archaeosporaceae (four Archaeospora), two to Paraglomeraceae (two Paraglomus), one to Entrophosporaceae (one Entrophospora), and one to Pacisporaceae (one Pacispora).
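Normalising every sample to the smallest library (144 reads) is a random-subsampling (rarefying) step. A minimal sketch of such a step, with hypothetical counts:

```python
import numpy as np

rng = np.random.default_rng(0)

def rarefy(counts, depth=144):
    """Randomly subsample a per-sample OTU count vector to a fixed
    read depth without replacement."""
    counts = np.asarray(counts)
    reads = np.repeat(np.arange(counts.size), counts)    # one entry per read
    keep = rng.choice(reads, size=depth, replace=False)  # draw 'depth' reads
    return np.bincount(keep, minlength=counts.size)

# Hypothetical sample with 3,249 reads spread over five OTUs:
sample = np.array([1500, 900, 600, 200, 49])
print(rarefy(sample).sum())  # 144
```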
A rarefaction analysis showed that almost all 16 rarefaction curves for observed AM fungal OTUs, representing the 16 treatments within four original elevations, tended to reach the saturation plateau, indicating that the sequencing effort was sufficient to identify most AM fungi in this study. The most frequent OTUs occurred in the majority of soil samples, whereas the 25 least frequent OTUs occurred in ≤5 (10.4%) soil samples.

Although total AM fungal OTU richness did not respond significantly to elevation (P > 0.05), the OTU richness of several families did, including Acaulosporaceae (F = 24.980, P < 0.001), Ambisporaceae, Gigasporaceae, and Glomeraceae. For example, the OTU richness of Acaulosporaceae and Ambisporaceae was significantly lower at original 3,200 m than at original 3,400 m, 3,600 m and 3,800 m, but no significant difference between original 3,400 m and 3,800 m was observed. Indicator species analysis identified twelve OTUs as indicators at 3,200 m, two (Glomus and Scutellospora) at 3,400 m, six (four Ambispora and two Acaulospora) at 3,600 m, and four at 3,800 m. Two-way ANOVA revealed that original elevation, translocation and their interaction did not significantly affect AM fungal OTU richness. The AM fungal community composition, in contrast, was significantly structured by original elevation, but not by translocation or by the interaction between original elevation and translocation. Nonmetric multidimensional scaling (NMDS) analysis also showed that the AM fungal community composition was significantly affected by original elevation, but not by translocation. The community composition was significantly related to soil moisture after translocation, and marginally related to soil TOC, NH4+–N and NO3−–N (P = 0.083). Variation partitioning further apportioned the community dissimilarity among original elevation, translocation, and soil and plant variables.

The AM fungal spore density was significantly higher at lower elevation than at higher elevation in this study. Similarly, a decreasing trend in AM fungal spore density along an altitudinal gradient was found in another alpine meadow on the Qinghai-Tibet Plateau. Acaulospora and Ambispora, which had high abundance at the 3,400 m and 3,600 m sites in this study, are known to be active under cooler but dormant during warmer conditions. In earlier warming studies, some genera (e.g. Ambispora, Glomus, Redeckera, and Scutellospora) decreased in abundance, while others increased.

The AM fungal OTU richness was not significantly affected by original elevation, downward translocation or upward translocation in the current study. Consistently, a previous study found no significant difference in AM fungal OTU richness within roots of host plants along a comparable gradient. By contrast, we observed that the OTU richness of the families Gigasporaceae and Glomeraceae was significantly higher at lower elevation than at higher elevation sites. Meanwhile, indicator species analyses showed that the abundance of 12 OTUs of these two families was significantly higher at 3,200 m than at >3,400 m in this study. Similarly, Gigasporaceae fungi were mainly found at 3,300 m and 3,500 m, but could not be detected above 3,700 m in the Puna mountain grassland in Argentina.

The AM fungal community composition was significantly affected by original elevation in the present study. Distinct AM fungal community compositions have also been found along altitudinal gradients on the basis of analyses of spore morphology.
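The composition analyses described here (Bray–Curtis dissimilarities ordinated by NMDS) were run in R with vegan's metaMDS; a rough Python stand-in, with a hypothetical OTU table, looks like this:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical OTU relative-abundance table: 48 samples x 73 OTUs.
rng = np.random.default_rng(1)
otu = rng.random((48, 73))
otu /= otu.sum(axis=1, keepdims=True)

# Bray-Curtis dissimilarity matrix between samples.
bc = squareform(pdist(otu, metric="braycurtis"))

# Nonmetric MDS on the precomputed dissimilarities (2-D ordination).
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           random_state=0)
coords = nmds.fit_transform(bc)
print(coords.shape)  # (48, 2) ordination coordinates
```

Group separation by original elevation versus translocation can then be tested on the dissimilarity matrix with a permutation test, analogous to the analyses reported above.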
In conclusion, the AM fungal spore density was significantly affected by original altitude but not by the two-year reciprocal translocation, except for a significant increase after translocation from the original 3,200 m to the 3,400 m and 3,600 m sites. The OTU richness of AM fungi was not significantly affected by original elevation or reciprocal translocation. However, with increasing altitude the OTU richness of Acaulosporaceae and Ambisporaceae increased, whereas that of Gigasporaceae and Glomeraceae generally declined. The AM fungal community composition was significantly shaped by original elevation, but not by reciprocal translocation. Our results suggest that original elevation, rather than short-term reciprocal translocation, is a strong determinant in structuring the AM fungal community in the alpine meadow on the Qinghai-Tibet Plateau. We recognise, however, that only a two-year experiment was carried out in this study, which may not completely reflect the effect of long-term global climate change on the mycorrhizal community; a longer experimental period is needed to better resolve this issue in further studies.

The study was conducted at the Haibei Alpine Meadow Ecosystem Research Station (HAMERS) of the Chinese Academy of Sciences, along an altitudinal gradient on the south slope of the Qilian Mountains on the Qinghai-Tibet Plateau, China. The climate at HAMERS is highland continental, characterized by cold, long winters and warm, short summers. The experiment was established in 2007 by Wang et al. The plant community at 3,200 m is dominated by Kobresia humilis, Elymus nutans, Poa spp., Carex spp., Scripus distigmaticus, Gentiana straminea, G. farreri, Leontopodium nanum, and Potentilla nivea. The plant community at 3,400 m is dominated by the alpine shrub Potentilla fruticosa and jointly by Kobresia capillifolia, K. humilis, and Saussurea superba. The plant community at the 3,600 m site is dominated by K. humilis, Saussurea katochaete, P. nivea, Thalictrum alpinum, Carex spp., Poa spp., and P. fruticosa. The plant community at 3,800 m is dominated by K. humilis, L. nanum, and Poa spp.

Twelve intact soil blocks with attached vegetation (excavated to only 30 cm depth at 3,800 m due to the shallower soil layer) were dug out from each of the four altitudes and reciprocally transferred across the altitudinal gradient after the soil started to thaw in early May 2007. Among the transferred soil blocks, three blocks from each altitude were also removed and then reinstated at the same site as control blocks that had been handled as similarly as possible to those blocks moved to other elevations. As such, there were three replicate transfers from each altitude, and these intact soil blocks were fully randomized throughout the study site. The distance between adjacent blocks was ca. 0.6 m in each plot.

Twelve soil cores were randomly collected from each block and mixed as one composite sample; a total of 48 soil samples were used in this study. The soil samples were immediately packed in an ice box and transported to our laboratory. Fresh soil samples were sieved through a 1 mm sieve to remove roots and debris. Subsamples for DNA extraction were stored at −80 °C until analysis; subsamples for AM fungal spore density were air-dried and stored at 4 °C until analysis; and subsamples were retained for the analysis of soil variables including pH, moisture, temperature, TOC, total N, NO3−–N and NH4+–N.

AM fungal spores were extracted from 20.0 g air-dried soil of each sample with deionized water using the wet-sieving and decanting method. Genomic DNA was extracted from 0.5 g frozen soil using a direct bead-beating extraction method with a PowerSoil DNA isolation kit according to the manufacturer's instructions, and genomic DNA for 454 pyrosequencing was amplified using a two-step PCR procedure. The first amplification used the primer GeoA-2; each reaction contained 200 μM of each dNTP, 0.75 μM of each primer, 1.5 U Taq polymerase, and ca. 10 ng of template DNA combined with sterile deionized water.
The thermal cycling comprised an initial denaturation at 94 °C for 5 min; 35 cycles of denaturation at 94 °C for 45 s, annealing at 54 °C for 1 min, and extension at 72 °C for 1.5 min; and a final extension at 72 °C for 10 min. The product of the first amplification was diluted with sterilized deionized water by a factor of 20, and 1.0 μL of the diluted solution was used as the template for the nested PCR. Conditions for the nested PCR were similar to the first PCR, except for a 58 °C annealing temperature, 30 cycles, and the primer NS31. The pooled product was subjected to 454 pyrosequencing on a Roche Genome Sequencer FLX Titanium. The representative 18S rDNA sequences obtained in this study have been submitted to the European Molecular Biology Laboratory (EMBL) nucleotide sequence database with the accession numbers LT576043–LT576115.

The noise generated during the sequencing process was removed using the shhh.flows command in Mothur 1.31.2. After quality control, the remaining longer sequences were trimmed to 400 base pairs (bp) to assure read quality. Sequences were assigned against an AM fungal reference database, and a tree with the p-distance model and 1,000 replicates to produce bootstrap values was generated in MEGA 5; rarefaction analyses were performed in EstimateS v.9.

Statistical significance was evaluated at P < 0.05. For data that did not satisfy homogeneity of variance amongst treatments, the nonparametric Kruskal–Wallis test was applied to examine the effects of original elevation and translocation. Moreover, if AM fungal variables did not significantly differ among the translocation treatments within each original elevation, these data were pooled in accordance with original elevation and then subjected to Tukey's HSD tests (homogeneity of variance) or pairwise comparisons (heterogeneity of variance) at P < 0.05 to reveal the differences among original elevations. In order to determine AM fungal indicator species for each original elevation, we conducted indicator species analysis.

AM fungal spore density is defined as spore numbers per gram of air-dried soil in a sample. The frequency of a specified AM fungal OTU is defined as the percentage of the number of samples where this OTU was observed relative to the number of all samples. The abundance of a given AM fungal OTU is defined as the read numbers of that OTU in a sample. The relative abundance of a specific AM fungal OTU is defined as the percentage of the number of reads where the OTU was detected relative to the number of all reads in a sample. AM fungal OTU richness is defined as OTU numbers in a sample, and the richness of a given family is all OTU numbers of that family in a sample. A two-way ANOVA was used to examine the effects of original elevation, translocation and their interaction on AM fungal spore density and richness if the data satisfied the normality of distribution and homogeneity of variance amongst the 16 treatments, before and after sqrt and log transformations were carried out.
Significant differences between treatments were further compared using Tukey's honest significant difference (HSD) tests at P < 0.05. The distance matrices of AM fungal community composition (based on OTU relative abundance), soil variables and plant variables were constructed by calculating dissimilarities with the Bray–Curtis method and were used to evaluate the effects of original elevation, translocation and their interaction on AM fungal community composition. Subsequently, the AM fungal community composition was ordinated using NMDS on the dissimilarity matrices with the 'metaMDS' function in the vegan package. Mantel tests were applied to explore correlations between pairs of dissimilarity matrices, and partial Mantel tests to explore the independent effects of individual soil and plant variables on the AM fungal community composition, using the ecodist package. Variation partitioning was used to apportion the variation of AM fungal community dissimilarity among original elevation, translocation, soil and plant variables.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations."} +{"text": "Nardochinoid B (NAB) is a new compound isolated from Nardostachys chinensis. Although our previous study reported that NAB suppressed the production of nitric oxide (NO) in lipopolysaccharide (LPS)-activated RAW264.7 cells, the specific mechanisms of the anti-inflammatory action of NAB remain unknown. Thus, we examined the effects of NAB against LPS-induced inflammation. In this study, we found that NAB suppressed the LPS-induced inflammatory responses by restraining the expression of inducible nitric oxide synthase (iNOS) protein and mRNA, but not cyclooxygenase-2 (COX-2) protein and mRNA, in RAW264.7 cells, implying that NAB may have lower side effects compared with nonsteroidal anti-inflammatory drugs (NSAIDs). Besides, NAB upregulated the protein and mRNA expression of heme oxygenase (HO)-1 when it exerted its anti-inflammatory effects. Also, NAB restrained the production of NO by increasing HO-1 expression in LPS-stimulated RAW264.7 cells. Thus, it is considered that the anti-inflammatory effect of NAB is associated with the induction of the antioxidant protein HO-1, and NAB may be a potential HO-1 inducer for treating inflammatory diseases. Moreover, our study found that the inhibitory effect of NAB on NO is similar to that of the positive-control drug dexamethasone, suggesting that NAB has great potential for the development of new drugs to treat inflammatory diseases.

Inflammation is a kind of defensive reaction of living organisms with vascular systems to harmful factors such as pathogens, damaged cells, and irritants. Macrophages play important roles in the innate immune response. They protect cells from injury induced by exogenous factors such as bacteria and viruses and endogenous factors such as other damaged cells; macrophages also promote the repair processes of tissue injury. Proinflammatory mediators released by activated macrophages include cytokines, prostaglandin E2 (PGE2), and nitric oxide (NO). Thus, inhibiting the overproduction of these mediators is a rational strategy against inflammatory diseases, and the LPS-activated RAW264.7 macrophage was chosen as the cell model of this study.

Nardostachys chinensis is one of the traditional Chinese medicines reported to have an anti-inflammatory effect. Extracts of N. chinensis have been used for the treatment of blood disorders, disorders of the circulatory system, and herpes infection. Recently, compounds from N. chinensis were reported to inhibit the protein expression of inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2) in LPS-activated RAW264.7 macrophages. In recent years, there has been growing interest in the anti-inflammatory effects of natural components present in commonly used traditional herbal medicines. NAB is a new compound isolated from N. chinensis.
Our previous research has proved that NAB inhibits the production of NO in LPS-induced RAW264.7 macrophages. The progression of inflammation can be inhibited through activation of the nuclear factor erythroid 2-related factor 2 (Nrf2) pathway, meaning that activating the Nrf2 pathway could be a potential therapeutic strategy in inflammatory disorders; a major transcriptional target of Nrf2 is the antioxidant protein HO-1.

In the present study, we focused on two aspects of NAB: (1) whether NAB has the ability to suppress the LPS-induced inflammatory responses in RAW264.7 cells, and (2) whether NAB upregulates HO-1 to promote its anti-inflammatory effects by activating the Nrf2 signaling pathway. The results of this study revealed that NAB exerted its anti-inflammatory effects in LPS-induced RAW264.7 cells in a manner related to the activation of the Nrf2/HO-1 pathway, rather than the inhibition of the nuclear factor-κB (NF-κB) pathway and the mitogen-activated protein kinase (MAPK) pathway.

The levels of NO and PGE2 in the culture medium of the RAW264.7 cells were significantly increased (P < 0.01) after 18 h of LPS stimulation. Pretreatment with NAB markedly decreased the LPS-induced NO production in a concentration-dependent manner over 18 h. LPS stimulation of RAW264.7 cells also increased the expression levels of TNF-α, IL-1β and IL-6. As described before, macrophages play an important role in inflammation as they are able to release different kinds of cytokines to ignite inflammatory reactions.

In this study, we first evaluated the cytotoxicity of NAB and found that NAB had no significant cytotoxicity towards LPS-stimulated RAW264.7 cells at concentrations lower than 20 μM; noncytotoxic concentrations were therefore used in the subsequent experiments. To study the inhibition of NO and PGE2 production by the LPS-induced RAW264.7 cells, we examined the effect of NAB on the expression of iNOS and COX-2 by LPS-stimulated RAW264.7 macrophages, since iNOS and COX-2 are the enzymes responsible for the production of NO and PGE2, respectively. After that, the expression level of HO-1 in LPS-stimulated RAW264.7 cells was detected with the treatment of NAB, because HO-1 is one of the regulating factors of the expression of iNOS. As the translocation of Nrf2 protein into the cell nucleus mediates the expression of HO-1, the migration of Nrf2 protein in the RAW264.7 macrophages was evaluated. Moreover, as macrophages release cytokines (TNF-α, IL-1β and IL-6) to promote inflammation, the levels of these cytokines were also measured.

NO and PGE2 are two of the most important inflammatory mediators that participate in inflammatory processes. Inflammation and the exposure of tissue cells to bacterial products such as LPS, lipoteichoic acid (LTA), peptidoglycans, and bacterial DNA or whole bacteria induce the high expression of iNOS and thereby enhance the production of NO. In these situations, NO forms peroxynitrite, which acts as a cytotoxic molecule that resists invading microorganisms and acts as a killer. PGE2 mediates the increase of arterial dilation and microvascular permeability; this action causes blood to flow into the inflamed tissue and thus causes redness and edema. COX-2 catalyses the synthesis of PGE2, and it also regulates the synthesis of prostaglandin I2 (PGI2) and thromboxane A2 (TXA2). TXA2 is the major cyclooxygenase product in platelets. It is also a potent vasoconstrictor and can stimulate the aggregation of platelets in vitro.
PGI2 is produced and synthesized in vascular endothelial cells. It is a vasodilator and an inhibitor of platelet aggregation. Selective inhibition of COX-2 can disturb the balance between PGI2 and TXA2, leading to cardiovascular risks. Fortunately, NAB did not suppress COX-2 in our experiments. TNF-α, IL-1β, and IL-6 belong to the inflammatory cytokines and can also be involved in inflammatory processes. Usually, the inflammatory processes are accompanied by the activation of the NF-κB pathway, which also promotes the expression of inflammatory mediators in macrophages. Previous studies showed that inhibition of the NF-κB pathway restrains NO, PGE2, iNOS, COX-2, TNF-α, and IL-6 production. Other compounds from N. chinensis were reported to inhibit the p38 MAPK pathway and thereby the expression of inflammatory mediators. To study whether NAB acts in these ways, we therefore examined the NF-κB and MAPK pathways. The activation of the Nrf2 pathway is another possible way to prevent LPS-induced transcriptional upregulation of proinflammatory cytokines, including TNF-α, IL-1β, and IL-6. These effects may involve the anti-inflammatory cytokine IL-10. IL-10 induces the phosphorylation of Janus Kinase (Jak) 1 and the activation of signal transducer and activator of transcription (STAT)-1 and STAT-3. Taken together, the results suggest that the activation of the Nrf2/HO-1 pathway is the potential mechanism by which NAB exerts its anti-inflammatory effects against LPS-activated inflammation. Nardochinoid B (NAB), of HPLC-verified purity, was used in this study. The ELISA kit for PGE2 was from Cayman Chemical. The nitric oxide (NO) production level was measured by a Griess Reagent System kit, which was obtained from Promega Corporation. The immortalized mouse macrophage cell line RAW264.7 was obtained from the American Type Culture Collection. Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% heat-inactivated fetal bovine serum (FBS), penicillin G (100 units/mL), streptomycin (100 mg/mL), and L-glutamine (2 mM) was used to maintain the cells. The cells were incubated at 37 °C in a humidified atmosphere containing 5% CO2 and 95% air. For the cytotoxicity assay, the cells were seeded in 96-well plates at a density of 1.4 × 10^4 cells/well and were incubated for 24 h. After incubation, the cells were pretreated with different concentrations of NAB for 1 h. Then, the cells were stimulated with or without LPS (100 ng/mL) for 18 h. Cytotoxicity was analyzed by using the MTT assay. MTT solution (5 g/L) was added to each well and incubated for 4 h at 37 °C. Then, 100 μL of 10% sodium dodecyl sulfate (SDS)–HCl solution was added to the wells and incubated for another 18 h. The optical density was read at 570 nm using a microplate UV/VIS spectrophotometer. The control group, in which the cells were not treated with compounds or LPS, was set as 100% for cell viability. For the NO assay, the cells were seeded in 96-well plates at a density of 1.4 × 10^4 cells/well and were incubated for 24 h. Then, the cells were pretreated with different concentrations of NAB and the positive control drug for 1 h, respectively. LPS (100 ng/mL) was added to the culture medium and the cells were stimulated with LPS for another 18 h. After incubating the cells with drugs and LPS, the cells and the medium were collected and stored at −80 °C. NO production was measured as the nitrite concentration in the medium by the Griess reagent.
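For readers unfamiliar with the Griess readout, nitrite in the medium is typically quantified by interpolating sample absorbances against a nitrite standard curve. The short sketch below illustrates that calculation only; the standard concentrations, absorbance values, and sample readings are hypothetical, as the study does not report its raw calibration data.

```python
import numpy as np

# Illustrative nitrite standard curve (concentrations in uM, absorbance near 540 nm).
# These numbers are hypothetical placeholders, not values from the paper.
std_conc = np.array([0.0, 3.125, 6.25, 12.5, 25.0, 50.0, 100.0])
std_abs = np.array([0.05, 0.08, 0.12, 0.20, 0.36, 0.68, 1.32])

# Fit absorbance = slope * concentration + intercept, then invert for the samples.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

sample_abs = np.array([0.15, 0.42, 0.91])      # hypothetical sample readings
sample_conc = (sample_abs - intercept) / slope  # estimated nitrite in uM
print(np.round(sample_conc, 2))
```

A simple linear fit is generally adequate over the working range of this assay, which is why the sketch stops at a first-degree polynomial.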
The cells were plated in 24-well plates. The TNF-α, IL-1β, and IL-6 concentrations in the culture medium were measured by using enzyme-linked immunosorbent assay (ELISA) kits, and the PGE2 concentration in the cell supernatant was detected by the ELISA kit from Cayman Chemical. The cells in 24-well plates were collected after being treated with drugs and LPS for 6 h (for HO-1 proteins) or 18 h (for other inflammation-related proteins). RIPA lysis buffer was mixed with 1× protease inhibitor and the mixture was used to lyse the collected cells to extract total protein. For the measurement of Nrf2 protein, cells were treated with NAB (10 μM) and SFN (5 μM) for 6 h, and then the NE-PER Nuclear and Cytoplasmic Extraction Reagents were used to extract the cytoplasmic and nuclear fractions. The protein concentration was determined with the Bio-Rad Protein Assay. Thirty micrograms of each protein sample were resolved by 6% (for Nrf2 measurement), 10%, or 12% (for HO-1 measurement) sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). After electrophoretic separation, the proteins were transferred from the gel onto a nitrocellulose membrane. Then, the membrane was blocked with 5% skimmed milk and incubated with the primary antibodies and mouse antibodies specific for β-actin, α-tubulin (for p-ERK and p-p38 measurements), or laminin B1 (for Nrf2 measurement) at 4 °C overnight. After that, the membrane was incubated with IRDye 800CW goat anti-mouse IgG (H + L) or IRDye 800CW goat anti-rabbit IgG (H + L) secondary antibodies at room temperature for 1 h. The antigen–antibody complex bands were examined with an Odyssey CLx Imager and the protein expression level was quantified by using Odyssey v3.0 software. The density ratios of iNOS, COX-2, HO-1, Nrf-2, p-ERK, p-p65, and p-p38 to β-actin, α-tubulin, or laminin B1 were calculated for evaluating the anti-inflammatory effect and underlying mechanism of NAB. The cells in 24-well plates were collected after being treated with the tested drugs and LPS for 6 h (for the HO-1 test) or 18 h (for the other tests). Total RNA was isolated from cells with the NucleoSpin RNA kit. The total RNA concentration of each sample was measured by using a NanoDrop spectrophotometer. One microgram of total RNA from each sample was used for reverse transcription into cDNA by using the reverse transcription Universal cDNA Master Kit. Target RNA levels were determined by using ViiA 7 real-time PCR, where 1 μL cDNA, 2 μL primers, 10 μL SYBR Green PCR Master Mix, and 7 μL PCR-grade water were used in the PCR reaction. The denaturation step of the PCR reactions was set to 95 °C for 10 min. Forty cycles were run at 95 °C for 15 s and 60 °C for 1 min. The 2^−ΔΔCt cycle threshold method was used to normalize the relative mRNA expression levels to the internal control. The primers used in this study are listed in the accompanying table. All data are presented as the mean ± SEM of three independent experiments. The statistical analyses for these results were carried out with GraphPad Prism 7 by using one-way ANOVA followed by post-hoc analysis with Tukey's multiple comparison test to compare the differences between groups. In all cases, a level of P < 0.05 was considered statistically significant. From the above study, it has been proven that the compound NAB inhibited the activation of LPS-induced RAW264.7 cells.
It is clear that NAB increased the expression of HO-1 to reduce NO production. Also, inflammatory mediators, including NO, TNF-α, IL-1β, and IL-6, were inhibited by pretreatment with NAB. More importantly, the study found that NAB has no inhibitory effect on COX-2, suggesting that it may be safer than NSAIDs. At the same time, our study found that the inhibitory effect of NAB on NO is similar to that of the positive control drug DEX, suggesting that NAB has great potential for future drug development. In conclusion, NAB may be a potential HO-1 inducer for the treatment of inflammatory diseases. As described previously, the results suggested that NAB exerted its anti-inflammatory effects against LPS-induced inflammation via activating the Nrf2/HO-1 pathway. Also, this work adds to the pharmacological knowledge of Nardostachys chinensis and provides evidence for the use of this recently discovered natural compound in the treatment of diseases related to inflammation and oxidative stress."} +{"text": "Operators of Unmanned Aerial Systems (UAS) face a variety of stress factors resulting from both the cognitive demands of the work and its broader social context. Dysfunctional metacognitions, including those concerning worry, may increase stress vulnerability, whereas personality traits including hardiness and grit may confer resilience. The present study utilized a simulation of UAS operation requiring control of multiple vehicles. Two stressors were manipulated independently in a within-subjects design: cognitive demands and negative evaluative feedback. Stress response was assessed using both subjective measures and a suite of psychophysiological sensors, including the electroencephalogram (EEG), electrocardiogram (ECG), and hemodynamic sensors. Both stress manipulations elevated subjective distress and elicited greater high-frequency activity in the EEG. However, predictors of stress response varied across the two stressors. The Anxious Thoughts Inventory (AnTI: Wells, 1994) was used to assess dispositional worry, including meta-worry. Individual differences in resilience and stress vulnerability have profound personal consequences for life outcomes such as career success, personal relationship quality, and mental health. Recent work has demonstrated the complexity of resilience, which depends on multiple personality traits whose influence on stress outcomes varies across different demanding contexts. Personality traits for emotional vulnerability and resilience can be broadly divided into maladaptive traits that amplify harmful impacts of stressors and adaptive traits that support effective coping. Beyond broad traits such as neuroticism, theoretical considerations suggest a focus on dispositional worry and metacognition, as specified by the Self-Regulatory Executive Function (S-REF) theory. Dispositionally worry-prone individuals are vulnerable to the CAS and states of worry in performance settings. Evidence from both experimental and correlational studies demonstrates the role of metacognitions in acute stress in non-clinical samples. In correlational studies, dysfunctional metacognitions have been associated with test anxiety, maladaptive coping, and perceived stress. The current study focuses on stress response during performance of a multi-component cognitive task. A basic challenge in identifying the role of metacognition in this context is the complexity of individual differences in stress response. Resilience traits additional to metacognitive factors may also influence response.
Furthermore, the nature of the stressor may moderate the relationship between traits for resilience and stress outcomes. Findings may also depend on the stress outcome measure examined. For example, psychophysiological measures can pick up stress responses of which the person is not consciously aware. The present study investigated the stress of operating multiple UASs, aerial vehicles controlled remotely for purposes including reconnaissance and surveillance. Current military and civilian operations typically involve a two- or three-person team controlling the vehicle; in the future a single operator will control multiple vehicles with assistance from automation. Performance stress is expressed in various ways, through subjective experience, changes in neural functioning, and objective performance impairment. Subjective states experienced in performance environments may be assessed using the Dundee Stress State Questionnaire (DSSQ). The TSO framework assumes that multiple traits may moderate stress response, depending on the context. Traits for resilience refer to positive qualities supporting coping, whereas stress vulnerability traits define qualities such as worrying that are detrimental to coping. Broad trait models typically characterize positive and negative emotionality dimensions as largely independent. The construct of hardiness as a general trait for resilience emerged from studies of personality traits that might buffer the health impacts of life stressors. Definitions of grit focus on long-term persistence and maintenance of motivation during adversity. There has been rather little research on the relationship between dispositional worry, metacognition, resilience, and stress responses in complex, demanding performance environments. This lack of evidence represents a limitation of both CAS and TSO models. In the current multi-UAS control task, the participant must guide vehicles to target locations and photograph them while monitoring for vehicle health and avoiding areas of danger. In the present study, a within-subjects design was used. All participants performed under both stressors, as well as in two control conditions, one prior to each stressor. We aimed to test whether traits for stress vulnerability and resilience predicted physiological and subjective responses, utilizing a suite of sensors previously applied across a range of demanding task environments. We administered the AnTI together with hardiness and grit scales prior to task performance. Stress responses in demanding performance environments change dynamically throughout the test session. We expected that both stress manipulations would elevate subjective distress and psychophysiological indices of stress. Metacognitive factors correlate with perceived stress in the absence of an overt stressor, and traits were therefore expected to relate to stress state even in the control conditions. We tested whether traits would predict stress response over and above any associations evident in the control conditions. To do this, we computed measures of stress reactivity specific to each stressor. We expected that the AnTI would predict subjective and physiological responses to negative feedback more strongly than responses to cognitive demand, because feedback is more likely to activate the CAS due to its higher self-relevance. Accounts of hardiness and grit do not clearly link these qualities to specific stressors, so their associations with reactivity were investigated on an exploratory basis. Worry states are broadly if modestly detrimental to performance. Participants were 68 undergraduate students (Mage = 19.3 years) at the University of Central Florida. They received course credit for participation. Participants were excluded if they reported current or recent treatment for any emotional disorder, eating disorder, schizophrenia or other psychosis, stress or any related emotional condition. Those currently taking psychoactive medications were also excluded. The AnTI assesses three domains of worry: social worry, health worry, and meta-worry. The hardiness measure of resilience has 30 items, answered on 4-point response scales. The subscales are commitment, challenge, and control. The grit questionnaire includes 12 items, answered on 5-point response scales, which assess capacity to sustain effort and interest in demanding activities. Scale alphas in four samples ranged from 0.73 to 0.83. The short, 21-item version of the DSSQ assesses subjective state responses related to task engagement, distress, and worry. Items are answered on 4-point scales. Scale alphas range from 0.78 to 0.83. The NASA-TLX workload measure requires the respondent to use 0–100 scales to rate 6 sources of task load. Overall workload is calculated as an average of ratings, with performance reverse scored. The scale authors reported a test-retest reliability of 0.83. A suite of sensors used in previous studies recorded multiple psychophysiological responses. Brief descriptions are given here; see previous reports for further detail. The ABM B-Alert X10 system assessed nine channels of EEG. Following filtering and artifact removal, spectral power was averaged across three frontal sites for theta (4–8 Hz), alpha (9–13 Hz), beta (14–30 Hz), and gamma (30–100 Hz) bandwidths. EEG data were analyzed as percent change from baseline. The ABM B-Alert X10 system also recorded the ECG. Mean Inter-Beat Interval (IBI) and Heart Rate Variability (HRV) were recorded. IBI was analyzed as percent change from baseline for each task condition. HRV was calculated as the SD of all beats (measured in ms) during each condition. Hemodynamic changes in the left and right hemispheres of the prefrontal cortex were measured using Somanetics' INVOS Cerebral/Somatic Oximeter. The fNIR method analyzes the spectral absorption of NIR light by brain tissue. Regional oxygen saturation (rSO2) during each condition was calculated as the percent change from baseline. Cerebral blood flow velocity (CBFV) in the left and right hemisphere middle cerebral arteries was measured using Spencer Technologies' ST3 Digital Transcranial Doppler system. The system transceiver emits ultrasound pulses that are reflected back to the sensor from the moving blood cells; velocity is calculated from analysis of the Doppler shift in frequency. CBFV was calculated as the percent change from baseline. We used the Java-based "Research Environment for Supervisory Control of Heterogeneous Unmanned Vehicles" (RESCHU) multi-UAV simulator developed by the Humans and Automation Lab at the Massachusetts Institute of Technology. In the negative feedback stressor condition, the same task configuration was used, but scripted feedback referring to participants' performance was provided in the mission window every 30 s. Approximately two-thirds of the feedback statements were negative; the remainder were neutral ("You are performing adequately"). Messages were presented in a pseudo-random sequence unrelated to actual performance. This manipulation was expected to activate the CAS in vulnerable individuals.
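As a concrete illustration of the feedback schedule just described (one message every 30 s, roughly two-thirds negative, pseudo-random order unrelated to performance), the following sketch generates such a sequence. It is only an illustration under stated assumptions: the wording of the negative statement is hypothetical, since the text quotes only the neutral message.

```python
import random

def feedback_schedule(trial_s: int = 600, interval_s: int = 30, seed: int = 1):
    """Pseudo-random feedback sequence: ~2/3 negative, remainder neutral."""
    n_msgs = trial_s // interval_s              # 10-min stressor trial -> 20 messages
    n_neg = round(n_msgs * 2 / 3)               # approximately two-thirds negative
    neg = "Your performance is below average."  # hypothetical wording
    neu = "You are performing adequately."      # wording quoted in the text
    msgs = [neg] * n_neg + [neu] * (n_msgs - n_neg)
    random.Random(seed).shuffle(msgs)           # order unrelated to actual performance
    return [(i * interval_s, m) for i, m in enumerate(msgs, start=1)]

for t, m in feedback_schedule()[:4]:
    print(f"{t:>4}s  {m}")
```

Fixing the ratio in advance and shuffling, rather than sampling each message independently, guarantees that every participant receives the same proportion of negative feedback.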
In the cognitive demand stressor condition, cognitive demands were increased by increasing the number of UASs and the numbers of targets and hazards, and by decreasing the time for which each target was available. In this condition, participants controlled six UASs, with 18 targets and 14 hazards consistently present on the screen. Targets expired after 45 s, hazards after 5 s. Measures of performance effectiveness were (1) the command ratio, the number of targets engaged divided by the number of targets assigned, and (2) search accuracy, the number of objects located divided by the number of targets engaged. We also assessed (3) waypoints added, the total number of waypoints set in routing vehicles to targets. Stress manipulations were similar to those used in previous work. Following an informed consent interview, participants completed questionnaires including the AnTI, hardiness and grit scales, and the pre-task DSSQ. The physiological sensors were then attached and data recording quality was verified. Participants watched a blank screen for 5 min during which baseline physiological measures were secured. Participants then received training on the task. They viewed a PowerPoint slideshow which explained the nature of the task and then practiced on the lower cognitive demand version of the task. Performance was monitored by the experimenter to ensure participant competence was sufficient to move on to the main part of the task. Participants then performed a sequence of four trials in one of two orders: either control – negative feedback – control – high demand, or control – high demand – control – negative feedback. Thus, each stressor was preceded by its own control condition. Order was counterbalanced across participants. Stressor trials were 10 min in duration; control trials were 5 min. After each trial, the participant completed the NASA-TLX and a post-task DSSQ. Finally, physiological sensors were removed and participants were debriefed. The study provided an extensive data set. Thus, analyses were targeted to address the four research issues previously identified, and they are presented as follows. First, we verified that the two stressors were effective in eliciting stress responses, and we ran ANOVAs to test whether they elicited different patterns of stress response. Second, we computed correlations between the various traits and stress states in relatively undemanding conditions, i.e., at baseline and in control conditions. This analysis tested whether the AnTI correlates with stress in the absence of an overt stressor. Third, we computed correlations between traits and the stress reactivity measures for the cognitive demand and negative feedback conditions, testing whether the AnTI specifically predicts stress response to feedback, as hypothesized. Fourth, we focused on the role of meta-worry as a moderator of responses to negative feedback. We used a regression approach to test for interactions between AnTI meta-worry and subjective worry state in predicting objective performance and physiological outcomes, testing whether meta-worry controls whether or not worry states are maladaptive. Dependent stress response measures were the three DSSQ scales, NASA-TLX workload, and the psychophysiological measures from EEG, ECG, fNIR and TCD. A 2 × 2 (stress level × stress type) repeated measures ANOVA was run for each one. A significant main effect of stress level, with no interaction, implies that both stressors influenced the response measure.
A significant interaction indicates a differential effect of stressors on the measure. The significant effects in this analysis are summarized in the accompanying table. A similar analysis of performance measures showed significant stressor effects on all three performance measures. The command ratio was lower in the high demand condition (M = 0.57, SD = 0.10) compared to the negative feedback condition, the control condition for high demand, and the control condition for negative feedback. Search accuracy (proportion correct) was lower in both the high demand condition and in the negative feedback condition relative to the respective control conditions. The number of waypoints set was higher in the high demand condition than in the negative feedback condition or in the two respective control conditions. This last effect primarily reflects the need to set more waypoints when there is a larger number of vehicles to direct. Correlations between corresponding scales in the two control conditions included 0.66 (distress) and 0.73 (worry), showing that individual differences were fairly consistent across the two conditions. The AnTI scales remained significantly positively correlated with state worry, but associations with task engagement were non-significant. DSSQ correlates of grit and hardiness were similar to those at baseline, with some differences in detail; for example, in the control conditions, both traits were significantly negatively correlated with state worry. Correlations between the trait scales and psychophysiological measures in the control conditions were also calculated, but significant associations were few, and did not suggest any clear relationship between the traits and stress responses (data are available from the authors on request). We calculated residualized indices of reactivity by regressing each subjective and physiological stress response measure for the two stressor conditions against the same measure in the matched control condition. For example, state worry for the negative feedback condition was regressed against state worry in the preceding control condition, and the standardized residual was calculated. The residual expresses the extent to which the measure is higher or lower than its value in the control condition predicts. Cross-stressor correlations in residuals were all non-significant, e.g., the three DSSQ residual correlations ranged from 0.08 to 0.18. Comparable correlations for residuals for selected psychophysiological measures are provided in the accompanying table. It was hypothesized that individuals high in AnTI meta-worry would be more likely to show maladaptive responses with increasing state worry, relative to those low in meta-worry. Given the theoretical rationale for meta-worry being more likely to influence stress response to negative feedback than to cognitive demand, along with the preceding analyses, this hypothesis was tested only in the negative feedback condition, using a regression approach. Each performance and psychophysiological variable was treated as the dependent variable in turn. The dependent variable was predicted from linear terms for AnTI meta-worry and DSSQ state worry in the negative feedback condition, and the centered product term representing the interaction. In the analyses of performance, there were no significant linear or interactive effects for the command ratio or search accuracy measures. However, for waypoints added, the interaction was significant (p < 0.05), though not the linear terms. The regression lines for individuals 1 SD above and below the mean are plotted in the accompanying figure. For the physiological variables, the meta-worry × state worry interaction was significant for the left hemisphere fNIR rSO2 response, the right hemisphere fNIR rSO2 response, EEG beta, and EEG gamma. Linear terms were non-significant in all cases. The interactions for fNIR resemble those for waypoints added (p < 0.10 in both equations). Traits for resilience predicted subjective and physiological responses to negative feedback and cognitive demand stressors in a multi-UAS control simulation. As expected, worry traits, including meta-worry, were generally associated with higher levels of situational stress, whereas hardiness and grit appeared protective. The data also revealed more subtle relationships between traits and stress outcomes. As predicted, the AnTI was predictive of stressor reactivity primarily in the negative feedback condition, consistent with cognitive-attentional theory. Both stressors elicited higher state distress, as expected, but the effect was larger for cognitive demand. High workload plays a major role in provoking the subjective distress response in task performance contexts, as the person appraises the task as uncontrollable and utilizes multiple forms of coping to manage overload. Responses to stressors were less differentiated at the physiological level, with both eliciting increased power in high-frequency EEG bands. Both stressors also elevated HRV, a somewhat unexpected finding given that increased workload typically reduces this index. Phasic HRV increases may reflect emotion-regulation and successful engagement of cognitive inhibitory processes. Overall, the findings suggest that both manipulations induced substantial subjective stress, but not the classical sympathetic arousal response, given that there was no stressor effect on mean heart rate. Instead, the marked increase in high-frequency EEG power suggests a more "cognitive" expression of stress that may reflect performance concerns and, as suggested by the HRV responses, efforts at emotion-regulation. Previous studies found that trait worry predicts a range of stress outcomes. Total scores on the hardiness and grit scales were both negatively associated with AnTI meta-worry. We cannot make causal inferences from cross-sectional data, but these associations are at least compatible with a role for dysfunctional metacognitions in undermining resilience. Hardiness and grit both support persistence in the face of adversity through active coping with obstacles to personal goals. The study tested whether traits were associated with reactivity to the two stressors, over and above any general tendency toward higher levels of stress. Reactivity to stressors was assessed using residualized measures capturing the unique response to the stressor concerned. Consistent with the TSO framework, cross-stressor correlations in reactivity were low, suggesting that stress responses were specific to the stressor concerned. Hardiness correlated with reactivity to both stressors, but it was generally more predictive of response to negative feedback than to cognitive demand. Total hardiness was associated with attenuated distress and worry responses to the feedback manipulation, the study stressor more likely to promote self-evaluation. Hardiness is associated with styles of appraisal and coping that are adaptive in a performance setting. Grit predicted higher task engagement and lower distress under high demand; the motivational qualities associated with grit may be especially important under these circumstances. Task engagement is associated both with intrinsic motivation and striving for performance excellence.
The traits were more weakly associated with physiological measures of stressor reactivity than with the subjective ones, but the negative feedback EEG data were notable for the consistent set of associations between higher AnTI scores and lower theta and higher gamma response. Theta and gamma may be functionally inter-related, based on evidence for cross-phase coupling. Results thus far discussed suggest that AnTI trait worry showed a distinctive pattern of associations with stress outcomes, including generally higher state worry along with a more specific subjective and EEG response to negative feedback that may indicate poor emotion-regulation. However, these findings do not indicate a specific adaptive role for metacognition, i.e., meta-worry. The final set of analyses aimed to investigate the role of meta-worry in maladaptive stress outcomes by testing whether it moderated objective correlates of state worry. A moderator effect of meta-worry was found for the number of waypoints used, but not for the two overall performance measures. Behaviorally, in high meta-worry persons, state worry appeared to reduce task-directed effort, i.e., setting simpler paths to avoid hazards. By contrast, those low in meta-worry seemed to try harder as they became more worried. For these individuals, the worry state may be adaptive in motivating task effort and coping, blocking development of the CAS. The physiological findings are consistent with this explanation. fNIR measures are indicative of task workload. More generally, the findings suggest a re-evaluation of the functional significance of worry in performance environments. Typically, worry is seen as a detrimental influence, as in classic studies of cognitive interference and test anxiety. The current study identifies metacognition as a critical determinant of the consequences of worry. A somewhat comparable moderator effect has been reported in previous research. The present findings support the central proposition of S-REF theory that meta-cognitive dysfunction is a major driver of worry states. The current study used a student sample asked to perform a complex task simulation following a relatively short training and practice period. Generalization of findings to samples of expert UAS operators is thus questionable. Greater skill and experience may attenuate stress response. There are also issues related to stress assessment. To keep the data analysis tractable, we calculated responses averaged across each task condition, but there may have been considerable variation in stress within each condition. Further research might test the role of metacognitive style in response to discrete, high-stress events. The experiment was also not designed to investigate dynamic stress processes, such as changes in coping strategy within experimental conditions. The study exemplifies a multivariate assessment approach that specifies a profile of subjective and objective stress response across multiple measures. The current study confirms that traits for worry, hardiness and grit predict stress response in a complex multi-UAS control environment. Findings support the central tenet of the TSO framework that resilience and stress vulnerability reflect multiple traits whose influence varies with the performance context. From an applied standpoint, the data support multifactorial assessments of populations required to perform complex or otherwise stressful tasks, including military populations.
The various stressors prevalent in the UAS environment may elicit different patterns of stress response in different individuals. Profiling strengths and vulnerabilities may also allow training to be tailored to the individual to optimize resilience. This study was carried out in accordance with the recommendations of the University of Central Florida Internal Review Board, with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the University of Central Florida Internal Review Board. All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "At a microscopic scale, the shape and fine cell relief of the petal epidermal cells of a flower play a key role in its interaction with pollinators. In particular, conical shaped petal epidermal cells have been shown to have an important function in providing grip on the surface of bee-pollinated flowers and can influence bee visitation rates. Previous studies have explored interspecific variation in this trait within genera and families, but naturally-occurring intraspecific variation has not yet been comprehensively studied. Here, we investigate petal epidermal cell morphology in 32 genotypes of the crop Vicia faba, which has a yield highly dependent on pollinators. We hypothesise that conical cells may have been lost in some genotypes as a consequence of selective sweeps or genetic drift during breeding programmes. We find that 13% of our lines have a distribution of conical petal epidermal cells that deviates from that normally seen in V. faba flowers. These abnormal phenotypes were specific to the adaxial or abaxial side of petals, suggesting that these changes are the result of altered gene expression patterns rather than loss of gene function. This has led to a magnificent array of flower colours, scents and shapes which maximise the reproductive fitness of species with diverse pollinators. One particular cell morphology that influences the interaction of a flower with its pollinators is the presence of conical petal epidermal cells. These cone-shaped cells are found on the petals of 75–80% of angiosperms analysed. Bees have been shown to use these cells for grip when handling flowers. Bilaterally symmetrical flowers such as those found in most legumes are particularly interesting when investigating the function of petal epidermal cell morphology because of the specific way pollinators interact with these petals. Fabaceae flowers are generally organised into three petal types: the dorsal standard, lateral wing and ventral keel petals. The wing and keel petals are joined at their base by petal folds. During a legitimate visit, a bee alights on the wing petals and pushes downwards on the wing petals to allow access to the nectar at the base of the flower and pollen contained on the anthers and within the keel petals. A large-scale analysis of flower epidermal cell morphology in the Fabaceae identified six main categories of cell types. In several species in which a pollinator transition has occurred (Lotus spp. L. (Fabaceae), Anagyris latifolia Brouss. ex Willd. (Fabaceae), Navaea phoenicea Webb & Berthel., Isoplexis spp. (Lindl.) Loudon (Plantaginaceae), and Canarina canariensis (L.) Vatke (Campanulaceae)), the transition is associated with the loss of conical cells. Here, we characterized the petal epidermal cells of 32 genetically distinct lines of V. faba
and asked (i) which cell types are present within V. faba flowers, and (ii) is there variation in the distribution of conical petal epidermal cells between genetically distinct lines of V. faba? Crop plants present an ideal opportunity to explore the presence of intraspecific variation in petal epidermal morphology, because many independent genotypes are retained in stock centres for commercial breeding. Crops such as the field bean depend on pollinators for maximum yield. To determine the level of variation in epidermal cell morphology within V. faba, we randomly selected 32 lines from the seed collections at the National Institute of Agricultural Botany (sources in Table S1). These lines had been self-pollinated for at least 5 generations and therefore should be homozygous at the majority of loci. The majority of the lines were white with black wing-petal spots, as is typical for field bean flowers. However, lines NV175, NV643, NV644, NV676 and NV868 lack wing-petal spots, and are pure white. Line NV706 had a crimson flower with dark wing-petal spots. Vouchers for specimens of a plant from each line used in the study were deposited in the University of Cambridge herbarium, with the voucher numbers CGE33556 – CGE33587 (Table S1). For each line, one flower was analysed to represent that genotype, as petal epidermal cell type has never been shown to be influenced by environment. The pollinator-interacting wing and standard petals were imaged for all 32 lines, focusing on the distribution of conical cells in these petals. For a subset of five of these lines a more in-depth analysis of the cell types present was undertaken, including of the keel petals. Dental wax casts of fresh fully open flowers were made for both the adaxial and abaxial surface of all petals of interest by pressing each petal into freshly mixed wax and then peeling the petal away once the wax was set. This method preserves the native structure of the petal surface and reduces the risk of introducing artefacts compared to tissue preparation processes that use dehydration. From these, epoxy-resin replicas were produced using 2 Ton Epoxy (Devcon), and sputter coated with gold or platinum using a Quorum K756X sputter coater. Surface replicas were examined using a FEI Philips XL30 Scanning Electron Microscope. Petals were surveyed for the absence or presence of conical cells at the apical part of the cell and the distribution of the cell types noted. Epidermal cell types were classified into discrete categories following published classifications. All three petal types of V. faba – standard, wing and keel petals – had specific categories of cell types associated with them, so that they could be discriminated on the basis of their epidermal cell morphology. As the aim of this study was to identify intraspecific variation in traits which may affect the pollination of V. faba, we focused on conical cells due to their known potential to affect bee preference. Of the genotypes surveyed, the majority showed the typical distribution of conical cells, but a minority of lines deviated from it (Figs. 3 and 4). Some of the lines with abnormal conical cell distributions lacked wing-petal spots. However, the two other 'non-spotted' lines surveyed, NV643 and NV644, did not have abnormal distributions of conical cells. During our survey of variation in conical cell production across different genotypes, we also noted more subtle differences in cell shape between different lines. In some lines, such as NV706, typical conical cells were interspersed with more rounded cells that lacked striations.
Furthermore, the ratio between cell height and width appears to vary widely, even within a flower, as seen on the wing petals of NV673, where the conical cells appear to have roughly a height-to-width ratio close to one on the adaxial surface but much greater than one on the abaxial surface. During a detailed study of five genotypes, we built on the previous examination of V. faba floral epidermal cell morphology. Petal cell epidermal morphology has an important function in mediating the interaction of a flower with pollinators as well as potential antagonists such as nectar robbers. Variation in petal epidermal morphology has also been related to the functional groups of pollinators visiting a species, for example in Echium wildpretii. Within the Fabaceae, Lotus japonicus (Regel) K.Larsen and Trifolium repens L. have been found to have a substantially different distribution of conical cells. This level of variation is lower than that seen at the intrageneric level: previous studies report an average of ∼40% of petal epidermal phenotypes deviating from the most common phenotype within a genus. Changes in the distribution of conical cells reported in another legume, such that they are lost from the standard petal and located only towards the base of the abaxial wing petal surface, are associated with changes in the timing of expression of CYC2 genes. In our lines, the abnormal phenotypes were petal specific, rather than affecting the entire flower. This suggests that the genetic basis of these changes lies within alteration of gene expression patterns rather than loss or gain of protein function. Conical cell development is regulated by members of the Subgroup 9 R2R3 MYB transcription factor family, containing the MIXTA-like genes. In V. faba, pollinators tend to contact the abaxial surface of the wing petal and occasionally the adaxial surface of the standard petal when foraging legitimately on the flower; conical cells were concentrated on these surfaces, where they were the major cell type. Data were collected by E.J.B. Both E.J.B and B.J.G. conceived the study and drafted the manuscript. Both authors gave final approval for publication. The authors declare no competing financial interests."} +{"text": "It is very rarely encountered during appendectomy. The aim of this paper is to report a case of acute appendicitis caused by Enterobius vermicularis. A 23-year-old housewife presented with a right lower abdominal pain for the past 8 h. Clinical examination revealed right iliac fossa tenderness upon palpation and rebound tenderness upon release. The patient was diagnosed as a case of suspected acute appendicitis. The patient was anesthetized and intubated. Delivery of the vermiform appendix was done through a right grid-iron incision. Intra-operatively an inflamed appendix obstructed by Enterobius vermicularis was noted. Enterobius vermicularis can inhabit the appendix and induce the signs and symptoms of A.A with or without actual histopathological acute appendicitis. The treatment of choice is surgical resection of the appendix. Regarding appendiceal helminths, recent literature concentrates mainly on the pathological changes caused by the presence of intraluminal parasites. In this paper, we report a case of acute appendicitis caused by Enterobius vermicularis and go over the literature briefly. The case has been reported in line with the SCARE guideline. Acute appendicitis (AA) represents one of the commonest causes for emergency operations worldwide, with a cumulative lifetime incidence rate of 9.0%, accounting for a significant portion of intraabdominal conditions.
During the 20th century, the disease was mostly reported within the Western countries; however, a rise in its incidence has been noted within newly industrialized countries in the 21st century. Parasitic infestation is an uncommon cause of appendicitis, and the parasite most often implicated is the pinworm. A 23-year-old housewife patient presented to the Emergency Department with right lower abdominal pain for the past 8 h, with concomitant anorexia, nausea and vomiting twice. Other than having a mild fever, the patient had normal vital signs. Clinical examination revealed right iliac fossa (RIF) tenderness upon palpation and rebound tenderness upon release. Other signs like Rovsing and pointing signs were positive as well. Complete blood count revealed mild leukocytosis, while other routine investigations like urinalysis, blood urea and serum creatinine were not remarkable. Abdominal and pelvic ultrasound reported no unusual findings and pregnancy was excluded through a blood test. The patient was diagnosed as a case of suspected acute appendicitis (S.A.A.) and was then taken into the operation theatre. A pre-operative prophylactic antibiotic was given. The patient was anesthetized with general anesthesia (GA) and intubated. Delivery of the vermiform appendix was done through a right grid-iron incision. Intra-operatively, an inflamed appendix obstructed by Enterobius vermicularis was noted (Fig. 1, Fig. 2). The patient recovered without any complications and was transferred to the surgical ward for observation. Within 8 h, she passed flatus and started oral feeding. After 24 h, she was sent home in a good health condition and scheduled for a visit after 8 days, at which point she was healthy and the wound stitches were removed. Enterobius vermicularis is known by many names, and the first description of human infestation dates back nearly 10,000 years. However, it was Fabrius in 1634 who first described involvement of the worm in appendicitis. Once E. vermicularis reaches maturity, it stays and reproduces in the terminal ileum, caecum, appendix and ascending colon. The lifecycle of the male worm ends after fertilization and it dies, while the female must migrate to the anal canal to lay eggs. The lifespan of Enterobius vermicularis (pinworm) is between 2 and 5 weeks. Although the relationship between E. vermicularis and the pathogenesis of appendicitis has been studied for many years, the influence of the parasite in inducing inflammation is still unclear. Although E. vermicularis (pinworm) may have a role in causing appendiceal discomfort or appendiceal chronic inflammation due to obstruction, the majority of cases have no acute inflammation. One agent used for Enterobius vermicularis treatment blocks neuromuscular depolarization, making the worm undergo spastic paralysis through continuous nicotinic activation; ultimately the worm detaches from the host and consequently is expelled through defecation. In composite, Enterobius vermicularis can inhabit the appendix and induce the signs and symptoms of A.A with or without actual histopathological acute appendicitis. The treatment of choice is surgical resection of the appendix. No source to be stated. Approval has been taken from the Kscien centre. Consent has been taken from the patient and the family of the patient. Zuhair D. Hammood and Abdulwahid M. Salih: surgeons who performed the operation and follow-up. Shvan H. Mohammed, Fahmi H. Kakamad, and Karzan M.
Salih: writing the manuscript and follow-up. Diyar A. Omar, Marwan N. Hassan, Shadi H. Sidiq, Mohammed Q. Mustafa, Imad J. Habibullah, and Drood C. Usf: literature review and final approval of the manuscript. Not applicable. Fahmi Hussein Kakamad. Not commissioned, externally peer-reviewed. There is no conflict to be declared."} +{"text": "In this short paper we explain the reasons why preventing and treating lameness in farmed animals can and should be considered a legal requirement under European Union (EU) animal welfare law. We also briefly present the situation in different farming sectors. We make the case that, in order to comply with current EU farmed animal welfare law, lameness prevalence and severity should be regularly monitored on farm, and species-specific alarm thresholds should be used to trigger corrective actions. Lameness is the clinical manifestation of a range of painful locomotory conditions affecting many species of farmed animals. Although these conditions have serious consequences for animal welfare, productivity, and longevity, the prevention and treatment of lameness continue to receive insufficient attention in most farming sectors across the European Union (EU). In this paper, we outline the legislative framework that regulates the handling of lameness and other painful conditions in farmed animals in the EU. We briefly outline the current situation in different livestock farming sectors. Finally, we make the case for the introduction of regular on-farm monitoring of lameness and for the setting of alarm thresholds that should trigger corrective actions. Whilst there are no species-specific EU welfare Directives for dairy cows, beef cattle, or small ruminants, this does not mean that these animals have no legal protection. They are covered by Directive 98/58 on the protection of animals kept for farming purposes. Article 3 provides that farmers must "take all reasonable steps to ensure the welfare of animals under their care and to ensure that those animals are not caused any unnecessary pain, suffering or injury". Although written in broad terms, this is a legally demanding provision requiring the taking of "all" reasonable steps to "ensure" welfare and the avoidance of unnecessary pain, suffering, or injury. The European Commission has said—correctly, in our view—that to interpret this provision, one must look at the science and in particular at the reports produced by the European Food Safety Authority (EFSA). Pigs and chickens are covered by species-specific Directives, but the General Farm Animals Directive also applies to them. Accordingly, pig and chicken farmers must also take all reasonable steps to achieve low levels of lameness. Addressing lameness is important from a legal perspective as its underlying conditions are painful and therefore require treatment. Additionally, lameness has a direct economic impact on farmers' income as it compromises productive performance and can substantially decrease animal longevity. Lameness is widely recognized as one of the major welfare problems for dairy cows as regards prevalence, duration and magnitude of adverse effect. Fattening pigs and breeding sows are frequently affected by lameness and other locomotory disorders. Sheep farming systems vary from extensive to intensive (or mixed). Lameness occurs in all systems and has been classified by EFSA among the three major animal welfare challenges for sheep.
Due to genetic selection for fast growth, the broiler chickens used in today's intensive sector reach their slaughter weight three to four times as quickly as in the 1950s. Owing to this rapid growth, lameness is widespread in broiler flocks. In parts of the EU, beef cattle farming is characterized by indoor confinement on hard flooring during the fattening phase, with energy-dense diets that are high in maize. Coupled with the genetic selection for rapid weight gain, these conditions predispose the animals to develop metabolic and joint disorders that can result in lameness. Due to several concurring factors, lameness has become an important production disease in livestock farming, affecting many terrestrial farmed animal species. If left untreated, lameness can compromise animal health and welfare and can even lead to premature culling or death of the animals. To comply with EU law on the protection of animals kept for farming purposes, farmers must take "all reasonable steps to ensure the welfare of animals under their care and to ensure that those animals are not caused any unnecessary pain, suffering or injury". The taking of effective steps to prevent lameness and the timely treatment of animals presenting signs of lameness are certainly part of this legal obligation. Various strategies could be foreseen to encourage the prevention and treatment of lameness. Validated locomotion scoring systems have been developed for all the main farmed species, with guidance for their use on farm. One option is a bonus-malus system whereby a prevalence of lameness above a certain threshold (≥10%) is penalized and a low prevalence (≤2%) is given economic incentives. With a view to stimulating a more proactive attitude towards preventing and treating lameness, the industry and Member States should adopt a range of strategies tailored to the different farming sectors. The European Commission should produce a formal Recommendation (as they have done for the prevention of routine tail docking in pigs) advising on the prevention and treatment of lameness."} +{"text": "Currently, bladder cancer (BC) represents a challenging problem in the field of Oncology. The high incidence, prevalence, and progression of BC have led to the exploration of new avenues in its management, in particular in advanced metastatic stages. The recent inclusion of immune checkpoint blockade inhibitors as a therapeutic option for BC represents an unprecedented advance in BC management. However, although some patients show durable responses, the fraction of patients showing benefit is still limited. Notwithstanding, cell-based therapies, initially developed for the management of hematological cancers by infusing immune or trained immune cells or after the engineering of chimeric antigen receptor (CAR) expressing cells, are promising tools to control, or even cure, solid tumors. In this review, we summarize recent cell-based immunotherapy studies, with a special focus on BC. Bladder cancer (BC) is the fourth-leading type of cancer for estimated new cases in males in the U.S. in 2021 and eighth in estimated death cases.
Although non-muscle-invasive bladder cancer (NMIBC) shows a favorable prognosis, it also displays one of the highest incidences of recurrence (60–70%) and, in some cases, progression into muscle-invasive disease. Therapies for advanced BC remained unchanged for several decades until the recent arrival of immune checkpoint inhibitors (ICI), which represent an unprecedented advance in the management of this type of cancer. Indeed, the use of immune reactivation is not new in BC, as in high-risk NMIBC patients, the treatment includes intravesical instillation with Bacillus Calmette–Guerin (BCG) after transurethral resection. This treatment, which has become the gold standard for NMIBC since the 1970s, produces a local inflammatory response, mainly driven by the innate immune system, which prevents recurrences and progression of NMIBC. In this review, we summarize immunotherapy studies carried out in BC. Since most of the immunotherapy treatments for BC patients are non-cell-based, we review those widely used therapies. However, taking into account that non-cell-based immunotherapies fail in some patients, cell-based immunotherapies are being developed as an alternative for the treatment of those BC patients. We discuss immunotherapy using innate and adaptive immune cells, with a special focus on engineered chimeric antigen receptor (CAR)-T lymphocytes (T cells) and their improvement as a tool to cure BC. Deep knowledge of the immune system and its role in fighting cancer is essential for the development of cancer immunotherapies. Different non-cell-based immunotherapies have been tested, such as cytokines, immune-modulating drugs, vaccines, and antibodies. ICI are monoclonal antibodies that block immune checkpoint proteins, which prevent the immune evasion of cancer cells. Other non-cell-based immunotherapy strategies are being developed with promising results, for example, T cell-engaging bispecific antibodies (BiAbs). BiAbs bind to the tumor cell via a tumor-associated antigen (TAA) and also to the T cell receptor CD3 subunit, inducing T cell recruitment and target cell killing. In recent years, the optimization of technologies allowing efficient immune cell enrichment and expansion in vitro has been essential to deliver these cellular products into patients and apply cell-based immunotherapies in the clinic. Cell-based immunotherapies can target innate or adaptive immune cells. Naturally, immature dendritic cells (DCs) are able to take up exogenous antigens, migrate to lymph nodes, mature, and present those antigens to T cells together with co-stimulatory signals, which triggers T cell activation. The most common clinical treatment using DCs in urological malignancies is ex vivo antigenic peptide loading followed by autologous infusion. In 2001, an autologous DC vaccine pulsed with the tumor antigen melanoma-associated antigen 3 (MAGE-3), commonly expressed in advanced BC, was synthesized to bind specifically to HLA-A24. These loaded DCs generated a tumor-specific cytotoxic T lymphocyte (CTL) response against a MAGE-3-expressing bladder cancer cell line. An optimized approach has used DCs transduced with the secondary lymphoid-tissue chemokine (SLC) and human interleukin-2 (IL-2) genes. Natural killer (NK) cells are innate immune cells able to directly recognize and kill tumor cells. When the equilibrium between positive and negative signals is disrupted in a tumor cell as a consequence of NK activating ligand upregulation and loss of inhibitory signals, NK cells induce tumor cell lysis by granzymes and perforins or via apoptosis induction.
As previously stated, BCG immunotherapy is the most common treatment in NMIBC patients, and the immune response associated with its effect has been well studied. Although BCG produces an anti-tumor environment affecting the innate immune system, it has been reported that BCG instillations in NMIBC patients also induce immune anti-tumor responses mediated by CD4+ T cells and CD8+ cytotoxic T lymphocytes. This suggests a key role of T cells in BC anti-tumor defense, leading to the exploration of possible ICI and BCG combinations or the use of ICI after BCG failure. T cells can also be redirected against tumors by engineering them to express tumor-reactive TCRs or CARs. TCRs are natural receptors for antigen recognition presented via major histocompatibility complex (MHC) molecules on APCs. TCRs known to be reactive against a specific tumor antigen are usually obtained from TILs and cloned into T cells. CARs are designed to recognize a specific tumor antigen by their extracellular domain, composed of monoclonal antibody-derived scFvs. Recently, CAR technology has been developed and applied in patients with hematological cancers with high rates of total remission. CAR-T cell therapy has been tested in a few clinical trials to treat urological malignancies (clinicaltrials.gov, accessed on 10 March 2021). A phase I/II study of the treatment of metastatic cancer that expresses MAGE-A3, including BC, using lymphodepleting conditioning followed by the infusion of anti-MAGE-A3 HLA-A*01-restricted TCR-gene engineered lymphocytes and aldesleukin was not concluded due to insufficient accrual; therefore, no statistical conclusions could be drawn (NCT02153905). A phase I/II study using fourth-generation CAR-T (4SCAR-T) cell therapies in advanced or metastatic urothelial BC patients who have no further treatment available is now in the recruiting stage (NCT03185468). Fourth-generation CAR-T cells against PSMA or Fos-related antigen (FRA) were evaluated in terms of side effects and effective doses in treating refractory and recurrent solid tumors. Another phase I study in the recruiting stage is based on the combination of HER2-specific autologous CAR-T cell treatment with the injection of the CAdVEC oncolytic adenovirus, which was designed to help the anti-tumor immune response (NCT03740256). Finally, an active clinical study to evaluate CCT301-59 CAR-T cell therapy in patients with recurrent or refractory solid tumors, including BC, on the basis of safety, tolerability, and anti-tumor activity has been started (NCT03960060). At the moment, there are several clinical studies in progress to test the efficacy and safety of T cell therapy, alone or in combination with other therapies, in several solid tumors such as BC. γδ T cells have recently drawn interest as innovative cellular cancer immunotherapies. Human γδ T cells have two main advantages for their anti-tumor use. NKT cells are a mixed population of NK and T cells that co-express an αβ T cell receptor in addition to cell surface markers of NK cells such as NK1.1, CD16, and CD56. Although CAR-NK and CAR-Macrophages (CAR-Ms) have not been used in the treatment of genitourinary cancer yet, both CAR-based cell therapies have shown remarkable results in other cancer types. The safety of CAR-NK cells is higher compared with CAR-T cells, due in part to their limited lifespan in circulation and the fact that the cytokines released by NK cells are not highly associated with CRS.
Moreover, xenogeneic cell-based therapy consists of the implantation or infusion into human body fluids, tissues, or organs of viable somatic cell preparations of non-human animal cells, as defined by the European Medicines Agency (EMEA) in 2009. A common non-human animal cell source for xenogeneic cell therapy is pigs, which are used to restore lost physiological tissue function and repair wounds caused by cancer. The major drawback of xenotransplantation is the immunological rejection of the organ, tissue, or cell grafts. As we have described in this review, several promising options for cell-based therapies of BC have been developed. Although most of the studies were performed using T cells, and specifically CAR-T cells, other alternative treatments using innate immune cells such as DC or NK cells have been tested in pre-clinical models or even in patients. CAR-T cell therapy, which is very effective in the treatment of blood cancers, showed different safety and efficacy drawbacks in solid tumors and in bladder cancer. The improvement of fourth-generation CAR-T cells and their use in combination with other treatments such as ICI, cytokines, or neoantigen-based vaccines will hopefully improve therapy response in BC patients in the future."} +{"text": "In Malaysia, breast cancer is the most common cancer among women. As such, early diagnosis and screening practices are important to increase the survival rate. Breast self-examination (BSE) is one of the main screening methods for breast cancer. Socio-demographic characteristics and knowledge of breast cancer play crucial roles in determining women's behavioral adoption of BSE. This study aims to assess the relationship of socio-demographic factors and knowledge of breast cancer on the stage of behavioral adoption of BSE among Malaysian women in Kuantan, Pahang. A cross-sectional study was conducted on 520 women from three different government health clinics in Kuantan and the IIUM Family Health Clinic from February to April 2018. Data were collected using a self-administered questionnaire on socio-demographic factors and knowledge of breast cancer and its effect on the behavioral adoption of BSE. A significant difference was found between socio-demographic characteristics and the behavioral adoption of BSE. However, of the knowledge-of-breast-cancer constructs, only the method of breast screening and the best time for screening were found to be significantly associated with the behavioral adoption of BSE. It was found that most women in Kuantan, Pahang perform BSE but were still unaware of the importance of performing BSE for early breast cancer detection. This study was expected to enhance women's awareness of the benefits of performing BSE. The study was conducted from February to April 2018 amongst 520 Malaysian women living in Kuantan, Pahang aged 35 to 70 years old. Based on the calculation, a sample size of 520 respondents was required, with the respondent criteria as tabulated in the study. A multistage, cluster-stratified random sampling method was carried out to obtain the appropriate sample size. In the first stage, a cluster sampling method was implemented by randomly selecting three sub-districts in Kuantan, Pahang. Following that, a stratified sampling method was carried out to randomly select the polyclinic in each sub-district. Thus, Klinik Kesihatan Beserah from Beserah and Klinik Kesihatan Balok from Sungai Karang were selected.
However, two polyclinics, Klinik Kesihatan Kuantan and the IIUM Family Health Clinic, were randomly selected from Kuala Kuantan, since that region was larger and had more residents compared to Beserah and Sungai Karang. The sample size was calculated using a single proportion formula based on the assumption of a 5% type 1 error. 2.3. The respondents were briefed about the study beforehand, and willingness to fill in the questionnaire was considered as consent to participate in the study. The respondents were also notified that their participation in the study was voluntary and that they could withdraw from the study at any time. Ethical principles were followed throughout the study and ethical approval was acquired from the Kulliyyah Postgraduate Research Committee (KPGRC), followed by the IIUM Research Ethics Committee (IREC) and the Medical Research and Ethics Committee (MREC). 2.4. The questionnaire was constructed based on a review of previous research literature on BSE, the stage of behavioral adoption of BSE and knowledge of BSE. Five health professional experts, including two professors, a radiologist specializing in diagnosis and screening of breast cancer, an English lecturer and a research scholar in women's health, were involved in validating the content of the questionnaire. The self-administered questionnaire comprised three sections. Section one covers socio-demographic characteristics. Section two comprised 26 questions that measured respondents' knowledge of breast cancer: seven questions on the symptoms of breast cancer, seven on the risk factors of breast cancer, seven on the methods of breast screening, three on the best time for breast screening and two on the perceptions of breast lumps. A dichotomous-type questionnaire was used in this section to elicit the respondents' knowledge of breast cancer; each question answered correctly was given a score of 1 and a question answered incorrectly or left unanswered was given a score of 0. Section three consists of questions pertaining to the stage of behavioral adoption of BSE. 2.5. Prior to the full-scale research, a pilot study was carried out on 103 randomly selected respondents. The steps taken in the pilot study included content validation and translation to maintain and ensure the overall accuracy of the questionnaire. Additionally, exploratory factor analysis (EFA) was used to explore the construct validity of the questionnaire. The Kaiser-Meyer-Olkin (KMO) measure and Bartlett's Test of Sphericity were also used to assess the adequacy of each item in the questionnaire. During several steps of the EFA, a few factors were fixed and problematic items were eliminated one by one as they failed to meet the minimum criterion of a factor loading ≥ 0.40 or cross-loaded on other factors. Findings from the EFA revealed nine factors that jointly accounted for 74.2% of the observed variance. All nine factors had good internal consistency with Cronbach's alpha ≥ 0.8. The questionnaire also showed good convergent and discriminant validity. 2.6. All data were analyzed using SPSS version 21.0. Descriptive statistics and chi-square tests were used to assess the relationship between the stage of behavioral adoption of BSE and socio-demographic variables. Multinomial logistic regression was performed to examine the relationship between the stage of behavioral adoption of BSE and knowledge of breast cancer.
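As a concrete illustration of the single proportion sample-size formula mentioned above: the standard closed form is n = z^2 p(1-p)/d^2. The sketch below is illustrative only; the assumed proportion p and precision d are placeholders, since the study does not report them here (the final figure of 520 presumably also reflects design and non-response adjustments).

```python
from math import ceil

def single_proportion_n(p: float, d: float, z: float = 1.96) -> int:
    """Sample size for estimating a single proportion.

    n = z^2 * p * (1 - p) / d^2, rounded up.
    z = 1.96 corresponds to the 5% type 1 error assumed in the study.
    """
    return ceil(z ** 2 * p * (1.0 - p) / d ** 2)

# Placeholder inputs: p = 0.5 maximises the required n, d = 5% precision.
print(single_proportion_n(p=0.5, d=0.05))  # 385 before any adjustments
```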
3. 3.1. The socio-demographic characteristics of the respondents are as tabulated in the study. 3.2. There were 26 questions on knowledge of breast cancer, and most of the respondents were able to answer more than 13 questions correctly (59.0%). Most of the respondents indicated that BSE (91.7%), CBE (85.4%), mammography (74.8%), ultrasound (55.4%) and MRI (52.1%) are breast screening methods. Additionally, 55.2% of the respondents indicated that a Pap smear test is not a method of breast cancer screening. The respondents were asked about the best time for breast screening, for which the correct answer was a week after menstruation, and most of them (54.6%) answered correctly. Most of the respondents answered wrongly that the presence of an abnormal lump in the breast means cancer (45.4%) and that pain in a breast lump means cancer (58.8%). 3.4. A chi-square analysis was carried out to determine the relationship between socio-demographic characteristics and the stage of behavioral adoption of BSE; a significant relationship was found. 3.5. Multinomial logistic regression was used to determine the relationship between constructs of knowledge of breast cancer and the stage of behavioral adoption of BSE; only the method of breast screening and the best time for screening were significant (p < 0.05). The outcome occurrence likelihood was determined using the odds ratio (OR) at a 95% confidence interval. With reference to the relapse stage, the method of breast screening was found to be significant with the pre-contemplation stage and the determination stage. Further, the best time of screening was found to be significant with the contemplation, determination, action and maintenance stages. No significant differences were indicated between the risk of breast cancer, symptoms of breast cancer and perception of breast lump with the stage of behavioral adoption of BSE. 4. This study determines the relationship between socio-demographic characteristics and knowledge of breast cancer with the stage of behavioral adoption of BSE. 4.1. A statistical significance was found between constructs of socio-demographic characteristics and the stage of behavioral adoption of BSE. This relates to previous studies whereby an increase in age reduces the performance of BSE. The current study reflected a positive association between the level of education and the stage of behavioral adoption of BSE. This indicated that the level of education influences the performance of BSE, as women with higher education tend to be able to obtain information on breast cancer by themselves. In the process of obtaining information, they become more aware of the benefits of early breast cancer detection. 4.2. Findings of the study indicated that breast health knowledge is still insufficient amongst the women in Kuantan, Pahang. The lack of knowledge of breast cancer and BSE could be due to an insufficient source of knowledge from the media, such as newspapers and magazines. In general, women were found to be more likely to be in the relapse stage compared to their current stage when they did not trust their technique in performing BSE. The findings of this study could help in the creation of interventions tailored to encourage women to progress towards the maintenance stage of BSE behavioral adoption. Further, understanding the contribution of women's socio-demographic characteristics and knowledge on the behavioral adoption of BSE can lead to risk reduction from relapse of behavioral adoption.
This is vital for the success of screening programs, clinical care and policy development, as well as for designing community education programs to detect breast cancer early. The findings of the study may provide a baseline assessment for future intervention programs to promote early detection and management of breast cancer. 5. This study has several limitations. The study is limited to women in Kuantan, Pahang to elicit the association of socio-demographic characteristics and knowledge of breast cancer on their stage of behavioral adoption of BSE. As such, the data cannot be generalized to Malaysian women. Further, as this study is a quantitative study, aspects such as feelings and actions of the respondents cannot be known to provide depth and detail pertaining to their attitude, feeling and behavior. Additionally, as a quantitative study, it may not have captured the entire range of knowledge of breast cancer, practices and experiences of breast screening, due to the invariability of the racial dispersion of respondents. Some of the responses may have been biased, particularly for those who completed their survey in the presence of researchers. As the questionnaire focuses on breast screening practices as positive behavior, it is possible the respondents gave more socially desirable answers. Lastly, the researcher did not confirm whether respondents knew the correct method to perform BSE, even though they indicated that they performed regular BSE."} +{"text": "Implant-related infection is difficult to treat without extended antibiotic courses. However, the long-term use of antibiotics has led to the development of multidrug- and methicillin-resistant Staphylococcus aureus. Thus, alternatives to conventional antibiotic therapy are needed. Recently, mesenchymal stem cells have been shown to have antimicrobial properties. This study aimed to evaluate the antimicrobial activity and therapeutic effect of local treatment with antibiotic-loaded adipose-derived stem cells (ADSCs) plus an antibiotic in a rat implant-associated infection model. Liquid chromatography/tandem mass spectrometry revealed that ADSCs cultured in the presence of ciprofloxacin for 24 h showed time-dependent antibiotic loading. Next, we studied the therapeutic effects of ADSCs and ciprofloxacin alone or in combination in an implant-related infection rat model. The therapeutic effects of ADSCs plus antibiotics, antibiotics, and ADSCs were compared with no treatment as a control. Rats treated with ADSCs plus ciprofloxacin had the lowest modified osteomyelitis scores, abscess formation, and bacterial burden on the implant among all groups (P < 0.05). Thus, local treatment with ADSCs plus an antibiotic has an antimicrobial effect in implant-related infection and decreases abscess formation. Our findings indicate that local administration of ADSCs with antibiotics represents a novel treatment strategy for implant-associated osteomyelitis. Periprosthetic infections are a tremendous burden to patients and healthcare institutions worldwide1. With the increase in arthroplasty procedures and the ongoing development of drug-resistant microorganisms, the incidence of such infections has been increasing1. To address this challenge, novel treatments are necessary1. Staphylococcus aureus (S. aureus) is one of the primary pathogens responsible for implant-associated osteomyelitis2.
The ability of S. aureus to establish chronic, implant-associated infections and our inability to cure them are directly associated with its capacity to form biofilms, creating an environment where the bacteria can grow and persist while being protected from the patient's immune response and antibiotics3. At present, systemic administration of antibiotics is the standard therapy for implant-associated infections. However, the long-term use of antibiotics has led to the development of multidrug-resistant and methicillin-resistant S. aureus4. Strategies for local antibiotic delivery to increase the antimicrobial concentration at the site of infection while keeping systemic levels low to avoid potential side effects have been investigated for several decades5. However, there still is an unmet need for alternatives to conventional antibiotic therapy for the management of chronic infections4. Recently, mesenchymal stem cells (MSCs) have been shown to have antimicrobial properties10. MSCs reportedly participate in the innate immune response through the secretion of antimicrobial peptides7. Bone marrow-derived stem cells (BMSCs) can be loaded with antibiotics and other drugs, and MSCs, including adipose-derived stem cells (ADSCs), co-administered with antibiotic therapy may be a novel, effective antimicrobial approach to the treatment of chronic, drug-resistant infections12. Among the various types of MSCs, ADSCs have numerous unique advantages. They are abundant in subcutaneous adipose tissues and can be easily harvested using a syringe or by minimally invasive lipoaspiration13. In addition, they contribute to the complex wound-repair processes, comprising inflammation, granulation, and remodelling15. While ADSCs are known to exert antibacterial activity, their activity in implant-related osteomyelitis has not been previously investigated. We hypothesized that ADSCs loaded with an antibiotic can exert an antimicrobial therapeutic effect in implant-related osteomyelitis. Therefore, we studied the effects of local treatment with ADSCs and ADSCs plus an antibiotic in a rat model of implant-associated osteomyelitis to evaluate their effectiveness in implant-related infection. In vitro, ciprofloxacin (CPFX) dose-dependently suppressed the proliferation of ADSCs, with half-maximum inhibitory concentration (IC50) values of 99.5 mg/L at 24 h and of 103.6 mg/L at 7 days. PCR showed that antibiotic-loaded ADSCs had similar ALP mRNA levels, but reduced osteocalcin mRNA levels, when compared to ADSCs. The expression of rCRAMP was also examined to assess the antimicrobial peptide secretion ability; rCRAMP was expressed to similar levels in both antibiotic-loaded ADSCs and ADSCs.
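An IC50 like the values reported above is typically estimated by fitting a sigmoidal dose-response curve to the MTT viability readings. The sketch below is illustrative only: the concentrations and viability values are invented placeholders, and this excerpt does not state which curve model the authors used.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical MTT viability readings (% of control) at CPFX doses (mg/L);
# the real measurements are in the paper, these are placeholders.
conc = np.array([1, 10, 50, 100, 200, 400], dtype=float)
viability = np.array([98, 95, 71, 49, 28, 12], dtype=float)

params, _ = curve_fit(four_pl, conc, viability, p0=[100, 0, 100, 1])
print(f"Estimated IC50: {params[2]:.1f} mg/L")
```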
In vivo, the bacterial inoculum induced infection in 100% of non-treated rats by day 7 after surgery. Intra-rater reliability of the modified osteomyelitis score was assessed, and the intra-class coefficient was 0.902. Bacterial infection was detected in all rats in the no-treatment group. The no-treatment group showed obvious swelling at the surgical site, whereas rats in the antibiotic-loaded ADSCs plus CPFX (ADSCs-ant) and ADSCs groups showed very limited swelling. Treatment with ADSCs-ant significantly suppressed the bacterial burden on the proximal screw compared to that in no treatment. Only the antibiotic group showed significantly decreased bacterial burden on the plate when compared to the no-treatment group. ADSCs-ant induced a significant decrease in bacterial burden in the soft tissue as compared to that in no treatment. ADSCs-ant, antibiotic, and ADSCs all significantly reduced total bacterial burden compared to that in no treatment. Owing to their differentiation plasticity, immunomodulatory properties, angiogenic modulation, and paracrine support, MSCs have attracted broad therapeutic interest20. However, the use of local injection of ADSCs loaded and combined with an antibiotic to treat implant-related osteomyelitis had not been reported to date. Multiple, complementary mechanisms of action (both direct and indirect) likely account for the ability of MSCs to help control infections, although it is not fully understood whether the main weapon is the cell itself or its secretome9. They might act indirectly through their role in the host immune response against pathogens, especially in the dynamic coordination of pro- and anti-inflammatory elements of the immune system23, or by increasing the activity of phagocytes26. They might act directly through the secretion of antimicrobial peptides and proteins30 and the expression of molecules such as indoleamine 2,3-dioxygenase31 and interleukin-1732. Our in vitro experiments showed that both ADSCs and ADSCs-ant expressed the gene encoding the antimicrobial peptide cathelicidin at similar levels. This implied that the combination with an antibiotic did not suppress the expression of antimicrobial peptides. Antimicrobial peptides are evolutionarily conserved small effector molecules (10–150 amino acids) found in organisms ranging from prokaryotes to humans33. Antimicrobial peptide-mediated cell killing occurs by disrupting membrane integrity, by inhibiting protein, DNA, or RNA synthesis, and by interacting with certain intracellular targets34. Importantly, antimicrobial peptides can be active against certain pathogens that are resistant to conventional antibiotics, such as multidrug-resistant bacteria20. Previous studies in mice reported that cathelicidin is one of the factors produced by systemic MSCs that significantly contributes to Staphylococcus killing9. Thus, ADSCs seem to express antimicrobial peptides. This is likely, at least in part, responsible for the reduction in bacterial burden on the implant. The capacity of MSCs to interact with the innate and adaptive immune responses to inhibit T-cell proliferation and upregulate regulatory T cells36 makes this cell population a strong candidate for cell therapy in graft-versus-host disease or vascularized composite allotransplantation. MSCs have been shown to exert immunomodulatory effects through cell contact and paracrine effects38. This protective role of MSCs in the host reportedly is dual: on one hand, they can create an immunosuppressive environment, thus avoiding exacerbation of pathological symptoms, helping to heal tissue damage, and allowing the establishment of an immune-tolerant environment; on the other hand, excessive immune suppression, as well as the sensitivity of MSCs to microbial infection, can lead to the opposite effect, hampering the host's ability to fight the infection and, instead, encouraging the spread of microbial effectors9. Therefore, the immunomodulatory capacity of locally administered MSCs in infectious diseases is not fully understood. A previous study reported negative effects of MSCs on orthopaedic implant-associated bone infection39, which is in contrast to studies reporting a beneficial effect of intravenously administered MSCs on the development of sepsis through a reduction in systemic inflammation and increased bacterial killing and phagocytosis39.
The authors reasoned that, although immune suppression may be beneficial in systemic infection with whole-body inflammation, local and chronic infections such as osteomyelitis may be promoted by a local immunosuppressive environment39. Although their infection model (a bone defect contaminated with S. aureus and administration of bone marrow-derived MSCs) was different from ours, our study revealed no negative effect of ADSCs on implant-related infection and, in contrast, showed a positive effect of ADSCs-ant. Therefore, we conclude from the combined findings that locally injected MSCs may have an immunosuppressive capacity, but do not always promote an immunosuppressive environment, and that ADSCs combined with an antibiotic are an effective option for local treatment. Systemic ADSC-assisted antibiotic therapy offered an additional benefit by reducing acute urogenital organ damage in a rat model40, and ADSC therapy improved ischemia reperfusion injury not only by suppressing the inflammatory and immune responses, but also by enhancing paracrine effects41. A previous in-vitro study showed that BMSCs can take up antibiotics12. Our results suggest that ADSCs can also take up antibiotics, with the antibiotic concentration increasing over time. Whether CPFX was internalized in the cells or attached to the cell surface was not clarified in this study. However, a previous in-vitro study using confocal microscopy showed that the anti-cancer drug paclitaxel was internalized in MSCs via Golgi-derived vesicles42. Based on this finding, we considered that CPFX might be internalized in the ADSCs and subsequently released from the cells at the infection site. In situ drug injection probably has lower efficacy than drug-loaded MSCs because of rapid dilution of the drug. MSCs have the ability to migrate into inflammatory sites43. In vivo, MSCs have been observed to accumulate in the spleen as well as in wound areas following intravenous administration4. Therefore, ADSCs loaded and combined with antibiotics may improve the delivery of antibiotics to the infected area. While our results showed that ADSCs-ant had the strongest therapeutic effect in rats with implant-related infection, the additive or synergistic interaction between ADSCs and antibiotics was not elucidated in this study. Further studies are needed to determine whether there is a synergistic interaction between ADSCs and CPFX, and what the optimum antibiotic, dose, and regimen are. Systemic antibiotics alone cannot completely remove biofilms, and thus surgical debridement is generally necessary for the treatment of implant-related infection. However, surgical debridement and revision implants have not always been successful. Achieving a high local antibiotic concentration around an infected implant is of major clinical importance, because bacteria protected by the biofilm require antibiotic concentrations that are orders of magnitude greater than the MIC required for killing the bacteria46, and an intravenous antibiotic injection is not suitable to this end47. Therefore, recently, direct local antibiotic injection has been highlighted as an option because it achieves high local antibiotic concentrations48. Furthermore, a recent study has shown that MSCs secrete cysteine proteases that destabilize methicillin-resistant S. aureus biofilms, thereby increasing the efficacy of antibiotics that were previously tolerated by biofilms49. Therefore, by using ADSCs or ADSCs-ant, the effect of local antibiotic treatment could be enhanced. However, systemic treatment is also useful, as it is easier to carry out than local treatment.
Therefore, we are currently researching the effect of systemic ADSC treatment for implant-related infection. One limitation of using ADSCs as a delivery vehicle for antibiotics is that their ability to do so is dependent on cell viability and integration at the injection site. By using DiI staining of ADSCs, a previous study showed that numerous ADSCs were distributed throughout granulation tissue up to 21 days post-transplantation13. We did not study cell viability and distribution after injection, which requires further study. Furthermore, in this in vivo study, the effect of antibiotic-loaded ADSCs alone was not assessed; only loaded ADSCs combined with the antibiotic were assessed. Therefore, the combinatorial effect of CPFX and ADSCs was shown, but the individual effect of ADSCs loaded with CPFX was not considered in this study protocol, since CPFX leaked out into the media before the cells could be administered. In the in vitro study, the concentration of CPFX in cells was higher than the MIC for S. aureus, but was substantially lower than the concentration of CPFX in the antibiotic group (100 mg/L). Therefore, we expected not only antibiotic delivery in cells, but also a synergistic or additive effect of the administered antibiotic and the loaded ADSCs. The relevance of our findings to human subjects remains to be studied. In future, it will be necessary to confirm the effect of ADSCs on implant-related infection in larger animal models before clinical studies in humans can be conducted. For clinical application, the source of ADSCs is important. In our study, ADSCs were collected from allogenic rats. Allogeneic MSCs have been previously safely administered to humans for a number of conditions, and their use as a treatment for chronic infections would not pose a unique risk50. Autologous ADSC applications have some potential limitations: it is difficult to obtain sufficient quantities of healthy autologous ADSCs with high activity from patients with the targeted diseases. In summary, ADSCs can take up antibiotics without suppression of antimicrobial peptide gene expression. Injected ADSCs exerted an antimicrobial effect, and local administration of ADSCs with CPFX suppressed chronic S. aureus infection in implant-related osteomyelitis. These findings suggest that local ADSC therapy combined with an antibiotic represents a novel treatment strategy for patients with implant-associated osteomyelitis. The results of this study highlight the potential use of this combined regimen in patients with implant-related osteomyelitis who responded poorly to conventional medical treatment. ADSCs and BMSCs for the in vitro experiments were isolated from 20 9-week-old female Wistar rats. ADSCs were prepared as previously reported12, with modification. BMSCs were isolated from the same rats, as previously reported51, with modifications in the protocol. Further details can be found in the Supplementary Information.
CPFX is a fluoroquinolone and is considered a drug of choice for the treatment of osteomyelitis because it penetrates into poorly vascularized sites of infection12. The anti-proliferative and cytotoxic effects of CPFX (Wako) on rat ADSCs were determined by a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-2H-tetrazolium (MTT) assay, as previously reported. Cells were seeded in 96-multiwell plates at 10,000 cells/well in 100 μL of Dulbecco's modified Eagle medium (DMEM) per well. In the anti-proliferative assay, cells were incubated for 24 h or 7 days with various concentrations of CPFX. At the end of incubation, cell proliferation or viability was evaluated by MTT assay. For antibiotic loading, ADSCs and BMSCs (1 × 10^5 cells/mL) were plated in 100-mm dishes containing DMEM including fetal bovine serum (FBS) and CPFX (100 mg/L) for 10 min, 1 h, 12 h, or 24 h, as previously described12. At the end of the incubation, the cells were washed three times with phosphate-buffered saline (PBS). After loading with CPFX, the cell medium was changed to DMEM without CPFX. The concentration of CPFX released from ADSCs and BMSCs after the medium exchange was measured in 1 mL of medium obtained at 24, 48, or 72 h. The medium was exchanged each time after sample collection. Antibiotic-loaded ADSCs and ADSCs were analysed for their capacity for osteogenic differentiation using ALP staining and alizarin red histochemistry; histochemistry was performed 2 weeks after osteogenic induction culture. To induce differentiation, cells were cultured in osteogenic medium composed of α-MEM containing 10% FBS, 0.1 mM dexamethasone, 50 mM ascorbate-2-phosphate, 10 mM β-glycerophosphate, and 1% penicillin-streptomycin. For ALP staining, cells were rinsed with PBS three times and fixed in 4% paraformaldehyde phosphate buffer (Wako) at room temperature for 5 min. They were then washed with deionized water. The fixed cells were incubated with 1-Step NBT/BCIP plus Suppressor Solution (Thermo Fisher Scientific) at 37 °C for 30 min, washed with deionized water, and observed both with the naked eye and under a light microscope. For alizarin red staining, cells were rinsed with PBS three times, fixed in 4% paraformaldehyde phosphate buffer, and stained using an Osteogenesis Assay Kit per the manufacturer's instructions. The mRNA expression of rat osteocalcin, rat ALP, and rat CRAMP was evaluated by qPCR. Briefly, RNA was extracted from the cells, and cDNA was generated using RNA to cDNA EcoDry Premix (Oligo dT). qPCRs were run using THUNDERBIRD SYBR qPCR Mix and the following primer sets: 5′-GACTGCATTCTGCCTCTCTG-3′ and 5′-ATTCACCACCTTACTGCCCT-3′ for osteocalcin, 5′-AACAACCTGACTGACCCTTC-3′ and 5′-TCCACTAGCAAGAAGAAGCC-3′ for ALP, 5′-GGTTCCGAGTGAAGGAGACTG-3′ and 5′-TACCAGGCGCATCACAACTG-3′ for rCRAMP, and 5′-ATCACCATCTTCCAGGAGCG-3′ and 5′-CCTTCTCCATGGTGGTGAAG-3′ for rat glyceraldehyde-3-phosphate dehydrogenase (Gapdh)54. Target mRNA levels were normalized to that of Gapdh.
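Relative expression normalized to Gapdh, as described above, is commonly computed with the 2^-ddCt method; the sketch below assumes that method (the excerpt does not name it) and uses made-up Ct values rather than the study's measurements.

```python
def ddct_fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method.

    dCt = Ct(target) - Ct(reference); ddCt = dCt(sample) - dCt(control).
    """
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Placeholder Ct values for osteocalcin vs Gapdh in antibiotic-loaded
# ADSCs (sample) and plain ADSCs (control); not the study's data.
print(ddct_fold_change(26.1, 18.0, 24.9, 18.1))  # ~0.41, i.e. reduced
```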
The concentrations of CPFX in ADSCs and conditioned medium (CM), and in BMSCs and CM as a control, were quantified using LC–MS/MS, as previously reported55. CPFX concentrations in cells were determined at 10 min, 1 h, 12 h, and 24 h after treatment with CPFX (100 mg/L) as described above, and those in CM at 24, 48, and 72 h after medium exchange. Furthermore, CPFX concentrations in cells after release were analysed at 24, 48, and 72 h. To assess potential adsorption of CPFX to the plate, we measured the concentration of CPFX in DMEM without cells after changing the medium following a 24-h incubation with DMEM containing 100 mg/L CPFX. Detailed methods are described in the Supplementary Information. The concentration of CPFX in antibiotic-loaded ADSCs was assessed using the broth microdilution method in cation-adjusted Mueller–Hinton broth. Antibiotic-loaded ADSCs and the CM of the antibiotic-loaded ADSCs were tested for their activity on the S. aureus strain ATCC29213; further details of the method can be found in the Supplementary Information. For the in vivo model, implants were inoculated with S. aureus strain ATCC29213 and air-dried for 20 min prior to insertion, using a previously reported method56. The wound was then closed with nylon sutures. Seven days after the primary surgery, the rats were sedated and anesthetized again, and the surgical scar was reopened and irrigated with 10 mL of PBS. Rats were classified into 4 groups: rats in the no-treatment group (n = 6) were injected locally into the surgical site, using an 18-gauge needle, with DMEM (2 mL); rats in the ADSCs-ant group (n = 6) with ADSCs-ant; rats in the antibiotic group (n = 6) with CPFX alone (2 mL of DMEM containing 100 mg/L CPFX); and rats in the ADSCs group (n = 6) with ADSCs alone (2 mL of DMEM containing 1 × 10^5 ADSCs/mL). In the ADSCs-ant group, ADSCs loaded with CPFX were washed, harvested by trypsinization with 0.05% trypsin, and resuspended in DMEM containing 100 mg/L CPFX for preservation until use, so that the final CPFX concentration was 50 mg/L. The ADSCs, ADSCs-ant, and DMEM with CPFX were produced in the laboratory at the Department of Orthopaedic Surgery, Kanazawa University Graduate School of Medical Sciences, and then transferred immediately to the laboratory at the Institute for Gene Research, Kanazawa University. The rats were euthanized on day 7 post injection (14 days after infection) after evaluating the general impression and soft tissue swelling. After euthanization, the surgical scar was reopened to evaluate abscess formation. The protocol for establishing the implant-related infection model is shown in the accompanying figure and supplementary table. Rats were euthanized on day 14 post primary surgery (day 7 post injection), and the implants and femurs were harvested in a sterile manner for ex-vivo analyses. Osteomyelitis was scored by two examiners (Y.J. and Y.Y.) according to a modified score reported previously58. Modified osteomyelitis scoring by the two examiners was based on (1) general impression, (2) soft tissue swelling, (3) abscess formation, (4) proximal screw loosening, and (5) distal screw loosening. In case of disagreements between the two examiners, the lowest score was taken. Parameters 1–3 ranged from 0 (good or absent), 1 (mild), and 2 (moderate) to 3 (bad or severe). Parameters 4 and 5 were judged based on micro-CT images as follows: we calculated the degree of osteolysis as a healthy bone ratio. A mean ratio of > 0.7 was scored as 0, 0.6–0.7 was scored as 1, < 0.6 was scored as 2, and fracture was scored as 3. The maximum score to be achieved was 15 (5 parameters, 3 points maximum each). The femoral bone was automatically traced as the region of interest, and the Hounsfield Unit (HU) value was calculated59.
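The screw-loosening sub-scores map directly from the healthy bone ratio; below is a minimal helper encoding the thresholds quoted above. The handling of values exactly at 0.6 and 0.7 is an assumption, as the boundaries are not specified in this excerpt.

```python
def screw_loosening_score(healthy_bone_ratio: float, fracture: bool = False) -> int:
    """Micro-CT sub-score for screw loosening (parameters 4 and 5).

    Thresholds as described above: ratio > 0.7 -> 0, 0.6-0.7 -> 1,
    < 0.6 -> 2, and fracture -> 3.
    """
    if fracture:
        return 3
    if healthy_bone_ratio > 0.7:
        return 0
    if healthy_bone_ratio >= 0.6:
        return 1
    return 2

# Example: a femur with 65% healthy bone around the screw hole scores 1.
print(screw_loosening_score(0.65))
```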
The osteolytic volume of the screw hole was determined by calculating the total screw hole volume of the 5 slices and comparing the cortical bone (voxels ≥ 230 HU) within the total screw hole volume56. We calculated the degree of osteolysis as the healthy bone ratio (the cortical bone volume within the total screw hole volume). At post-surgery day 14 (day 7 post revision), the plated femurs were disarticulated, the implant and soft tissue were removed carefully, and the samples were subjected to μCT scanning at 10.5-micron resolution. To quantify osteolysis in the screw holes, μCT images in Digital Imaging and Communications in Medicine (DICOM) format were obtained for volumetric osteolysis analysis using the DICOM viewer software Synapse Vincent. The fixed specimens were decalcified in 10% formic sodium citrate solution, embedded in paraffin, and sectioned in the coronal plane at 0.2-μm thickness. The sections were stained with haematoxylin and eosin, and the slides were observed under an optical microscope. The abscessed area within the total area was evaluated in three regions: at the distal screw hole, at the proximal screw hole, and in the region between the two screw holes. The assessment was confirmed by a pathologist (N.T.). The bacterial burden on the implants was determined by CFU assay following sonication, as previously described60. Briefly, the implants were placed into 1 ml PBS in 1.5 ml microtubes. The solution was subjected to rapid vortex mixing for 15 s and then sonicated for 5 min at a frequency of 40 Hz to disrupt the formed biofilm. Finally, rapid vortex mixing of the solution was performed again for 1 min. This method of disrupting the biofilm was performed in accordance with the method reported by Braem et al.61, with slight modification. CFU assays were performed on the explanted proximal (sterile) screws, distal (contaminated) screws, and plates and soft tissues around the implant obtained on day 14 after surgery. Data are reported as the median ± interquartile range. The Shapiro–Wilk test was used to check normal distribution, and Bartlett's test was used to evaluate the equality of variances. Means of two groups were compared using unpaired Student's t-tests. Multiple groups were compared using one of the following: ordinary one-way ANOVA followed by Sidak's post-hoc test, Brown-Forsythe and Welch ANOVA followed by Dunnett's T3 post-hoc test, or the Kruskal–Wallis test followed by Dunn's post-hoc test. P < 0.05 was considered significant. All analyses were conducted using Prism8 software.
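The test-selection logic above (normality and variance checks deciding between ANOVA variants and Kruskal–Wallis) can be expressed compactly. This sketch mirrors that decision flow with scipy on placeholder data, not the study's measurements, and omits the post-hoc steps.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder log10 CFU values for four treatment groups; not study data.
groups = [rng.normal(m, 0.4, 6) for m in (5.0, 3.2, 3.9, 4.4)]

normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)
equal_var = stats.bartlett(*groups).pvalue > 0.05

if normal and equal_var:
    p = stats.f_oneway(*groups).pvalue   # ordinary one-way ANOVA
elif not normal:
    p = stats.kruskal(*groups).pvalue    # Kruskal-Wallis test
else:
    # Normal but heteroscedastic: the study used Brown-Forsythe/Welch
    # ANOVA (in Prism); scipy has no direct equivalent, so fall back.
    p = stats.kruskal(*groups).pvalue

print(f"omnibus p = {p:.4f}")
```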
The investigational protocol was approved by the Kanazawa University Advanced Science Research Centre, and all animals were treated in accordance with Kanazawa University Animal Experimentation Regulations. Supplementary information"} +{"text": "A 53-year-old female with a history of sports participation presented to a community hospital emergency department for collapse. She was given a LifeVest® wearable cardioverter-defibrillator (WCD). However, before the scheduled MRI scan could be performed, she developed tachycardia, for which the WCD alarmed. A dual-chamber implantable cardioverter-defibrillator was subsequently implanted. Assessment of a patient with syncope requires consideration of the idea that a life-threatening and recurrent arrhythmia may be a cause for the problem. However, current guidelines do not cover the routine use of WCDs in syncope. Additionally, the patient described here did not clearly meet United States Food and Drug Administration indications for the provision of an external defibrillator. We present this case to provoke discussion among colleagues regarding this patient's treatment plan. A 53-year-old female, who was a prior collegiate basketball player, presented to a community hospital's emergency department for syncope. During modestly intense kayaking with her daughter, her arms had suddenly felt numb and, after leaning forward and taking a few deep breaths, she fell out of the kayak and was pulled ashore by her daughter, who described her mother as having fluctuating consciousness over the ensuing half-hour. Two weeks before, she had had a sensation of dizziness prior to experiencing a total loss of consciousness while standing. She awoke shortly thereafter feeling well. She had no history of syncope and no significant family history. Her physical examination was unremarkable. In the emergency department, her serum potassium was 3.3 mEq/L and troponin was 0.045 ng/mL, and an electrocardiogram showed sinus tachycardia with a corrected QT (QTc) interval of 520 ms and monomorphic premature ventricular complexes (PVCs). The QTc interval normalized with potassium repletion, but the patient continued to experience early- and late-coupled PVCs and rare monomorphic ventricular triplets. Echocardiography and cardiac catheterization findings were normal. A treadmill test showed multifocal PVCs; the QT interval shortened with exertion. She was discharged on nadolol 60 mg daily, given a LifeVest® WCD, and scheduled to undergo a cardiac magnetic resonance imaging (MRI) scan with gadolinium enhancement at a tertiary center. However, before the scheduled MRI scan could be performed, she developed a tachycardia of nearly 300 bpm, for which the WCD alarmed. Despite the fast rate, she repeatedly pressed the response button to suppress a shock. After more than nine minutes, she stopped pressing the response button but was still awake; at this point, she experienced two shocks, the second of which converted the tachycardia to sinus rhythm. An MRI scan was subsequently performed and demonstrated the presence of delayed myocardial enhancement in the midmyocardium and epicardium, suggesting a nonischemic origin such as myocarditis, amyloidosis, sarcoidosis, or some other form of nonischemic cardiomyopathy.1 This patient's scar was thought to be most consistent with residual fibrosis from a past viral myocarditis. The location of the patient's scar suggested a substrate consistent with the morphology of her monomorphic ventricular tachycardia. A dual-chamber implantable cardioverter-defibrillator (ICD) was placed. Based on the fact that she was likely to have a recurrent tachycardia, sotalol 80 mg twice daily was initiated. The next day, she presented a slower but similar ventricular tachycardia that was halted with antitachycardia pacing. The sotalol dose was increased to 120 mg twice daily and she was found to be noninducible via noninvasive programmed stimulation. Subsequently, she has been arrhythmia- and symptom-free. Assessment of a patient with syncope requires the consideration of the idea that a life-threatening and recurrent arrhythmia may be a cause of the problem. Risk stratification is challenging. This patient's initial prolonged QTc interval raised the possibility of an inherited channelopathy.
However, it was ultimately the patient's history that most suggested she was at risk for sudden cardiac death. It is noteworthy that the patient's first known onset of syncope occurred late in life. More concerning was her fluctuating level of consciousness that occurred during her kayak outing. Malignant arrhythmia constitutes a plausible explanation for her symptoms. Current guidelines2 do not address the routine use of WCDs such as the LifeVest® in patients with syncope. Furthermore, the patient described here did not meet United States Food and Drug Administration indications for external defibrillator use. Indeed, the routine use of a WCD would be inappropriate in most patients presenting with syncope in the absence of structural heart disease. We suggest that a Bayes theorem approach to the risk stratification of patients presenting with syncope be considered (not unlike the risk stratification method employed in assessing patients presenting with the symptom of chest pain).
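The Bayes theorem approach suggested above can be made concrete: a pretest probability of a malignant arrhythmia is updated by the likelihood ratio of each clinical finding. The numbers below are purely illustrative placeholders, not validated clinical values.

```python
def posttest_probability(pretest_p: float, likelihood_ratio: float) -> float:
    """Update a pretest probability with one finding's likelihood ratio."""
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# Illustration only: start from an assumed 10% pretest probability and
# apply assumed likelihood ratios for two findings from the history.
p = 0.10
for lr in (4.0, 2.5):
    p = posttest_probability(p, lr)
print(f"posttest probability ~ {p:.2f}")  # ~0.53 with these placeholders
```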
In light of these considerations, we sought to request input from a panel of experts regarding the following questions: Do you agree with the WCD prescription in this patient? What considerations might help you decide one way or another? Would you have done anything else in addition or anything differently? Why do you think she did not pass out with a rate of 300 bpm? Would you recommend a single-chamber transvenous ICD, dual-chamber transvenous ICD, subcutaneous ICD, or something else for the treatment of this patient?"} +{"text": "The original Figure 2 published in this article mistakenly contained a duplicate of Figure 2A in the place of Figure 2B. This has now been corrected and the correct Figure 2B inserted."} +{"text": "Numerous time-course gene expression datasets have been generated for studying the biological dynamics that drive disease progression; and nearly as many methods have been proposed to analyse them. However, barely any method exists that can appropriately model time-course data while accounting for the heterogeneity that many complex diseases entail. Most methods manage to fulfil either one of those qualities, but not both. The lack of appropriate methods hinders our capability of understanding the disease process and pursuing preventive treatments. We present a method that models time-course data in a personalised manner using Gaussian processes in order to identify differentially expressed genes (DEGs), and combines the DEG lists on a pathway-level using a permutation-based empirical hypothesis testing in order to overcome gene-level variability and inconsistencies prevalent in datasets from heterogeneous diseases. Our method can be applied to study the time-course dynamics, as well as specific time-windows, of heterogeneous diseases. We apply our personalised approach on three longitudinal type 1 diabetes (T1D) datasets, where the first two are used to determine perturbations taking place during early prognosis of the disease, as well as in time-windows before autoantibody positivity and T1D diagnosis; and the third is used to assess the generalisability of our method. By comparing to non-personalised methods, we demonstrate that our approach is biologically motivated and can reveal more insights into progression of heterogeneous diseases. With its robust capabilities of identifying disease-relevant pathways, our approach could be useful for predicting events in the progression of heterogeneous diseases and even for biomarker identification. Encapsulating a wealth of information regarding the prolonged or transient expressions of a large set of activated genes1, time-course data also helps us understand and model the dynamics of complex biological systems or phenomena, such as disease progression4. It offers us the possibility of deciphering the underlying pathophysiologies and systematic evolutions of human diseases3. A prominent goal in such studies has been to identify genes whose expression levels systematically differ between a case and a control group and can be classified as biomarkers for diagnosis and prognosis of the disease. For over a decade, various methods have been introduced for modelling time-course data to identify differentially expressed genes (DEGs). Nonetheless, modelling, interpreting and validating the gene expression patterns are continually met with major challenges. The challenges can be largely classified into two categories: (i) robustly modelling the dynamics of time-course data and (ii) accounting for the heterogeneity of complex diseases. Many methods have been proposed that deal with the most prominent limitations of modelling gene expression time-course data. Some such limitations include non-uniform sampling5, too few sampling times, missing time points, few or no replicates5, autocorrelation between successive time points6, and high-dimensionality with small sample sizes4. Some methods simplify the modelling task by disregarding the dynamic nature and making the expression profiles \"coarse-grained\"4, such as cross-sectional analysis7 and simplification strategies9. However, these methods are suboptimal. Interpolation methods, such as linear10 and B-spline (cubic spline)12 interpolation, have been among the first methods to be attempted for modelling the dynamics of longitudinal data and using them for estimating gene expression levels at unobserved time points7. Even though they incorporate the continuous nature of the data, they may be subject to issues such as overfitting. In fact, B-spline-based methods require more than ten time points to produce reliable results6, which makes them unsuitable for applications in many biological studies4. Recently, linear mixed models (LMMs) and Gaussian processes (GPs) have become popular choices for time-course data modelling due to their ability of modelling the correlational structure of the data15; efficiently handling biological replicates, while accounting for subject-specific variability; including time-invariant and time-varying covariates; and determining the trends over time, as well as taking into account the correlation that exists between successive measurements16. Moreover, GP models offer a robust way of estimating missing or unobserved values by providing confidence intervals along the estimated curves of gene expression16. GP models can be used to identify differential expression between multiple conditions17 or handle general experimental designs18. They can also be designed to be robust to outliers and employ flexible model bases19. GPs capture the underlying true signal and embedded noise in time-course gene-expression data in a non-linear manner, without imposing strong modelling assumptions.
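As a concrete illustration of the GP modelling discussed above, the following minimal sketch fits a GP regression to one gene's (synthetic) expression time course and interpolates the mean with uncertainty estimates at unobserved times. The kernel and noise level are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic time course for one gene: uneven sampling times (months).
t = np.array([[1.0], [2.5], [4.0], [7.0], [12.0], [18.0]])
expr = np.array([0.1, 0.4, 0.9, 1.2, 0.8, 0.3])

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, expr)

# Interpolate at unobserved time points, with confidence from the GP.
t_new = np.linspace(0, 20, 5).reshape(-1, 1)
mean, sd = gp.predict(t_new, return_std=True)
print(np.round(mean, 2), np.round(sd, 2))
```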
In addition to answering whether a gene is differentially expressed across the whole time-course, GP models have also been successfully applied for determining specific time-windows when a gene is DE, even when no or few observations are made in that time-window21. The traditional applications of these methods detect genes that exhibit different expression levels between a case and a control group (DEGs) across the whole study population. Unfortunately, in the case of heterogeneous data from complex diseases, only a few genes are usually found to be DE across all or most cases, because different genes with similar functionalities may be perturbed across cases, thus justifying the gene-level variability at a functional or pathway-level2. In fact, gene-level results from similar studies of heterogeneous diseases, such as cancers23, asthma, Huntington's disease2, rheumatoid arthritis, type 2 diabetes, schizophrenia24, and Parkinson's disease24, have often been found to be inconsistent. They show distressingly little overlap between similar studies of the same disease26. Due to these challenges, many methods that summarise the results on a pathway-level have been developed, where the genes are unified under biological themes that aid in a functional understanding of the results. This can be further improved by developing personalised approaches for identifying enriched or disrupted pathways in complex diseases. Here, a personalised approach refers to methods that do not assume that changes are consistent across all study subjects but instead identify biomarkers for each subject, e.g., by analysing each case-control pair separately; and a pathway is an overarching term for a group of genes unified under a biological theme, also referred to as gene sets in Subramanian et al.25. Menche et al.2 introduced a framework for personalised gene expression analysis, where personalised perturbation profiles (PEEPs) are constructed per case subject by calculating a z-score with reference to the control group and considering any gene with a z-score above an optimised threshold to be part of the PEEP. Using a combinatorial model on the PEEPs, they strive to identify a single pool of disease-associated genes that can be used to accurately predict the disease status of each subject. The method of Menche et al.2 thus accounts for heterogeneity. However, it is not directly suitable for modelling time-course data.
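A minimal sketch of the PEEP construction described above: each case sample is converted to gene-wise z-scores against the control group's mean and standard deviation, and genes exceeding a threshold enter the profile. The threshold value here is an arbitrary placeholder (Menche et al. optimise it).

```python
import numpy as np

def peep(case_expr: np.ndarray, control_expr: np.ndarray, z_thresh: float = 2.0):
    """Personalised perturbation profile: indices of genes whose |z|
    exceeds the threshold, with z computed against the controls."""
    mu = control_expr.mean(axis=0)
    sd = control_expr.std(axis=0, ddof=1)
    z = (case_expr - mu) / sd
    return np.flatnonzero(np.abs(z) > z_thresh)

rng = np.random.default_rng(1)
controls = rng.normal(0, 1, size=(20, 1000))   # 20 controls x 1000 genes
case = rng.normal(0, 1, size=1000)
case[[5, 42]] += 4.0                           # two perturbed genes
print(peep(case, controls))                    # typically recovers 5 and 42
```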
Pathway (gene set) enrichment analyses, such as Fisher's exact test and GSEA25, are commonly applied to the gene-level results in order to obtain an understanding of the results at the level of biological processes. Several specialised methods have also been proposed for pathway-level analysis with two groups, such as module map22, CORGs27, Pathifier23, SPCA28, and PARADIGM29. However, only a few can be applied directly to time-course experiments. One such method is the unified statistical model for analysing time-course experiments at the pathway-level using linear mixed effects models30. This method directly identifies significant pathways expressed over time by using random effects to model the heterogeneous correlations between the genes in the pathway, as well as other fixed and random effects. Unfortunately, these methods do not apply a personalised approach to the modelling. In this paper, we propose a method that models the time-course data in a personalised manner using Gaussian processes and combines the lists of DEGs on a pathway-level. Our method assumes an experimental design where each case subject is matched with a carefully chosen control subject, and the method uses a robust yet efficient approach to detect DE genes for each individual with respect to the matched control. Individual-specific gene-level results are summarised at pathway-level using a permutation-based empirical hypothesis test that is tailored for personalised DE analysis. To study expression changes associated with particular time periods, such as the time before disease onset, we also extend the method to detect DEGs in specific time-windows. This method can be applied to longitudinal case-control data from different technologies, such as gene expression microarray, RNA sequencing and polymerase chain reaction (PCR), and to a variety of omics data types. To our knowledge, there are no competing methods to our proposed method. We applied this method largely to two type 1 diabetes (T1D) microarray datasets from Kallionpää et al.31. There is growing evidence that T1D is a genetically heterogeneous disease34. Therefore, in order to gain a robust understanding of the molecular mechanisms underlying this complex and heterogeneous disease, one needs to apply a personalised approach on a pathway-level like the one presented here. We report disruptions in pathways during the early progression of T1D, as well as in the 6-month windows before seroconversion (autoantibody positivity) and clinical diagnosis of T1D. Seroconversion is the time of autoantibody presentation in T1D-susceptible individuals and represents the earliest (currently known) signs of disease progression. However, clinical diagnosis of T1D is established at a very late stage of the disease, when insulitis has persisted over a long period of time36, ~80–90% of β-cells have been destroyed, and hyperglycaemia is achieved35. Therefore, identifying relevant perturbations at different stages of the disease can help in monitoring and perhaps predicting the significant events in the disease progression. Our personalised approach was able to identify various disease-relevant and interesting pathways in all three analyses, including those that illustrate the intrinsic mechanisms of disease progression. We also compared the results of the proposed personalised approach with those of a population-wide method, with the original results from Kallionpää et al.31, and with a third T1D dataset from Ferreira et al.37. This method can be applied to other heterogeneous diseases with a similar experimental design and also extended to non-paired case-control datasets.
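The permutation-based empirical hypothesis testing mentioned above can be sketched as follows: for each pathway, an observed enrichment statistic (here, simply the mean per-pair count of DE pathway genes) is compared against a null distribution obtained by repeatedly drawing random gene sets of the same size. This is a simplified stand-in for the paper's exact statistic, which is not fully specified in this excerpt.

```python
import numpy as np

def pathway_pvalue(deg_lists, pathway, all_genes, n_perm=1000, seed=0):
    """Empirical p-value for one pathway given per-pair DEG lists."""
    rng = np.random.default_rng(seed)
    pathway = set(pathway)
    obs = np.mean([len(pathway & degs) for degs in deg_lists])
    null = np.empty(n_perm)
    for i in range(n_perm):
        rand = set(rng.choice(all_genes, size=len(pathway), replace=False))
        null[i] = np.mean([len(rand & degs) for degs in deg_lists])
    # Add-one smoothing keeps the empirical p-value away from zero.
    return (1 + np.sum(null >= obs)) / (1 + n_perm)

genes = np.arange(1000)
deg_lists = [set(np.random.default_rng(k).choice(genes, 80)) for k in range(6)]
print(pathway_pvalue(deg_lists, genes[:25], genes))
```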
In this paper, we present a personalised approach for identifying enriched pathways given time-course observations from multiple two-sample (matched case-control) pairs; the overall workflow is illustrated in the accompanying figure. We apply our method on gene expression microarray datasets with varying numbers of case/control observations per pair and uneven sampling times. We performed three types of analyses using Datasets 1 and 2, described in the section on Data: early disease progression time-course (TC) analysis across the whole study period, time-series analysis within a window before seroconversion (WSC), and time-series analysis within a window before T1D diagnosis (WT1D). We compared the results obtained using our proposed personalised approach in each of the three analyses with those obtained using a combined method. We identified the DE features from each case-control pair separately by fitting two models, joint and separate (see the equations in Methods). In the joint model, a GP regression is fit to all samples from a case-control pair together (corresponding to the null hypothesis), whereas in the separate model, GP regressions are fit to the cases and controls separately (the alternative hypothesis). We quantified the fit of each model using BF-scores in the time-course analysis and KL-scores in the time-window analyses. Our personalised approach is significantly different from the combined method, in which we compute the associated BF-scores and KL-scores per feature by pooling together all the cases and all the controls to form a set of combined cases and controls (assuming the gene expression difference is homogeneous across the whole study population). In the combined method, the enriched pathways are then identified using a standard one-sided Fisher's exact test. A detailed description of the personalised and combined methods can be found in the Methods section.
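Below is a minimal sketch of the joint-versus-separate comparison described above, using log marginal likelihoods from GP fits as an approximate Bayes-factor score for one gene in one case-control pair. The kernel and the use of sklearn are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_lml(t, y):
    """Fit a GP and return its log marginal likelihood."""
    kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
    return gp.log_marginal_likelihood_value_

def log_bayes_factor(t_case, y_case, t_ctrl, y_ctrl):
    """log BF = log p(separate models) - log p(joint model)."""
    separate = gp_lml(t_case, y_case) + gp_lml(t_ctrl, y_ctrl)
    joint = gp_lml(np.vstack([t_case, t_ctrl]), np.concatenate([y_case, y_ctrl]))
    return separate - joint

# Synthetic example: the case trajectory is shifted upwards.
t = np.linspace(0, 18, 7).reshape(-1, 1)
rng = np.random.default_rng(2)
y_ctrl = np.sin(t.ravel() / 4) + rng.normal(0, 0.1, 7)
y_case = y_ctrl + 1.5
print(log_bayes_factor(t, y_case, t, y_ctrl))  # large values favour DE
```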
For pathway information, we used the Molecular Signatures Database (MSigDB), which is a collection of annotated gene sets38. We performed pathway-level analyses using 16808 (of 17786) pathways from the collection. Differentially expressed genes (DEGs) were identified in a direction-agnostic manner for pathway-level evaluation in all three analyses using both the personalised and combined approaches. In Datasets 1 and 2, the personalised approach resulted in an average of 895, 1127 and 1677 genes DE in the TC, WSC and WT1D analyses, respectively. On average, 14% of the DEGs overlapped between the DEG lists of each case-control pair in the three analyses, thereby demonstrating heterogeneity among case-control pairs. The combined method resulted in 436, 234, and 563 genes as DE in the TC, WSC and WT1D analyses, respectively. The overlap of DEGs between the two approaches was significant in all analyses. The personalised approach accounts for the heterogeneity between the pairs in time-course and time-window analyses. Firstly, if probe-sets are used, the differential expression of a gene in a case-control pair could be attributed to any of its probe-sets regardless of the probe-set expressed in other pairs. Secondly, the dynamics of gene expression and even the direction of regulation of a DEG is allowed to vary from one case-control pair to another. Although unclear, certain genes may behave inconsistently across individuals due to the presence of certain other genes; or any deviation, regardless of the direction, could result in disease-associated perturbation, possibly because of the mechanism of regulating the pathway2. Thirdly, even a gene that is not differentially regulated in most of the case-control pairs can be relevant on the pathway-level. Finally, the GP modelling was able to robustly interpolate over unobserved time points, which was especially important in time-window analyses where sometimes only a few or no samples were available for determining differential expression. The combined method, on the other hand, is more stringent when identifying DEGs in time-course and time-window analyses. For a gene to be identified as DE using this method, a feature is usually required to be DE in almost all of the pairs. Furthermore, if a gene exhibits different temporal expression dynamics or is regulated in opposite directions in different pairs, this model is unlikely to identify it as differentially expressed. To illustrate the above-mentioned traits, we examined the expression of the genes encoding the only two autoantigens that were differentially expressed, PTPRN2 and HSPD1 from the T1D pathway. PTPRN2 encodes a major islet autoantigen in T1D, which plays an important role in insulin secretion in response to glucose stimuli by maintaining normal levels of insulin-containing vesicles and preventing their degradation39. HSPD1 is considered a pro-apoptotic or anti-apoptotic regulator of apoptosis, depending on the circumstances40; its high levels have been associated with diabetes, as well as with increased expression of inflammatory genes and release of pro-inflammatory cytokines42. In the TC analysis of Dataset 1 using the personalised approach, case-control pairs 2, 7, 9, and 10 differentially downregulated only the PTPRN2 gene; pair 3 downregulated only the HSPD1 gene; and pair 8 downregulated HSPD1 but upregulated PTPRN2. Here, the pairs regulating HSPD1 differentially express different probe-sets of the gene, whereas all pairs regulating PTPRN2 differentially express the same probe-set. However, pair 8 upregulated PTPRN2 when other pairs downregulated it.
Coincidentally, pair 8 is the only pair that expressed both PTPRN2 and HSPD1 in this data, and it downregulated HSPD1 while upregulating PTPRN2, which may indicate a correlation between the two. On the other hand, the combined method found significance only in the PTPRN2 gene, since 5 of 6 case-control pairs differentially expressed the same probe-set. Moreover, Supplementary Figs. 1 and 2 show the expression of HLA_DPB1 (probe-set: 11760799_x_at) and IRF5 (probe-set: 11726687_a_at), where the case-control pairs regulate the genes in inconsistent directions. Here, the combined method identifies HLA_DPB1 as DE, whereas IRF5 is classified as insignificant. The personalised approach, however, identifies both of these genes as significant in all pairs. Enriched pathways (FDR\u2009<\u20090.05) were identified in the TC and WSC analyses of Dataset 1 and in the WT1D analysis of Dataset 2. Similarly, 124, 307, and 2550 pathways were found to be significantly enriched with FDR\u2009<\u20090.1 in the TC, WSC and WT1D analyses, respectively, using the personalised approach. The enriched pathways found by the personalised approach in all analyses overlapped significantly with the enriched pathways identified in the TC analysis using the combined method, but overlapped insignificantly with the results from the time-window analyses using the combined method. The interesting pathways discussed below that were identified using the personalised approach are illustrated in the corresponding figure, where the pathways also reported by Kallionp\u00e4\u00e4 et al.31 and the combined method are highlighted with different colours. The personalised approach identified significant (FDR\u2009<\u20090.05) pathways related to immune response, interferon-\u03b3 (IFN\u03b3) signalling, regulation of inflammatory process to antigenic stimulus, chemokine mediated signalling, and detection of other organism in all three analyses, suggesting their relevance at all stages of the disease (see Supplementary Data). Of these, Kallionp\u00e4\u00e4 et al.31 only identified immune response and IFN\u03b3 signalling related pathways as enriched (FDR\u2009<\u20090.05) in all analyses, and the detection of other organism pathway was found enriched (FDR\u2009<\u20090.05) in only the WT1D analysis. Multiple interesting overarching pathways were identified as enriched by the personalised approach uniquely in the time-windows right before seroconversion and T1D diagnosis, which were also found by Kallionp\u00e4\u00e4 et al.31 in at least one of the analyses. These include the pathways related to cytokine mediated signalling, TNF signalling, regulation of dendritic cell (DC) differentiation, and DC maturation. However, in contrast to the Kallionp\u00e4\u00e4 et al.31 results, the personalised approach was also able to highlight specific cytokine pathways that could be involved in the cytokine mediated signalling, as well as possible pathways necessary to regulate/conduct the immune response. In particular, IL-2 and IL-10 related pathways were enriched along with immunoglobulin production and leucocyte mediated immunity. While IFN\u03b3 signalling was found significant at all stages of the disease, interferon-\u03b1 (IFN\u03b1) and interferon-\u03b2 (IFN\u03b2) signalling were enriched only in the TC and WT1D analyses using the personalised approach, whereas Kallionp\u00e4\u00e4 et al.31 associate their relevance at all stages.
In addition, we found other T1D-associated pathways, such as PD1 signalling, IL-1 receptor binding, regulation of IL-4 production and positive regulation of B cell mediated immunity, to be enriched in the TC and WT1D analyses. However, Kallionp\u00e4\u00e4 et al.31 were unable to detect them. Intriguingly, the personalised method found several pathways that were uniquely enriched during the early prognosis of T1D and in the 6 months window before T1D diagnosis. Furthermore, distinct disease-relevant pathways were determined as uniquely enriched before seroconversion, before T1D diagnosis or during the early stages of T1D progression using only the personalised approach. Specifically, pathways related to natural killer cell-mediated cytotoxicity and Fas signalling were found to be uniquely significant during the early stages of T1D progression and before seroconversion, respectively. Most strikingly, pathways regulating the production of multiple different pro-inflammatory and anti-inflammatory cytokines, such as Interleukin-1, -1\u03b2, -2, -4, -5, -6, -10, -12, -21 and -22, as well as the related overarching pathways, were found enriched in the 6 months before clinical onset of T1D, where more than half of the cytokine pathways were unique to this time-window. For assessing the generalisability of the results from our personalised approach, we performed TC analysis also on a third independent dataset, namely Dataset 3, and performed Spearman\u2019s rank correlation test between the FDR values of all pathways obtained from analysing Datasets 1 and 3. The Spearman\u2019s rank correlation value (\u03c1) for all pathways was 0.203, which was found to be highly statistically significant with p-value\u2009<\u200910\u221215. The same correlation test performed on the 32 disease-relevant pathways (highlighted in Supplementary Data) found enriched in the TC analysis using Dataset 1 resulted in \u03c1\u2009=\u20090.604, which was also highly statistically significant with p-value\u2009<\u200910\u22123. Most importantly, many of the disease-relevant pathways found enriched in the TC analysis using Dataset 1 were found enriched using Dataset 3 as well, including the T1D pathway and pathways related to immune response; interferon-\u03b1, -\u03b2 and -\u03b3 signalling; antigen processing and presentation; cytokine-mediated signalling; and IL-1 and IL-4 production. The type 1 diabetes pathway was found enriched in all three analyses of Datasets 1 and 2, as well as in the TC analysis of Dataset 3, using the personalised approach. However, the combined method did not find it significant in any of the analyses, and Kallionp\u00e4\u00e4 et al.31 found its significance only in the late stages of the disease, i.e., the window before clinical onset of T1D. The corresponding figures for Datasets 1 and 2 clearly illustrate that only a small fraction of the pathway\u2019s genes are differentially expressed (DE) in most of the case-control pairs and only a subset of these genes is DE in each child. Moreover, the subset of DE genes varies from one pair to another. It is not clearly understood how the presence of certain genes influences that of the other genes; therefore, it is not easy to predict which genes in a pathway are selectively or necessarily expressed. When the T1D pathway genes were functionally divided into 3 main sub-processes (release and presentation of autoantigens; activation of CD4+, CD8+ T cells and macrophages; and apoptosis of \u03b2-cells), it was noticed that at least one gene from each sub-process was identified as DE in each pair.
Some pairs did not differentially express any of the (auto)antigen encoding genes, which could indicate an environmental source of (auto)antigens instead of a genetic one. Similar phenomena may be expected from most other pathways. As an additional example, the IFN\u03b3 signalling pathway is depicted in the Supplementary Figures. The combined method identified as DE only those genes that were DE in almost all the pairs. The results of this paper demonstrate that a personalised approach of identifying differentially expressed genes (DEGs) and summarising them on a pathway-level can reveal more insight into the progression of heterogeneous diseases, such as type 1 diabetes (T1D), than commonly used non-personalised approaches that assume differences between cases and controls to be consistent across the whole study population, such as the combined method presented in this paper. Even though a significant number of pathways identified by the two approaches overlapped, the combined method was unable to identify the significance of most of the disease-relevant and interesting pathways that were identified by the personalised approach in all the analyses. The combined model identified DEGs in a strict manner that may also be biologically unrealistic, which probably impeded its ability to pinpoint most of the disease-relevant and intriguing pathways. For validation, the results from the personalised approach (Datasets 1 and 2) were compared to the results from Kallionp\u00e4\u00e4 et al.31, who analysed the same datasets using a rank product algorithm introduced by Breitling et al.43 for identifying DEGs, which can account for neither the dynamics of the time-course data nor the heterogeneity. Moreover, they estimated unobserved values in time-window analyses via linear inter-/extrapolation, whereas we applied Gaussian process modelling, which is known to be more robust. A significant number of pathways identified as enriched by the personalised approach overlapped with the Kallionp\u00e4\u00e4 et al.31 results. However, while Kallionp\u00e4\u00e4 et al.31 identified mostly the overarching pathways as enriched, the personalised approach recognised the significance of the overarching pathways, as well as of more specialised pathways that illustrate the intrinsic mechanisms by which the disease develops. Also, the analysis of Dataset 3 using our personalised approach demonstrated the generalisability of our pathway-level results concerning other T1D datasets. Considering that T1D is a complex autoimmune disease characterised by insulitis, the chronic inflammation of the pancreatic islets of Langerhans caused by autoreactive CD4+ and CD8+ T cells44, the combined method mostly identified pathways representing the immune response, such as IFN\u03b3 signalling, chemokine-mediated signalling, and DC differentiation and maturation. Even though these pathways are highly relevant in the context of the disease, they mostly represent only the initiating events in the development of the disease: release of autoantigens; their uptake by antigen presenting cells (APCs), such as DCs, for antigen presentation in a complex with MHC class proteins44; and migration of DCs to pancreatic lymph nodes (pLN) to activate \u03b2-cell specific autoreactive T cells44, known as DC maturation46. Meanwhile, other important and disease-relevant pathways are underrepresented using the combined model.
IFN\u03b3 is produced by autoreactive CD4+ and CD8+ T cells47 and is believed to play a key role in driving the autoimmune pathogenesis of T1D50, even though it is not considered solely a pro-inflammatory cytokine47. IFN\u03b3 also results in local upregulation of chemotactic cues that induce immune cell migration to the islets, for instance via chemokine mediated signalling, where \u03b2-cells produce certain chemokines that can accelerate or block T1D progression35. Fascinatingly, our approach also identified a pathway, \u2018detection of other organism\u2019, which connotes an existing postulation that environmental factors, such as microbial infections, can trigger the disease process leading to T1D in genetically susceptible individuals51. The personalised approach also finds the above-mentioned pathways enriched in its analyses, including immune response related and T1D pathways, along with many other disease-relevant pathways. In all the analyses, our approach identifies the pathways related to IFN\u03b3 signalling and chemokine-mediated signalling as enriched. One of the most interesting questions asked in T1D studies regards the changes that transpire in the time-window leading up to life-changing events, such as seroconversion and clinical onset of T1D. Using the personalised approach, multiple immunologically relevant pathways were revealed to be uniquely enriched in both time-windows of interest, such as TNF signalling, where TNF-\u03b1 has been linked to the development of T1D52; DC differentiation and maturation48; and cytokine-mediated signalling45, which acts like an all-encompassing, but vague, pathway for all cytokines. The method was able to determine additional relevant pathways in these two time-windows that were not identifiable from the Kallionp\u00e4\u00e4 et al.31 results: immunoglobulin production, as well as IL-2 and IL-10 regulating pathways. In fact, it is the increase in production of islet autoantibodies or immunoglobulin that marks the seroconversion event in the life of an individual susceptible to T1D31. Meanwhile, enrichment of IL-2 and IL-10 signalling pathways before seroconversion indicates the possible anti-inflammatory processes that occur to resist the progression of the disease. IL-10 is an anti-inflammatory cytokine secreted primarily by Tregs and \u03b2-cell autoantigen recognising CD4+ T cells45. It inhibits the production of multiple pro-inflammatory cytokines, including IFN\u03b3, TNF-\u03b1, IL-5, IL-1\u03b2, etc.50, and is only marginally less prevalent in T1D patients studied at the time of diagnosis than in healthy subjects45. IL-2 is a cytokine that can lead to prevention or pathogenesis of the disease depending on its own concentration, the concentrations of other local cytokines55 and polymorphisms in the genes of its pathway45. In low dose, IL-2 signalling is believed to rescue insulin secretion55. However, it may result in accelerated autoimmune tissue destruction in the time-window before diagnosis due to the enriched regulation of IL-1 signalling in that time-window, as IL-1 enhances IL-2 production54. Our results identify an increased number of pathways enriched in the window before T1D onset as compared to the window before seroconversion, demonstrating the mayhem that precedes a clinical diagnosis. In particular, the number of cytokine regulating pathways increased manifold, and more than half were unique to this time-window. Along with anti-inflammatory cytokines, such as IL-10 and IL-456, many pro-inflammatory cytokine regulating pathways were enriched, such as those of IL-1, IL-1\u03b2, IL-550, IL-650, IL-1246, IL-2157, IL-2250, IFN\u03b3 and TNF-\u03b1.
In the absence of IFN\u03b3 and TNF-\u03b1, the cytokines IL-2, IL-1\u03b2 and IL-6 are considered anti-inflammatory55, but in their presence, these cytokines aggravate the inflammatory disease pattern, which is probably the case in the time-window before T1D diagnosis. Some of the pathways that were found enriched in the time-window before T1D diagnosis were also found enriched during the early stages of T1D progression using the personalised approach, possibly indicating that key players from late stages of the disease may already be detected at the early stages. These included both pro-inflammatory and anti-inflammatory pathways, such as those of IL-1 and IL-4, as well as IFN\u03b1 and PD-1 signalling. IL-1 is a pro-inflammatory cytokine that enhances the production of IL-2, encourages B cell proliferation, and increases immunoglobulin production54; whereas IL-4 is an anti-inflammatory Th2 cytokine that inhibits autoimmunity by downregulating the production of pro-inflammatory cytokines, such as IL-1, IL-6, and TNF-\u03b156. Through mice studies, IFN\u03b1 and PD-1 signalling pathways have been established as important contributors to T1D pathogenesis from an early stage of the disease60. Whereas upregulation of IFN\u03b1 in pLN is an initiator of the pathogenesis59, upregulation of programmed cell death protein 1 (PD-1) signalling prevents T1D and promotes self-tolerance by suppressing the expansion and infiltration of autoreactive T cells in the pancreas61. In fact, blocking IFN\u03b1 signalling before clinical T1D onset has been shown to prevent \u03b2-cell apoptosis or even abort T1D progression58. In addition, the PD-1 pathway has been proposed as a target for a new therapy for preventing and modulating autoimmunity61. Fascinatingly, the natural killer (NK) cell mediated cytotoxicity pathway was found to be uniquely enriched during the early stages of T1D. NK cells are believed to be involved in multiple steps of the immune-mediated attack causing T1D, as they are known to interact with antigen-presenting T cells, secrete pro-inflammatory cytokines and induce apoptosis in the target cells63. Similarly, the Fas signalling pathway was found to be uniquely enriched before seroconversion. Since it is one of the pathways mediated by autoreactive CD8+ T cells that is directly involved in the destruction of \u03b2-cells46, it demonstrates that \u03b2-cell killing can be observed much before the clinical onset of T1D. Even though the personalised approach is able to identify many immunologically-relevant and disease-relevant pathways, it has scope for further development. The current implementation assumes Gaussian distributed data; it may be possible to improve the accuracy of differential gene expression detection for datasets that have notably non-Gaussian characteristics, either by using a different likelihood model or by performing appropriate transformations. In addition, the proposed approach has been implemented for a matched case-control setting.
However, with small modifications to the model, it could be extended to a non-matched case-control setting, where each case is compared to all the controls in the dataset. A Gaussian process is a generalisation of the Gaussian distribution. It can be seen as defining a distribution over functions, with inference taking place directly in the space of functions64. We denote the N measurement time points by T = (t1, t2, \u2026, tN) and the corresponding function values by f(T) = (f(t1), f(t2), \u2026, f(tN)). The GP is defined as f(t) ~ GP(\u03bc(t), k(t, t' | \u03b8)), where \u03bc(t) is the mean, which we assume to be zero, and k(t, t' | \u03b8) is the covariance function with hyperparameters \u03b8. The observations x include additive noise \u03f5, where the Gaussian observation model is defined as x = f(t) + \u03f5 with \u03f5 ~ N(0, \u03c3n^2). We use the squared exponential kernel k(t, t') = \u03c3f^2 exp(-(t - t')^2 / (2\u2113se^2)), where \u2113se is the length-scale parameter that controls the smoothness and \u03c3f^2 is the magnitude. Given the observed data X, the measurement time points T and test time points T*, we obtain the posterior distribution of the function values at T* from the standard GP regression equations, writing KT,T = KT,T(\u03b8) for brevity. The gene expression data is first centred to zero by subtracting the mean of the data for GP fitting. This is done independently for the case, control and pooled (case and control) data. For the length-scale (\u2113se) parameter of the squared exponential kernel, we specify a Gaussian prior. We chose the value of \u03bc to correspond to 30 weeks, which results in a small probability of short length-scales and provides a reasonable range of feasible length-scales. The noise variance parameter is assigned a scaled inverse chi-square prior (\u03c32\u2009=\u20090.01 and \u03bd\u2009=\u20091) to restrict it to smaller magnitudes. We use the same (hyper)parameter priors for the case, control, as well as joint GPs. The choice of hyperparameters has a significant effect on the resulting kernel with respect to its smoothness and magnitude. Computing the exact marginal likelihood (ML) is computationally intractable due to the marginalisation over the hyperparameters. One approach to solving this problem would be to simply maximise the ML with respect to the hyperparameters. Such an approximation is known as type II maximum likelihood (ML-II) and can lead to overfitting. Instead, we use the central composite design (CCD) for posterior prediction, as proposed in Rue et al.65 and Vanhatalo et al.66, to approximate the exact marginal likelihood. CCD assumes a split-Gaussian posterior for the log-transformed hyperparameters and defines a set of R points in the hyperparameter space66. We estimate the ML by using the R CCD points that are located around the high-probability region of the posterior, but by replacing the split-Gaussian approximation used for posterior predictions with the exact product of likelihood and prior. In other words, we take the weighted sum of the posterior probability evaluated at the R points of the hyperparameter space, weighted by the integration weights. For a model M with data X, the estimated ML is given by p(X | M) \u2248 \u03a3r \u0394r p(X | \u03b8r, M) p(\u03b8r | M), where \u0394r is the rth integration weight that corresponds to the volume of hyperparameter space allocated to the rth point. The obtained estimated ML for each model is then used to compute a Bayes factor score, which is used for model selection and for identifying differentially expressed genes (DEGs), as discussed below. To identify whether a feature is differentially expressed (DE) between a matched case-control pair, we fit a joint and a separate model to the expression data and identify which model better explains the observed data. The joint model involves fitting a Gaussian process over all the data points (cases and controls pooled), whereas the separate model involves independently fitting a GP to only the data points corresponding to the cases and fitting another GP to only the data points corresponding to the controls. After the fitting, model selection is performed to choose between the joint and separate model.
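To make the fitting step concrete, the following is a minimal Python sketch of the log marginal likelihood of a zero-mean GP with a squared exponential kernel and Gaussian noise, evaluated at fixed hyperparameters. It is not the authors' implementation: the CCD integration over the hyperparameter posterior described above is omitted, and all numbers are illustrative.

```python
# Hedged sketch: GP log marginal likelihood for fixed hyperparameters.
import numpy as np

def se_kernel(t1, t2, magnitude, lengthscale):
    # k(t, t') = magnitude^2 * exp(-(t - t')^2 / (2 * lengthscale^2))
    d = t1[:, None] - t2[None, :]
    return magnitude**2 * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_log_marginal_likelihood(t, x, magnitude, lengthscale, noise_var):
    K = se_kernel(t, t, magnitude, lengthscale) + noise_var * np.eye(len(t))
    L = np.linalg.cholesky(K)                       # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, x))  # alpha = K^{-1} x
    # log p(x) = -x^T K^{-1} x / 2 - log|K| / 2 - N log(2 pi) / 2
    return (-0.5 * x @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(t) * np.log(2 * np.pi))

# Toy usage: centred expression values x measured at weeks t.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
x = np.array([0.3, 0.1, -0.2, -0.4, 0.2])
print(gp_log_marginal_likelihood(t, x, 1.0, 30.0, 0.01))
```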
If the joint model is chosen, we conclude that the case and control expressions for the specific feature come from the same process and hence the feature is not differentially expressed. Alternatively, if the separate model is chosen, we conclude that the case and control expressions for the corresponding feature come from different processes and hence the feature is differentially expressed. Assume two independent models, MA and MB, which are fit to the case and control time-courses of a particular feature, xA and xB, respectively. Also, let a joint model, MS, be fit to the pooled data xS\u2009=\u2009(xA, xB). A standard statistical test would compare models MA and MB (separate models) against the joint model, MS. Hence, the null hypothesis corresponds to no differential expression and the alternative hypothesis corresponds to the presence of differential expression19. To perform model selection, we compute a Bayes factor score for each feature and case-control pair separately. This is calculated as the log ratio of the marginal likelihoods of the separate and joint models67. In case of probe-set data, we then map the probe-sets to their corresponding gene names. If multiple probe-sets map to the same gene name, we choose the probe-set with the largest BF-score to represent the gene. This is done independently for each case-control pair, which allows the flexibility of choosing different probe-sets between pairs to represent the same gene. In addition to the TC analysis, we also detect disrupted pathways within certain time-windows. This approach could potentially be used to identify the pathways that are affected before a significant event in the prognosis of a disease and hence can have applications in predictive medicine. The size of the time-window can be chosen as any appropriate duration. Here, we chose to detect significant genes by comparing the expression levels of features between each case-control pair in a 26 week (approx. 6 months) time-window prior to the seroconversion event and clinical disease onset. We compute the posterior mean and variance of the latent variables of the Gaussian processes within the chosen time-window, evaluated over a grid T* that defines a time discretisation of the 26 week interval, where q(\u03b8r\u2223M) is the split-Gaussian approximative posterior over the hyperparameters. Comparisons of the time-window predictions between the separate model (comprising separate GP fits for the cases and controls) and the joint model (a single GP fit to the pooled case and control data points) can be made by comparing the predictive distributions using the Kullback\u2013Leibler (KL) divergence68. The Kullback\u2013Leibler divergence for two distributions with densities p and q is KL(p \u2016 q) = \u222b p(x) log(p(x)/q(x)) dx. To examine the expression level of a probe-set in the time-window, we compare the predictive distributions of the separate model (represented by MA and MB) and the joint model (represented by MS), which are multivariate Gaussians with dimension equal to twice the number of weeks in the time-window; here the dimension k of the multivariate Gaussian is 2\u2009\u00d7\u200926 (weeks)69. The symmetric KL divergence gives a KL-score for each feature. In case of probe-set data, we again map the DE probe-sets to their corresponding gene names; if multiple probe-sets map to the same gene name, we choose the probe-set with the largest KL-score to represent the gene.
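The two scores described above can be sketched as follows, assuming the log marginal likelihoods and the predictive means and covariances have already been computed; function and variable names are our own, not the paper's.

```python
# Hedged sketch of the BF-score and the symmetric KL-score.
import numpy as np

def bf_score(log_ml_case, log_ml_control, log_ml_joint):
    # Log Bayes factor: separate models (alternative) vs. joint model (null).
    return log_ml_case + log_ml_control - log_ml_joint

def kl_gauss(mu_p, cov_p, mu_q, cov_q):
    # KL(p || q) for two k-dimensional Gaussians.
    k = len(mu_p)
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    _, logdet_p = np.linalg.slogdet(cov_p)
    _, logdet_q = np.linalg.slogdet(cov_q)
    return 0.5 * (np.trace(cov_q_inv @ cov_p)
                  + diff @ cov_q_inv @ diff
                  - k + logdet_q - logdet_p)

def kl_score(mu_sep, cov_sep, mu_joint, cov_joint):
    # Symmetrised KL between separate- and joint-model predictions
    # over the time-window grid.
    return (kl_gauss(mu_sep, cov_sep, mu_joint, cov_joint)
            + kl_gauss(mu_joint, cov_joint, mu_sep, cov_sep))
```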
We propose an empirical hypothesis testing method that can identify statistically enriched pathways from the DE genes (DEGs) that are identified for all case-control pairs separately. We define an overall enrichment score for each pathway using the DEGs from each case-control pair and a statistic we call the adjusted geometric mean. Our enrichment analysis uses the number of DEGs from each case-control pair that overlap a given pathway. To account for the fact that a higher number of DEGs in a case-control pair leads to a higher probability of overlap with a pathway, we divide the raw number of DEGs from a case-control pair in a pathway by the total number of DEGs in that case-control pair. Thus, we compute the scaled pathway overlap fi,j for the jth case-control pair and ith pathway, where the numerator refers to the number of DEGs in the jth case-control pair belonging to the ith pathway, diff.exp.genesj refers to the total number of DEGs in the jth case-control pair, and \u03b1 is a small constant. Assuming m case-control pairs, we define the adjusted geometric mean of the ith pathway as the geometric mean of the scaled overlaps fi,1, \u2026, fi,m. The adjusted geometric mean ensures that no case-control pair dominates the overall enrichment score and helps to take into account the different number of DEGs from each case-control pair. After the adjusted geometric mean scores for each pathway are computed, we identify the statistically enriched pathways by performing a permutation test and obtain p-values for each pathway. Let S be a G \u00d7 m score matrix (where G corresponds to the total number of features and m is the number of case-control pairs) such that Sg,j contains the BF-score or KL-score for the gth feature and the jth case-control pair. Our permutation strategy reorders the feature labels of the rows, which retains the possible correlations among the scores for the features across the case-control pairs. In other words, we fix the matrix S and shuffle just the associated features such that each row is randomly assigned a feature. In case of probe-set data, after the reordering (shuffling), the probe-sets are again assigned to gene names and the enrichment scores (adjusted geometric mean scores) are computed. This process of feature label shuffling and computing enrichment scores is repeated 100,000 times to get the permutation distribution that is used to compute the p-values. A lower number of permutations was used for the Ferreira et al.37 dataset, which was also sufficient. The permutation distribution acts as the null distribution from which we empirically compute the p-value for a pathway. We compare our personalised pathway enrichment results with two standard approaches. In the first comparison, we imitate the standard approach of performing DE analysis at the population level followed by pathway analysis, to act as a comparison with our personalised approach. We pool the gene expressions from all the cases and all the controls to obtain a single case-control set of readings, and then compute a single list of differentially expressed features. In this combined method, we again fit two different models (joint and separate) to the pooled data and score each feature accordingly. In case of probe-set data, the DE probe-sets are mapped to their corresponding gene names. To evaluate the enrichment of each pathway, we perform a one-sided Fisher\u2019s exact test and compute p-values71. In the second comparison, we compare our personalised approach to the results published in Kallionp\u00e4\u00e4 et al.31, which were obtained using a rank product algorithm. The rank product algorithm ranks expression values in increasing or decreasing order, and genes that rank consistently high across replicates score a small geometric mean rank. It is a technique derived from biological reasoning. However, it does not account for the heterogeneity of the disease and is not suitable for the dynamic analysis of time-course data.
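A minimal sketch of the personalised enrichment statistic and the empirical p-value described above follows. The exact placement of the small constant alpha is an assumption, since the formula is garbled in the extracted text, and all names are illustrative.

```python
# Hedged sketch: adjusted geometric mean and permutation p-value.
import numpy as np

def adjusted_geometric_mean(n_overlap, n_de, alpha=1e-6):
    """n_overlap[j]: DEGs of pair j inside the pathway;
    n_de[j]: all DEGs of pair j; alpha: small constant
    (assumed to guard against zero overlaps)."""
    f = (np.asarray(n_overlap, float) + alpha) / (np.asarray(n_de, float) + alpha)
    return np.exp(np.mean(np.log(f)))  # geometric mean over the m pairs

def permutation_pvalue(observed, null_scores):
    # Empirical p-value with the standard +1 correction.
    null_scores = np.asarray(null_scores)
    return (np.sum(null_scores >= observed) + 1) / (len(null_scores) + 1)

# Toy usage: a pathway overlapping 12, 30 and 8 of three pairs' DEG lists.
score = adjusted_geometric_mean([12, 30, 8], [895, 1127, 1677])
```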
For TC analysis, expression values were first normalised for each case-control pair using the z-score, and case-wise minimum as well as maximum values were used to identify downregulated or upregulated features. For time-window analyses, in each window (WSC or WT1D), per-feature fold changes between cases and matched controls were calculated using linear inter-/extrapolation and then used for rank-product analysis. See Kallionp\u00e4\u00e4 et al.31 for further details. In order to keep the pathway-level results from Kallionp\u00e4\u00e4 et al.31 and our approaches comparable, we performed a one-sided Fisher\u2019s exact test on the gene-level results from all three analyses presented in Kallionp\u00e4\u00e4 et al.31 using the pathway information from MSigDB38. In addition, we added noise to Dataset 1 and performed pathway-level inference on the noisy data to demonstrate the robustness of our method to noisy data. Time complexity of GP modelling scales as O(N\u00b3), where N is the number of time points for a probe-set (for a single case-control pair). This is usually non-problematic, as most time-course gene expression datasets have small sample sizes. Our personalised approach largely takes ~3\u2009h to calculate the differential expression scores for all the probe-sets and ~8\u2009h to generate the permutation distribution. Further details can be found in the Supplementary Methods. Further information on research design is available in the Nature Research Reporting Summary linked to this article. Supplementary Information. Supplementary Data."} +{"text": "Stroke remains the leading cause of disability and death in the Philippines. Evaluating the current state of stroke care, the needed resources, and the gaps in health policies and programs is crucial to decrease stroke-related mortality and morbidity effectively. This paper aims to characterize the Philippines' stroke system of care and network using the World Health Organization health system building blocks framework. To integrate existing national laws and policies governing stroke and its risk factors dispersed across many general policies, the Philippine Department of Health (DOH) institutionalized a national policy framework for preventing and managing stroke. Despite policy reforms, government financing coverage remains limited. In terms of access to medicines, the government launched its stroke medicine access program (MAP) in 2016, providing more than 1,000 vials of recombinant tissue plasminogen activator (rTPA) or alteplase subsidized to selected government hospitals across the country. However, DOH discontinued the program due to the lack of neuroimaging machines and an organized system of care to support the provision of the said medicine. Despite limited resources, stroke diagnostics and treatment facilities are concentrated in urban settings, mostly in private hospitals, where out-of-pocket expenditures prevail. These barriers to access are also reflective of the current state of human resources for stroke, where medical specialists serve in the few tertiary and training hospitals situated in urban settings. Meanwhile, there is no established unified national stroke registry; thus, determining the local burden of stroke remains a challenge.
The lack of centralization and the fragmentation of the stroke case reporting system lead to reliance on data from hospital records or community-based stroke surveys, which may inaccurately depict the country's actual stroke incidence and prevalence. Based on these gaps, specific recommendations geared toward a systems approach - governance, financing, information systems, human resources for health, and medicines - were identified. The Philippines is an archipelagic nation with over 7,100 islands divided into three major island groups - Luzon, Visayas, and Mindanao - with its capital Manila located on the largest island, Luzon. From 2009 to 2019, stroke remained the second leading cause of death and one of the top five leading causes of disability in the Philippines. The true burden of stroke in the country, however, is difficult to estimate given the gaps in case reporting. The Philippines' Local Government Code of 1991 has resulted in the devolution of different health services in the country, transferring the management of health systems from the national level to the provincial, city, and municipal level or the local government units (LGUs). Thus, the availability and quality of health services vary across LGUs. Contributing further to this challenge is the country's archipelagic nature, making health services delivery even more difficult. Geographically isolated and disadvantaged areas have limited access to health facilities. Added to this burden is the migration of health professionals to other countries in search of better wages, compromising health care delivery. These health system constraints weigh heavily on specialised services such as stroke care. In resource-limited settings like the Philippines, reporting comprehensive documentation of the current state of stroke care and identifying existing gaps and challenges can support the prioritization of measures to reduce the country's stroke-related mortality and morbidity. This paper aims to characterize the stroke care system in the Philippines using the World Health Organization (WHO) building blocks of the health system framework. The Philippine Department of Health (DOH) has recently enacted Administrative Order No. 2020-0059, or the National Policy Framework on the Prevention, Control and Management of Acute Stroke in the Philippines. The policy aims to develop protocols for diagnosis, treatment, related care, support, and the establishment of referral pathways that are cost-effective and widely used. The policy further seeks to build capacity for acute stroke management and establish Acute Stroke Ready Hospitals. National policies and legislations are also in place to address the risk factors associated with non-communicable diseases, including stroke. For instance, DOH developed a national multisectoral plan and a strategic action plan for NCD prevention for the years 2017\u20132025. Alongside this is the Philippine Plan of Action for Nutrition for 2017\u20132022, which includes the issues of overweight and obesity. The Philippine government also implemented tobacco and alcohol taxation through two republic acts in 2012 and 2017. However, the Philippine Health Insurance Corporation (PhilHealth), the country's national health insurance system, reimburses only USD 560 and USD 760 for ischemic and hemorrhagic stroke, respectively, both of which cover professional and healthcare institution fees; thrombolysis costs well exceed these amounts. On the other hand, rehabilitation costs after stroke can range from USD 53.50 to as much as USD 4,591.60, based on a 2015 study by Akhavan Hejazi et al. in Malaysia. The costs include those for attendant care, medical aid, travel expenses, medical fees, and out-of-pocket expenses.
These costs are largely borne by patients and their families. In terms of preventive services related to stroke, PhilHealth mentions the inclusion of regular blood pressure measurements, counseling for lifestyle modification and smoking cessation, and several drugs for hypertension management, such as amlodipine and losartan, in its primary care benefit packages. Nicotine replacement therapy has already been included in the Philippine National Formulary Manual for Primary Healthcare, but its benefit package under PhilHealth has yet to be developed. The Philippine National Formulary (PNF) guides healthcare practitioners on the rational use of medicines and provides information on which drugs they can reimburse through the country's national health insurance system. Based on the 8th Edition of the PNF, the drugs indicated for the prevention and management of stroke have included warfarin, aspirin, rTPA, clopidogrel, and dipyridamole. In the Philippines, hospitals are classified based on ownership (government or private), the scope of services, and functional capacity. Aside from acute stroke ready hospitals and units, only 452 rehabilitation centers cater to 148.1 stroke cases per 100,000 population. On the other hand, the reported density of stroke diagnostic equipment such as computed tomography (CT) scanners and magnetic resonance imaging (MRI) is low based on the 2013 WHO data on medical devices. Among Southeast Asian nations, Brunei, Thailand, and Singapore have relatively higher densities than the Philippines. The disparity grows bigger when compared to high-income countries such as Japan, Korea, and Canada. Emergency medical services (EMS) play a vital role in the management of stroke cases. However, EMS in the country is perceived to be fragmented and unstandardized. Several preventive interventions targeting the behavioral risk factors of Filipinos for NCDs, which include stroke, have been implemented in the Philippines. The smoking cessation program of the DOH is one such intervention that includes giving advice to patients at the primary care level and referring them to quit clinics at higher levels of care. In addition, the protocol of DOH provides for possible pharmacologic, psychological, and behavioral interventions to support the patient in stopping smoking. Human resources for health essential to the stroke care system include neurologists, neurosurgeons, physiatrists, stroke nurses, and other health professionals. In 2019, the Philippine Neurological Association reported over 400 adult board-certified neurologists in the country. The health workforce for stroke care is an essential driver for the thrombolysis of patients with acute ischemic stroke. With 53 acute stroke ready hospitals and about 400 neurologists, the national thrombolysis rate ranges only between 2.40% in government and 3.33% in private hospitals. To augment the need for human resources in stroke care, the Stroke Society of the Philippines, together with the World Stroke Organization, has conducted a five-year nationwide stroke training to aid in organizing stroke teams and developing acute stroke ready hospitals and acute stroke units. Both doctors and nurses from different hospitals across the country joined the nationwide rollout of the training. At the moment, stroke is included among the domains of the Philippine Department of Health \u2013 Unified Disease Registry System. However, information shared by the different health facilities, mainly from the government, remains limited.
This limitation prompts the different hospitals to transmit data in different capacities, resulting in a fragmented reporting system. Consequently, the said system generates less accurate estimates of the incidence and prevalence of stroke cases in the country. There is high mortality and morbidity from stroke in the Philippines, especially in areas outside the major cities where access to quality stroke care is limited. The geographical barrier to access further aggravates the gaps in the building blocks of the healthcare system. Addressing these gaps through a systems approach cutting across governance, financing, and service delivery, among others, is a critical precursor to improving overall stroke outcomes in the country. The passage of the national policy framework for stroke prevention, treatment, and management paves the way to consolidate and strengthen efforts for improving stroke care in the country. This should be integrated as a top priority of the health sector. Alongside the policy, the government must continue to invest in improved stroke care by optimizing the financing of health services and packages through its national health insurance. Optimization of financing can include expanding and reviewing existing benefit packages and ensuring that reimbursed medicines are constantly reviewed for their effectiveness and safety. Likewise, financing must include reimbursements for patient-level expenses, but funds should also be allocated to improve system-level components of stroke care, such as those mentioned in the national stroke policy (AO 2020-0059) of DOH. In addition to policies and fiscal interventions, programs targeting the behavioral risk factors for stroke need to be scaled up to support the continuing decrease in rates of obesity and smoking in the country, as reported in the 2019 Expanded National Nutrition Survey. To further support these recommendations, patient awareness must also be improved, as it is crucial in ensuring an efficient stroke care system. In general, there is low stroke awareness across several regions in the country. In the SSP Guidelines for the Prevention, Treatment, and Rehabilitation of Stroke, a community survey reported that only 34.4% of respondents were knowledgeable about stroke, and respondents even misconstrued the disease as a heart attack. Patients' knowledge of when to act in stroke emergencies should be coupled with efficient emergency medical services (EMS). WHO has identified rapid EMS dispatch, rapid EMS system transport, and hospital pre-notification as key considerations in maximizing stroke patient recovery. In a study by Millin et al., effective rapid dispatch and access to EMS and stroke care were shown to be possible through the leadership of medical directors. Furthermore, such strategies include the assignment of catchment areas and criteria for identifying patients that need transportation. Defined pathways should also be available if a facility cannot accommodate the patient and secondary transfers are necessary. As the Philippine health system transitions to universal health care (UHC), establishing a stroke referral process and plan should be integrated into the healthcare provider network.
Identification and coordination of referral centers with an organized stroke team that adheres to the standard stroke care recommendations of the World Stroke Organization can contribute to achieving an efficient treatment pathway. On the other hand, in communities where neurologists are inaccessible, non-neurologists can augment the gap in service delivery through the facilitation of telestroke. Telestroke can improve patient outcomes through access to stroke specialists and an increased thrombolysis rate, and has been shown to achieve outcomes comparable to those of non-telestroke groups of patients. Responsive to the changing landscape of healthcare in the face of a public health emergency, one way forward is to conduct regular virtual training of health care workers, especially those in geographically isolated and disadvantaged areas. Training also needs to be complemented with an adequate and equitable distribution of the health workforce, responsive to Filipinos in need of timely treatment and management of stroke, including post-stroke care. With only one neurologist catering to 330,000 Filipinos based on the study of Navarro et al., the Philippines needs to train and retain substantially more stroke specialists. An interoperable stroke registry within and among public and private facilities across the country is also vital in improving stroke care. In a pilot stroke registry implemented in Nigeria, there was improved stroke awareness, a better CT rate, reduced time to CT, reduced short-term mortality, improved training and competence of interns and residents, as well as better job satisfaction among neurologists. With these recommendations, the systems approach, which includes active monitoring of the indicators of the national stroke policy, can significantly improve stroke outcomes in the Philippines. MC, YZ, and DU took the lead in preparing the draft manuscript for publication. All authors participated in the data collection and analysis, provided input in developing the manuscript, and approved the final version submitted. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Coordination of cardiovascular and respiratory systems enables a wide range of human adaptation and depends upon the functional state of an individual organism. Hypoxia is known to elicit changes in oxygen and carbon dioxide sensitivity, while training alters cardiorespiratory coordination (CRC). The delayed effect of high altitude (HA) acclimatization on CRC in mountaineers remains unknown.
The objective of this study was to compare CRC in acute hypercapnia in mountaineers before and after a HA expedition. Nine trained male mountaineers were investigated at sea level before (Pre-HA) and after a 20-day sojourn at altitudes of 4,000\u20137,000 m (Post-HA) in three states (Baseline, Rebreathing, and Recovery). A principal component (PC) analysis was performed to evaluate the CRC. The number of mountaineers with one PC increased Post-HA (nine out of nine), compared to Pre-HA (five out of nine) [Chi-square (df = 1) = 5.14, P = 0.023]; the percentage of total variance explained by PC1 increased. Post-HA, the loadings of the expired fraction of O2, CO2, and ventilation onto PC1 did not change, and the loading of heart rate increased. During the Recovery, the percentage of total variance explained by PC1 was higher than during the Baseline. Post-HA, there was a high correlation between the Exercise Addiction scores and the eigenvalues of PC1. Thus, acute hypercapnic exposure reveals the Post-HA increase in cardiorespiratory coordination, which is highly related to the level of exercise addiction. Hypoxia is known to cause changes in the brain regulatory circuits, leading to alterations in blood flow and ventilatory sensitivity not only to oxygen but also to carbon dioxide. Even a few hours of hypoxic exposure are known to shift the ventilatory threshold; cerebral blood flow, hormone levels, sympathetic activity, and ventilatory sensitivity are likewise altered at high altitudes and after returning to sea level. In general, the integration of the cardiovascular and respiratory systems can provide a wide range of adaptation of the human organism to varying environmental conditions and depends upon the individual functional state. The research approaches of modern physiology demonstrate the transition from the study of the organs and systems functioning to their interaction and integration, which helps to improve prediction, particularly in sports physiology. We have previously demonstrated that in swimming and skiing the correlations between the responses of the cardiovascular and respiratory systems to acute hypoxic and hypercapnic tests are training-specific. In addition, we suggested that exercise addiction, which has been found in extreme sports, could be related to these responses. Thus, the purpose of this study was threefold: (1) to compare the CRC in the hypercapnic test before and after the high-altitude expedition; (2) to compare the CRC before (at baseline) and right after the acute hypercapnia (during the recovery stage); and (3) to evaluate the correlation between the level of exercise addiction and CRC. The study included nine experienced healthy non-smoking male mountaineers aged 25\u201342 years. All subjects provided written informed consent prior to participation. The study protocol was approved by the Ethics Committee of the Scientific Research Institute of Neurosciences and Medicine (Novosibirsk) and performed in accordance with the Declaration of Helsinki. The mountaineers were examined twice, in June and September, at an altitude of 164 m above sea level (Novosibirsk). The first round of investigations took place prior to ascending to HA (Pre-HA). Then the subjects sojourned for 20 days in the mountains, living under camp conditions at the altitude of 4,100 m with short-term ascents to the altitude of 6,500\u20137,000 m. The second investigation, similar to the first one, was performed on average 2 weeks (range 8\u201328 days) after descending from the HA (Post-HA).
Heart rate (HR) and blood oxygen saturation (SpO2) data were recorded by Pulse Oximeter BCI 3304 Autocorr and then automatically transferred to the Oxycon Pro. Office blood pressure was obtained by use of a sphygmomanometer . We measured skeletal muscle mass by a multi-frequency tetrapolar bioelectrical impedance analysis device .A spiroergometric system Oxycon Pro was used for recording the following respiratory parameters: minute ventilation (VE), inspired and expired fraction of OWe used the Exercise Addiction Inventory (EAI) to ident2 sensitivity during rebreathing as the slope in the regression line of VE vs. PETCO2 above the ventilatory threshold PETCO2.Data analysis was carried out using the STATISTICA10 software package (StatSoft). To evaluate the effect of hypercapnia on the separate cardiorespiratory variables we averaged the data that we received during the last 2 min of the baseline and recovery periods, taking into account the low-frequency fluctuations of the cardiorespiratory variables with a period of about 2 min . To desc2, FeCO2, and HR. Other recorded variables were excluded from the analysis due to their mathematical relationship with the above variables.To study cardiorespiratory coordination, we used principal component (PC) analysis, which reflects the degree of coincidence of temporal patterns of physiological responses, that is, how much their increase and decrease are statistically synchronized. The total variance allows us to represent the time patterns of selected cardiorespiratory variables with fewer coordinating variables or PCs. The PC is extracted in descending order of importance. The number of PC reflects the dimension of the system, so a decrease in the number of PCs indicates greater coordination and vice versa. The PC number changes when the system undergoes reconfiguration. PC analysis was performed for each mountaineer on the time series of the following selected cardiorespiratory Pre- and Post-HA variables: VE, FeOP-value < 0.05.The number of PCs was determined by the Kaiser criterion, which considers a significant PC with eigenvalues \u2265 1.00. In the tables, we give the eigenvalues as a percentage of the total variance. The greater this percentage is, the greater the coordination of the variables projected onto PC appears. To analyze the effect of HA, the PC eigenvalues pre- and post- HA were compared over the entire 17-minute measurement. To find out the effect of hypercapnia on CRC, the PC eigenvalues for the time series of two states (Baseline and Recovery) were compared. A Wilcoxon Matched Pairs Test was performed to assess statistically significant differences in the cardiorespiratory variables, eigenvalues and PC loads Pre- vs. Post-HA and between the Baseline and Recovery states. The frequencies of occurrence were compared by the chi-square criterion. To test the suitability of the selected cardiorespiratory data for structure detection, Bartlett\u2019s test of sphericity and the Kaiser-Meyer-Olkin Measure of sampling adequacy was used. The relationship between the level of exercise addiction and the eigenvalues of PC1 was evaluated by the Pearson correlation coefficient. To evaluate whether the Post-HA results were influenced by the delay following the high-altitude exposure, we calculated the Pearson correlation coefficient between the days of delay and the variables characterizing the CRC. Statistical significance was considered at a 2, and muscle weight 34.6 (31.9/38.8) kg. 
The Exercise Addiction score was 18.0 (17/19) with a range 15\u201321.The anthropometric descriptive characteristics of the subjects expressed as medians (Q1/Q3) are as follows: height 176 (171/180) cm, body weight 70 (65.7/83.1) kg, body mass index 23.8 (21.7/25.7) kg/m2 was lower than that at baseline, but the difference was less than 4%. Post-HA parameters at recovery and baseline were not different.Post-HA baseline blood pressure and recovery heart rate decreased significantly as compared to pre-HA . There wp < 0.01) and the Kaiser-Meyer-Olkin Measure of sampling adequacy showed the suitability of the selected cardiorespiratory data for structure detection. The PC analysis of the entire Pre-HA measurement revealed five mountaineers with one PC and four mountaineers with two PCs. Post-HA, the number of mountaineers with one PC significantly increased to nine . There were no participants with two PCs Post-HA. Since we used the Kaiser criterion to determine the number of PCs in the model, the following PC had eigenvalues < 1, i.e., they explained less variance than the original variables. Since one PC includes coordinated variables, one can conclude that the coordination of cardio-respiratory variables in the hypercapnic test increases Post-HA.Bartlett\u2019s test of sphericity formed PC1.The loadings of VE, FeO Post-HA . This inThe entire measurement segmentation by the states Baseline, Rebreathing, and Recovery revealed the maximal percentage of total variance explained by PC1 during Rebreathing . During r) between the exercise addiction score and the percentage of total variance explained by PC1 was 0.67 (p = 0.049), Post-HA r = 0.90 (p = 0.001) , weight , BMI , and muscle mass .Pre-HA, the Pearson correlation coefficients (= 0.001) . The slor = \u22120.86; p = 0.003; regression: \u0394% Total variance = 16.9\u20131.3 \u2217 Days Post-HA). The regression line crosses the zero level on the 13th day. We did not find any significant correlation between \u0394% Total variance and the delay following the HA exposure during Baseline , Recovery and Entire measurement . There were no significant correlations between the loadings of VE, HR, FeO2, FeCO2 onto PC1 and the delay after HA expedition .There was a negative correlation between Post- and Pre-HA difference in the percentage of total variance explained by PC1 on the one hand and the delay following the high-altitude exposure during Rebreathing on the other hand , the increase in the percentage of variance explained by PC1 , and an increase in the heart rate loading onto PC1. This is consistent with the results obtained by the same method, which has shown the higher sensitivity and responsiveness of cardiorespiratory coordination to exercise effects compared to isolated cardiorespiratory outcomes , 2019a. The negative correlation between the Post- and Pre-HA difference in the percentage of total variance explained by PC1 and days Post-HA during Rebreathing indicates that the Post-HA rebreathing CRC was influenced by the delay following the high-altitude exposure. Meanwhile, we did not find any relationship between Post- and Pre-HA difference in CRC and days Post-HA during Baseline, Recovery, and Entire measurement. Thus, on the one hand, the large variability in the Post-HA exposure re-evaluation delay is a limitation of the study, but on the other hand, this allowed us to estimate the duration of the effects of HA acclimatization on the CRC after returning to the sea level.2 increases above the ventilatory threshold . 
However, hypercapnia is known to increase the amplitude of RSA . The patients/participants provided their written informed consent to participate in this study.SK: conceptualized the research question, study design, and supervised the entire project. DU: performed the data analysis. VG and MZ: drafted the manuscript. VM and NB: collected data. All authors interpreted the results and critically reviewed and significantly contributed to the manuscript and approved the final version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Little is known about the relationship of active travel to school (ATS) with physical activity (PA) and screen time (ST) by individual and parental characteristics among adolescents, especially in China. To address the research gap, this study aimed to explore the difference of sex, age, living environment, parental occupation and education level in the relationship of ATS with PA and ST among students of grades 7\u201312 (aged 10\u201318 years) using cross-sectional data. In 13 cities of Hubei province, China, students from 39 public schools were recruited to engage in the survey. In total, 5,898 students (response rate = 89.6%) were invited into this study. Participants were required to report their ATS (including its types), PA and ST as well as sociodemographic information using a validated questionnaire. Descriptive analyses were used to report the information of all variables. Regression models were used to analyse the relationships of ATS and its types with PA and ST. In a total of 4,128 participants included in the final analysis, the proportion of those with ATS was 47.3%. Regarding the types of ATS, walking accounted for over 30%, while cycling was 13.2%. Participants with ATS were more likely to have sufficient PA , especially among boys, younger adolescents and those with lower parental education level. However, ATS was not associated with ST . Participants with cycling had a higher odds ratio of being physically active . The association of ATS types with PA and ST differed by gender, age, living environment and parental educational level as well as occupations. ATS may be a useful approach to increase PA among adolescents, but this should be explained by individual and parental characteristics. A markedly ubiquitous international trend is that only a small proportion of adolescents fulfil the pervasively recognised physical activity (PA) and screen time (ST) guidelines \u20134. RecenGiven the insufficient PA and higher ST among adolescents globally, researchers have sought effective interventions to change these harmful health behaviours , 13. To With the increasing importance of ATS on overall PA, Chinese researchers have been devoted to ATS-related studies among Chinese young people. For example, Sun et al. exploredThe ATS, PA, and ST are associated with substantial differences in individual and parental characteristics among adolescents. For example, Yang et al. reportedCertain parental characteristics, for example, education and occupation, are associated with school travel choices. 
For instance, higher parental educational attainment represents advantages in terms of income, with those parents potentially more likely to deliver their children to school using private vehicles , 18, 21.Therefore, as a means of contributing to resolving the evidence shortcomings in the extant literature and to develop an evidence foundation for PA- and ST-focused interventions for Chinese adolescents, this research aimed to analyse the relationship of ATS with PA and ST by the individual and parental characteristics among adolescents in China.Declaration of Helsinki.This cross-sectional study was a questionnaire survey, undertaken between May and June 2019.The sample of participants was selected in Hubei province of China. Contact was made with the commission of education in 13 cities of Hubei province. Applying a convenience sampling method, we invited three public schools per city to participate in this survey. Through the administrative support provided by the commissions of education, 3rd to 12th-grade students in 39 public primary, middle, and high schools were selected across all the cities. In total, 6,583 students were invited to participate in the survey. Of these, 5,898 responses with a complete self-reported questionnaire, providing a response rate of 89.6%. The research protocol and procedure were approved by the Institutional Review Board (IRB) of the Wuhan University of Technology in March 2019. The student participants and their legal guardians provided written consent. The anonymity and confidentiality of participants were ensured following the The participants were asked, during their break time on school days, to provide self-reported data regarding sex , grade (from 4 to 12) and current living environment . A self-reported questionnaire was implemented to collect information from parents, including parental educational attainment , and occupation . The parents completed the paper-based questionnaire at home using a pencil.active, whereas others were considered passive. According to the type of ATS, participants selecting 1 and 3 were regarded as walking; 2 and 4 were regarded as cycling; 5 were regarded as using school bus; 6 were regarded as delivered by parents, and 7 were regarded as using public transportation.A single-item question was included for measuring the ATS, which asked the participants: \u201cIn the last 7 days, how did you usually get to your school?\u201d . This mePA was measured using the items derived from the Health Behaviour School-aged Children (HBSC) questionnaire, which has acceptable validity and reliability in the Chinese context . Two iteThe following items of the HBSC questionnaire were used to obtain information relating to ST : (1) \u201cHop < 0.05 was established as the level of statistical significance. The statistical analyses procedures were performed using SPSS 24.0 .Before initiating the formal analysis process, all responses with missing data were omitted from the sample. We opted to concentrate our analysis on 10- to 18- year-old adolescents (grades 7\u201312) only, with data about further school grades being excluded. Ultimately, the final sample size, included in the analysis, was 4,128. Descriptive statistical analysis was applied to report the percentage of sociodemographic variables , exposures and outcomes . A Chi-square test was performed to investigate the difference in PA and ST by ATS (its types), alongside the individual as well as parental characteristics. 
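As a concrete illustration of the recoding described above, the following Python sketch (ours, not the authors' analysis code) maps the single-item travel-mode codes onto the ATS type and the active/passive dichotomy:

```python
# Hedged sketch: recode the single-item ATS question (codes 1-7)
# into the analysis variables described in the text.
ATS_TYPE = {
    1: "walking", 3: "walking",          # codes 1 and 3 -> walking
    2: "cycling", 4: "cycling",          # codes 2 and 4 -> cycling
    5: "school bus",
    6: "delivered by parents",
    7: "public transportation",
}

def recode(answer: int) -> tuple[str, str]:
    """Map a questionnaire code to (ATS type, active/passive status)."""
    mode = ATS_TYPE[answer]
    status = "active" if mode in ("walking", "cycling") else "passive"
    return mode, status

print(recode(2))   # ('cycling', 'active')
print(recode(6))   # ('delivered by parents', 'passive')
```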
Logistic regression was undertaken to analyse the relationship of ATS and its types (exposure) with PA and ST (outcome). All variables were incorporated into the regression analysis as categorical variables. This research presents the logistic regression results as odds ratios (ORs) with 95% confidence intervals (CIs). When examining the relationship of ATS and its types with PA and ST, all sociodemographic variables were adjusted in the models, and n = 4,128), boys accounted for 50.9%. Younger adolescents accounted for over 60% of the participants. Over 70% of the participants lived in urban areas. Approximately 70% of participants had parents who were office worker or had an education degree of less than college/university. The proportion of participants using the passive mode of ATS was 52.7%. Regarding the types of ATS, participants who walked to school accounted for 34.1% (the largest proportion), while participants using the school bus made up the smallest proportion (0.8%). The prevalence of sufficient PA and limited ST was 17.3 and 67.9%, respectively.p < 0.001). Participants taking cycling as ATS mode had the highest percentage of sufficient PA compared to others . There was no significant difference in the levels of limited ST among participants with different ATS groups. Participants taking the school bus as their mode of ATS had the lowest percentage of limited ST compared to those using other types of ATS .In p < 0.005). However, there was no significant difference in PA between the passive and active ATS groups among older adolescents (p = 0.116). Further, a significant difference was observed across groups of different types of ATS irrespective of sex, age, and living environment for limited ST. However, there was no significant difference of ST across groups of ATS except for younger boys (p < 0.001).The results of the difference of PA and ST by ATS and its types as well as individual characteristics are shown in p < 0.001; college/university or higher: 18.7% > 15.2%, p = 0.006), and particularly, participants who selected cycling had the highest percentages compared with the other ATS types. In terms of limited ST, participants who were delivered by parents had the highest percentages . Similar to the parental education level groups, more participants using an active form of travel to school had sufficient PA regardless of their parent's occupations compared with those with passive ATS , and participants using cycling had the highest percentages compared with other ATS types . Participants who were delivered by parents had the highest percentages of limited ST compared to those with other ATS types .In In To the best of our knowledge, this study is one of the first cross-sectional investigations into the relationship of ATS and its types with PA and ST among adolescents of the Chinese samples. This research analyses the correlation of ATS and its types with PA and ST about different individual and parental characteristics, which potentially offers significant practical implications and advances the knowledge in this field.In this current study, the proportion of the sample engaging in ATS was 47.3%, with participants who walked (34.1%) constituting a greater percentage than those who cycled (13.2%). Regarding the question relating to levels of adequate PA and limited ST among the adolescents, only 17.3% attained the former, whereas the latter was under acceptable levels. 
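Before turning to the discussion, the adjusted logistic models described in the methods can be sketched in code. The sketch below is only illustrative: the data frame, variable names, and covariate coding are ours, and statsmodels is one of several libraries that could produce the ORs with 95% CIs reported here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data frame (ours): one row per student.
rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "sufficient_pa": rng.integers(0, 2, n),   # outcome: meets PA guideline
    "ats_active": rng.integers(0, 2, n),      # exposure: active travel
    "sex": rng.choice(["boy", "girl"], n),
    "age_group": rng.choice(["younger", "older"], n),
    "urban": rng.integers(0, 2, n),
})

# Adjusted model: ATS as exposure, sociodemographics as covariates.
model = smf.logit("sufficient_pa ~ ats_active + C(sex) + C(age_group) + urban",
                  data=df).fit(disp=0)

# Exponentiate coefficients to obtain odds ratios with 95% CIs.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```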
Furthermore, we found that participants engaged in ATS had a greater chance of also participating in adequate PA levels, while ATS was not associated with limited ST. Regarding types of ATS, participants engaged in walking or cycling both had a higher chance of undertaking sufficient PA, rather than being characterised by limited ST. The relationships of ATS and its types with PA and ST differed according to individual and parental characteristics.

Low PA levels among adolescents have been demonstrated across the literature [4, 32]. The current study indicates that just below half of the participants (47.3%) engaged in ATS from home to school, which corroborates other studies on Chinese samples [24]. Our research is one of the few investigations assessing types of ATS among Chinese adolescents. Among the participants engaged in ATS, the majority walked between school and home. This finding is consistent with previous studies.

Reflecting the findings of numerous previous studies [27, 28], the mechanism connecting ATS and PA varies depending upon social and environmental factors. Therefore, it is advocated that further studies investigate the correlation of ATS with PA to a greater extent, implementing enhanced study designs and multiple data sources. Nevertheless, this study finds that ATS was not associated with ST among adolescents; a systematic review indicated that the relationship of ATS with ST is inconsistent across the literature.

We further discovered that the relationship of ATS with PA varies according to sex, age, and parental education. Specifically, the relationship between ATS and PA was found to be significant among boys, suggesting that only boys engaging in ATS were potentially involved in sufficient PA.

The present research found that younger adolescents, as opposed to their older peers, were engaged more in ATS and had a greater chance of also participating in sufficient PA. This result is explainable in the Chinese context. In our study, older adolescents were those in grades 10–12, a vital period that covers the college entrance examination. During this period, adolescents tend to be delivered to school by their parents for time-saving reasons, allowing a greater amount of time to be spent on studying. Conversely, such a situation would not arise among younger adolescents, owing to their lower academic pressure.

Among participants whose parents had lower educational attainment, those engaged in ATS had a greater prospect of attaining sufficient PA levels. Nevertheless, this significant relationship was not detected among participants whose parents had higher educational attainment. A potential reason is that parents with lower educational attainment have lower incomes than their counterparts. Consequently, those parents are less likely to own automobiles, which potentially leads their children to travel between home and school by walking or cycling.

This research has established that the relationships between types of ATS and PA vary according to individual and parental characteristics. We primarily identified that participants engaged in walking or cycling had a greater likelihood of participating in sufficient PA compared with their counterparts. Unsurprisingly, when assessing the odds ratios of walking and cycling with respect to sufficient PA, the former is lower than the latter. This finding is consistent with the results of the study of Roth et al.
A similar relationship was identified among boys, younger adolescents, and participants whose parents had lower education levels or were office workers. Notably, only girls engaged in cycling showed sufficient PA in our study, an inconsistency with the results for boys. Future studies should attempt to explain this difference in the relationship of types of ATS with PA based on the characteristics of different subpopulations and other factors.

A noticeable distinction is apparent in the relationship of types of ATS with PA according to the living environment. Specifically, in urban areas, adolescents engaged in cycling were more likely to be involved in sufficient PA, whereas adolescents in rural areas who were engaged in walking had a greater prospect of undertaking sufficient PA. Unfortunately, no data provided by our study were able to clarify this variation; differences in the built environment between urban and rural settings may provide a plausible explanation. A similar finding was established for participants with different parental education levels and occupations. Given the limited evidence across the extant literature, more studies should address these research questions in the future.

When looking at differences in individual and parental characteristics in the relationship of types of ATS with limited ST, some novel findings should be mentioned. In the current study, participants who walked and those who were delivered by parents had higher odds of having limited ST compared with their counterparts. To our knowledge, no data provided by previous studies are comparable with our findings. Possible explanations are that (1) participants who walked to school could be regarded as having lower socio-economic status and may not be able to afford as many screen-based devices, ultimately reducing their ST; and (2) participants delivered by parents would be exposed to much stricter supervision that limits the time spent on screen-based activities. Owing to the scarcity of research, more studies should test these assumptions. In addition to the overall findings, individual and parental differences in the relationship of types of ATS with ST were found. Such variations should be explained by additional factors; since the present study could not provide this indicative information, future studies should explain the variations found here, which would yield more specific practical implications.

Irrespective of the preliminary nature of the evidence regarding the variations in the relationship of ATS and its types with individual and parental characteristics, the present study affirmed the role of ATS in promoting PA among adolescents. This research has expanded on prior work, demonstrating that individual and parental characteristics show different relationships between ATS and PA. Nevertheless, given that ATS and PA are two complex behaviours affected by numerous variables, the relationships of ATS and its types with PA according to various characteristics require further replication and clarification to provide more robust evidence. Practically, the current study should prove beneficial in encouraging adolescent PA through ATS, for which the design of effective policies and actions is necessary.

When designing ATS interventions for enhancing PA among adolescents, individual and parental characteristics must be considered.
It is recommended that future research should concentrate on the mechanisms linking ATS with PA within various contexts (for example sex and living environment). Based on the cross-sectional nature of our research, prospective longitudinal studies are necessary to confirm the relationships we observed, as well as to elucidate whether a potential causal relationship is apparent between ATS and PA. Moreover, it is necessary to undertake further experimental research to analyse the extent to which ATS interventions offer further effectiveness for promoting PA among adolescents.This study offered certain advantages. First, the study adopted a relatively large sample size as a means of investigating the relationship of ATS with PA and ST, thus enhancing the generalizability of research findings. Second, the present study is one of the very limited number of investigations analysing the associations of ATS with PA and ST as they relate to different individual and parental characteristics. The research findings are potentially beneficial for designing specific PA and ST interventions. However, there are certain limitations to this study that should be clarified. Due to the cross-sectional nature of the study, the findings of the study should be interpreted with caution. Moreover, our study used a self-reported questionnaire to assess the ATS, PA, and ST, which was subjective to recall bias of measurement. Third, this study did not include/explore more potential confounders, such as time spent during ATS and car ownership, that may affect the relationship of ATS with PA and ST. Fourth, to better explain the relationship of ATS with PA or ST, additional psychological , social and physical environmental factors should be considered for more reliable interpretations. Finally, the ATS and PA, as well as ST, are also affected by other factors, such as income level. We recommend further study of the relationship of ATS with PA and ST by income and other sociodemographic factors, particularly of longitudinal nature. Future studies should address these limitations to provide an improved evidence base.Overall, our study indicated that approximately half of the adolescents engaged in ATS, with a majority of participants preferring to travel through cycling between home and school. ATS among adolescents was linked with sufficient PA as opposed to ST. Therefore, the relationships of ATS and its types with PA require further clarification by different contexts, including sex, age differences, and parental characteristics. Nevertheless, this study retains its specific significance to the implementation of PA promotional activities among adolescents.The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.The studies involving human participants were reviewed and approved by Wuhan University of Technology. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.CH, YL, and S-TC conceptualised and designed this study. CH, JY, and YL analysed interpreted data and drafted the manuscript. AM and S-TC provided important intellectual roles in revision. 
All authors read and approved the final manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "With the continuous development of China's cultural industry, people's health has become one of the topics of the highest concern. Therefore, all the application models of physical health test data in the actual analysis have become the current research focus and trend direction of healthy constitution. This paper summarizes the significant problems in the analysis of physical health test data, through the comprehensive analysis and investigation of physical health test data, combined with the measurement of the test indicators, through the analysis and processing system of youth physical health data, the use process of national youth group physical health standard data management software, and decision tree intelligent algorithm in physical health. The research steps of test data analysis and application model summarize the application characteristics of physical health test data in the application process. Based on this, a decision tree intelligent algorithm is proposed, and the corresponding functions and optimization formulas of the algorithm are substituted. In the process of actual sample checking calculation, each weight range and corresponding errors are inferred and analyzed by combining examples. This paper summarizes the application model and optimization model of health test data analysis based on decision tree intelligent algorithm. Through the repeated test of the research data, the feasible area and application scope of the algorithm are obtained, and the practical optimization scheme and application ideas under the algorithm are obtained. With the continuous expansion of our country's control over physical health, our achievements in physical health testing have gradually become the focus of attention of all sports groups and show an excellent development trend. However, in the application process of the data analysis of the physical health test, there are often some abnormal problems such as the athlete's physical health test mode and incomplete data analysis, which are obviously reflected in the data analysis process of the physical health test. Therefore, in the questionnaire survey, we can use association rules to explore the rules in the actual physical health test process and take this as the analysis data to calculate the actual evaluation index and its health rules under the quantitative table and draw the conclusion that each worker in the questionnaire has two or more positive psychological problems, mainly manifested as depression and obsessive-compulsive disorder. 
According to the particularity of the symptom, an effective scheme under the association rule algorithm is inferred .This paper focuses on the current detection modes of physical health test, discusses how to reflect the difference of physical health test data and its physical condition comprehensively, analyzes the decision tree intelligent algorithm and the application model of physical health test data analysis under the algorithm in combination with various vector and variable indexes, and finally summarizes the optimal model of physical health test data. On this basis, the algorithm function of each research index is substituted to carry out the survey of each test area, and the element indexes under the behavior detection algorithm are integrated and classified. After repeated verification, combined with relevant parameters, the key measurement steps in the research are applied to get the actual optimized operation mode under the benefit.The innovative contribution of this paper is that, based on the development status of health testing, an innovative decision tree intelligent algorithm and its optimized application function are proposed, which is substituted into the analysis process of physical health testing. Through data analysis, the application distribution range and actual operation mode of each physical health test data are obtained, and the basic upgrade indicators under this test are summarized. Based on the algorithm, various optimization modes are investigated on the spot, and the benefit analysis is carried out according to the actual situation. By introducing various parameter data and experimental results, the application model of decision tree intelligent algorithm in physical health test data analysis is summarized.The structure of the article is divided into three parts. Among them, the second part focuses on the comprehensive discussion of the decision tree intelligent algorithm mode based on the analysis of the physical health test data, including the decision tree intelligent algorithm and the physical health test data analysis system; the third part carries out the field investigation and questionnaire survey on the established decision tree intelligent algorithm model and calculates the specific analysis of the physical health test data under the algorithm. The application form, through continuous checking and comprehensive analysis, summarizes the corresponding experimental data and research results, so as to prove the optimal research model under the algorithm and finally collects all the effective experimental data integration to build the optimal application model of decision tree intelligent algorithm in the analysis of physical health test data.In recent years, many scholars and researchers have carried out detailed research on decision tree intelligent algorithm and physical health test data analysis system and obtained the corresponding research results after substituting each index and step into the research data. Li et al. analyzed the actual classification and attribute characteristics under the form according to the employment mode of each college student, determined the data processing rules and employment form characteristics under the attribute, combined with the actual mining technology of preprocessing and information data, and summed up the intelligent algorithm model and optimization form of the decision tree under the simple structure . By analBaldwin et al. 
proposed applying an ID3 learning algorithm for probability fuzzy decision trees, based on mass assignment, in practical functions. In that algorithm, the quality- and complexity-evaluation of the theoretical association is combined with examples, and the discrete characteristics of the algorithm are verified. Liu et al. […]

The physical health test data are the basic information stored in the national standard data management database; in actual measurement, they decompose into body type, health status, and disease characteristics, yielding the indices under the characteristic test and their computed results. After analyzing the comprehensive indicators of physical health test data in China, it is clear that the analysis of adolescent physical health test data, as the most important part of research on application models for physical health test data analysis, holds a definite position in practice. Therefore, when analyzing the application scope of a group's physical health test data, comprehensive statistics on the group's test indicators must first be compiled to obtain the basic indicators of physical health, as shown in […].

Within these basic indicators, the decision tree model is applied to extract the characteristic attributes of the indices and the latent regularities of the data. Taking the generated samples of the decision tree as decision boundary points, the optimal threshold for each accuracy is calculated, yielding a high-precision decision tree analysis mode. The basic information model of the decision tree is obtained from the data-sorting function at each node. The training sample set S is regarded as a whole, with m the number of classes and C_i the categories, i = 1, 2, …, m. Let R_i denote the subset of data set S belonging to class C_i, let r_i be its size, and let p_i be the probability that a randomly drawn sample belongs to class C_i, which conforms to the rule p_i = r_i ÷ |S|. The expected information function for the set S is

I(r_1, …, r_m) = −Σ_{i=1}^{m} p_i log2(p_i).

If attribute A takes v distinct values a_j, with j = 1, 2, …, v, it divides the data into different subsets S_j, i.e. different branches, each containing s_ij samples of class C_i within the m category groups. With w_j denoting the specific gravity (weight) of subset S_j, w_j = |S_j| ÷ |S|, the expected information of the split on A is

E(A) = Σ_{j=1}^{v} w_j · I(s_1j, …, s_mj),

where I(s_1j, …, s_mj) represents the expected information of the C_i categories within the branch of attribute A and s_ij / |S_j| represents the proportion of class C_i in subset S_j. Therefore, through the above function formulas, taking attribute A as a whole, the information gain measure function for the decision classification attribute is

Gain(A) = I(r_1, …, r_m) − E(A).

After the whole data set is divided into different subsets, however, the deviation on variables — the bias of the plain gain toward many-valued attributes — is particularly obvious.
Therefore, after substituting the deviation index, the optimized function — the split information of attribute A — is calculated as follows:

SplitInfo(A) = −Σ_{j=1}^{v} (|S_j| / |S|) log2(|S_j| / |S|).

From the above formula, we can deduce the optimized information gain rate function, that is, the maximum information gain rate function, which completes the criterion for partitioning the sample attributes and compensates for the deficiency noted above:

GainRatio(A) = Gain(A) / SplitInfo(A).

Let C(i) denote the sample misclassification cost of class u_i, taken to be zero when i = j and positive when i ≠ j. On the basis of the samples, combined with each prediction index, the function value ranges of the different categories are calculated. Let w(i) be the weight value of the u_i sample category:

w(i) = C(i) · n / Σ_{k=1}^{m} C(k) n_k,

where m is the number of classes, n is the total number of samples, n_k is the number of samples in class u_k, and the normalization by Σ_{k=1}^{m} C(k) n_k ensures that the sum of the weighted category counts under the decision tree model is equal to n. The weighted probability p_i is then obtained as

p_i = w(i) · n_i / n.

Therefore, by setting each index coefficient, the expected-information and category-weight functions of the above decision tree model yield the corresponding optimal index function once the category differences and error indices are substituted; the optimization appears in each cost-weight value range, effectively reducing the probability of errors and improving the timeliness of the decision tree across the system's application range.

Based on the current development status of physical health data analysis and processing systems for the various groups in China, this paper makes a comprehensive analysis and field investigation of adolescent groups. Through a physical health evaluation model driven mainly by the body mass index score, the vital capacity score, and gender difference characteristics, the actual results are distinguished from the specific research data so as to build a practical project model with the least error and the highest benefit. In this study, the students from freshmen to seniors of a university in China are taken as the research objects.
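Before turning to the empirical results, the entropy, gain, and gain-ratio computations above can be made concrete in a short Python sketch. The function names and the toy data are ours; this is a minimal illustration of the formulas, not the system's implementation.

```python
import numpy as np

def entropy(labels):
    """Expected information I(...) = -sum p_i * log2(p_i)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def gain_ratio(attr, labels):
    """Information gain and gain ratio of a discrete attribute."""
    attr, labels = np.asarray(attr), np.asarray(labels)
    n = len(labels)
    e_a, split_info = 0.0, 0.0
    for a in np.unique(attr):
        mask = attr == a
        w = mask.sum() / n                 # w_j = |S_j| / |S|
        e_a += w * entropy(labels[mask])   # E(A)
        split_info -= w * np.log2(w)       # SplitInfo(A)
    gain = entropy(labels) - e_a           # Gain(A)
    return gain, (gain / split_info if split_info > 0 else 0.0)

# Toy example: does a BMI category predict a pass/fail test outcome?
bmi = ["low", "normal", "normal", "high", "high", "high"]
outcome = ["fail", "pass", "pass", "pass", "fail", "fail"]
print(gain_ratio(bmi, outcome))            # (gain, gain ratio)
```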
Firstly, the statistical analysis data of the calculation samples of each grade of the university are calculated, as shown in Therefore, by analyzing the obesity value, gender difference, and other physical health test data that affect the physical health test data, it is found that once the scores of each index are used to repeatedly verify the youth physical health test data under all the decision tree intelligent algorithms, the decision tree intelligent optimal algorithm and its application mainly focus on the research of the application model of physical health test data analysis After that, the error rate comparison will change, as shown below.From To sum up, the results of the actual calculation of the application mode of the analysis of physical health data by all the index elements substituted into the decision tree intelligent algorithm prove that the study of each index element of the analysis and processing system of physical health data by the decision tree intelligent algorithm has a certain significance for reference and presents a better benefit mode in the actual operation process and excellent feasibility.With the continuous improvement of our basic living standards, the control and maintenance of the physical health of various groups in China has become one of the hot spots of public concern and gradually has become the main driving force to promote the development of medical industry in the actual operation process. Therefore, this paper takes the analysis and processing of physical health data as the research object, comprehensively discusses the analysis and processing mode of physical health data, analyzes the application of data analysis of physical health test and the intelligent algorithm model of decision tree, substitutes all index elements in the intelligent algorithm of decision tree, and summarizes that the intelligent algorithm of decision tree is based on the analysis of physical health test data, and through the example analysis to verify the feasibility of the algorithm, taking the student group in the university campus as the research object, it is concluded that there are numerical differences in the body mass index score, gender differences, and other elements of the physical health assessment mode, typical performance is that the control of modeling data and test data in each error rate is basically less than 15%, and in part of the research, the index shows negative value, which shows the effective control and adjustment ability of the algorithm for the error, and summarizes the analysis distribution diagram and the optimal algorithm mode of the decision tree intelligent algorithm model into the research school area. However, this paper only analyzes the impact of decision tree intelligent algorithm on the application of physical health test data from the university student group, lacking the research indicators and effects of other groups, so it will be improved in a more comprehensive research later."} +{"text": "University students' physical health test is an important part of university physical education. The data obtained by the physical health test play an extremely important role in the field of students' physical health research. This paper clarifies the current situation of data collection of the physical health test for college students by sorting out the development status and trend of the physical health test system in China. 
To further ensure the accuracy and validity of the physical health test data, this article puts forward the corresponding optimization measures for the data collection link of the existing physical health testing system given the problems existing in the implementation of university students' physical health tests. Optimization measures are as follows: (1) add a test data collection type for test video collection; (2) optimize the authentication process to increase face recognition; and (3) enhance posttest test data management by stamping time. The physical health of teenagers has always been the focus of countries. Countries have issued relevant policies and systems to escort the development of teenagers' physical and mental health. The physical health test is an important means to examine the physical health of students, and the test results are also an important indicator to judge the physical health of teenagers . The datThere are some problems in Chinese students' physical health test data, such as waste of data resources and single application path . Most ofChina's student physical health testing system was developed late. FITNESSGRAM, a computer information management system based on fitness test reports, was developed in 1977 in the United States. Its development concept is to realize the whole process of integrating physical health tests into physical education. Therefore, the system attaches great importance to the students' physical health service and feedback on the result after the test . China lHowever, the China's physical health testing system has developed rapidly in recent years. Many scholars pay attention to the cross integration of disciplines and apply advanced technologies in various fields to the physical health testing management system to promote the intelligent development of the system, such as the physical health testing management system designed based on a variety of wireless transmission modules, computer vision technology, and mobile Internet technology \u201310. A vaAt present, there are mainly two types of physical health testing systems in the Chinese market. And there are great differences in data acquisition modules between the two types of systems.A full set of intelligent testing equipment is used for intelligent data collection without manual input. For example, the smart sports management platform was desiAt present, the research focus of the physical health testing system in China mainly includes the collection and management of sports health testing data, and the follow-up health service and feedback are also being further studied. The tentative application of these advanced and innovative research methods has greatly improved the efficiency and safety of students' physical health testing in China.There are many deficiencies in the process of physical health tests for university students, which affect the accuracy, authenticity, and diversity of physical health test data. The application of the university students' physical health testing system can make the data collection of physical health testing more accurate and convenient. However, the system still fails to effectively prevent and solve the problems in the physical health test of university students.Data are not just numbers; they also contain text, images, sounds, and so on. However, the data collection of most of the students' physical health testing systems in the market is limited to the student's test scores, that is, digital collection , 16. 
OnlNow the physical health test is linked with academic credits, evaluation, and graduation qualification , and stuThe existing data acquisition module of the university students' physical health testing system are generally equipped with student card verification or test card verification device. The test worker can read the identity information before the test. However, the lax management of the testing process and the negligence of identity verification have ledIn 2004, the Ministry of Education of China began to build a large-scale national information system of students' physical health test data to cooperate with the national students' physical health standards. And schools at all levels across the country are required to report their students' physical health test results to the Ministry of Education. So some local education authorities have created incentives to promote physical fitness testing. For example, if the average score of the students' physical health test decreases for two consecutive years or the myopia rate increases for two consecutive years, the evaluation of their performance of educational responsibilities shall be reduced by one grade . To avoiAlthough the existing physical health testing system of university students reduces manual operation as far as possible in the process of data collection, transmission, and uploading to the national database, it does not take effective measures to monitor the data. System operators can still modify the original data at will without any modification record. As a result, the authenticity of the data in the national student physical health standard database is doubtful, which cannot meet the purpose and target of the test.The existing moving video image acquisition methods are divided into plane acquisition, three-dimensional acquisition, high-speed acquisition, multimachine acquisition, and infrared acquisition, and so on. Among them, the plane acquisition is the main way of motion video image acquisition. The plane image acquisition is divided into plane fixed-point focus acquisition, plane fixed-point zoom acquisition, plane fixed-point focus scanning acquisition, plane fixed-point focus moving acquisition, and plane zoom moving acquisition. There arIn sit and reach, standing long jump, vital capacity, sit-ups, and chinning, the movement range of the subject is small. So the plane fixed-point focus acquisition method is used for test video acquisition. The moving video image is carried out through the fixed-position video recording device, and the real-time control of the device and the remote transmission of the moving video image are realized in combination with the wireless multimedia sensor network technology, controller, and terminal equipment. The plane fixed-point and fixed focus acquisition method requires that the camera shooting distance shall not be less than 25\u2009m and the field width shall not be less than 8\u2009m. ThereforThe subjects moved a wide range during the 50-meter race, the 800-meter race, and the 1,000-meter race. Therefore, in track and field competition, the planar fixed focus mobile acquisition method is generally adopted. However, this kind of collection method needs to use slide track technology and install slide track inside or outside the runway. Then, install a video acquisition device on the slide track for video acquisition. 
However, this method has a high cost, so it is suggested to adopt a multipoint plane fixed focus acquisition method to acquire moving video images under the comprehensive consideration of the testing cost and other factors. Among them, the fixed-point shooting positions of 800\u2009m and 1000\u2009m are the starting point, 200\u2009m curve, and the end point. The three points collect the subjects' starting video, curve running video, and sprint running video, respectively. The 50\u2009m running distance is short and the movement path is a straight line, so it is selected to shoot at the starting point and the end point. Students' starting action videos are collected at the starting point, and students' enroute running action videos are collected at the end point. This method can obtain the motion characteristics of the subjects in different stages, and the video can be used to verify whether the testers participate in the whole test.The acquisition of test videos can meet the needs of the following three aspects: first, the demand for supporting materials during random inspection and review of the physical health test. By watching and checking the test video, we can verify whether the students' test results are true and whether the school test process is qualified. The second is the follow-up management service demand of the physical health test. Rich and diverse physical health test data can support the development of sports guidance, health research, and judgment. Third, science and technology help develop the needs of teenagers' physical health. Video data can provide a data basis for the research and development of intelligent wearable devices, sports apps, sportswear, etc., as shown in Biometric identification has the highest security and reliability among the existing identification technologies. Common biological features are divided into physiological features\u2014face, DNA, fingerprint, iris, etc.; and behavioral features\u2014gait, voice, signature, keystroke habit, etc. Among them, face recognition technology is a noncontact recognition technology. Compared with fingerprint verification, iris authentication, and other recognition technologies, it has the advantages of being fast, simple, high reliability, difficult to counterfeit, low cost, and noncontact. In particular, the active resolution of face recognition technology ensures that others cannot be recognized by the system when using nondynamic picture puppets and wax figures. Therefore, adding facial recognition to the test equipment can not only ensure that the identity of the subject is correct but also avoid the subject using pictures and other deceptive.Face recognition technology collects face pictures or videos for identification and authentication. Identification is to check the collected image and the image in the face database to confirm the identity information. Identity authentication is to check the image with the photo in the ID card to determine whether it is the same person.This article adopts identity authentication, that is, to judge whether two face images belong to the same identity. During the test, the subject swipes the student card to obtain his basic information and then verifies whether the student is the subject through face recognition.Face recognition technology integrates artificial intelligence, machine learning, video image processing, and other professional technologies and is the latest achievement and application of biometric technology. 
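The card-swipe-then-verify flow described above can be summarised in code. The following Python sketch is ours and only illustrative: `detect_face`, `extract_features`, and the card reader/camera objects stand in for the SeetaFace-style modules (detection, feature-point location, feature extraction and comparison) and the reading device; none of these names come from the paper, and the threshold is an assumed operating point.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.62   # assumed operating point, tuned per deployment

def cosine_similarity(a, b):
    """Similarity between two face feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(card_reader, camera, face_engine):
    """1:1 identity authentication before a test item begins."""
    record = card_reader.read()                  # ID photo + basic info
    ref = face_engine.extract_features(record.id_photo)
    live = camera.capture()                      # live image at the station
    face = face_engine.detect_face(live)         # detection + point alignment
    if face is None:
        return False, "no face detected - retry"
    probe = face_engine.extract_features(face)
    score = cosine_similarity(ref, probe)
    if score >= SIMILARITY_THRESHOLD:
        return True, f"verified ({score:.2f}) - test may proceed"
    return False, f"mismatch ({score:.2f}) - refer to test worker"
```

The 1:1 comparison against the card photo is deliberately simpler than 1:N identification against a database, which is one reason this design keeps computation and cost low.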
Face recognition is to use a camera to collect the image or video of the face, and automatically detect and track the face in the image, and then compare the detected face image, detection, and a series of related operations. In the physical fitness test, multiple test items are performed outdoors. However, the outdoor light is changeable and the environment is complex. Compared with document photography, the images obtained by face recognition camera in the test are very different in light, human behavior, posture, expression, and other aspects.SeetaFace is characterized by complete code, convenient transplantation, and optimization, which can meet the needs of lightweight face recognition, and it uses the five-point location method in the feature point location module, which greatly reduces the amount of calculation. Therefore, the SeetaFace face recognition engine is selected as the basic algorithm for face recognition. The automatic face recognition system needs three basic modules: face detection module, face feature point positioning module, and face feature extraction and comparison module. However, the small number of registration points will lead to the problem of inaccurate positioning in the case of low resolution or large facial offset angle. But, in physical health tests, subjects are usually fixed in front of data acquisition equipment for identity authentication, so the large facial offset angle can be controlled. At the same time, the resolution of the assembled camera is high enough to solve the resolution problem. Most importantly, the reduction in computation demands less on the processor and therefore costs less. Therefore, this method is suitable for face recognition of the data acquisition module of the physical health testing system.Before the test of a project, the subject first reads the student card information at the information reading place of the test equipment of the project (including the student's ID photo). The face recognition camera will get the image, and the student card information of the personal ID photo will be compared to confirm whether it is the same person. Finally, according to the different certification results to decide whether continue the test, the specific process is shown in At present, the data collection of the physical health testing system has been automated. And it is impossible to modify a large number of data in this link. However, it is still possible to modify the data manually in the data upload and data reporting stage after data collection. Therefore, the tamper-proof of physical health data needs to start from the source and take corresponding measures in the stage of data collection and uploading to ensure the primitiveness of data. There are many tamper-proof technologies for electronic data: document solidification technology, digital signature technology, trusted timestamp technology, blockchain technology, and so on.The physical health test of students covers a wide range, and the amount of data obtained is huge. In addition, the authenticity and validity of test data plays an extremely important role in promoting students' physical health in China, so it is necessary to choose an economical and effective way to ensure the authenticity and originality of test data. 
The technology has strong operability and can effectively play a role in data security protection.Timestamp technology is a technology that uses the hash algorithm and asymmetric encryption algorithm to verify the originality and authenticity of data with the help of the time proof of the third-party timestamp mechanism. It has strong operability and can effectively protect the data. Timestamp technology has been applied in many fields, such as archive data management, traffic law enforcement, accident handling, food traceability, criminal investigation of public security organs, and so on. The generation of the timestamp is realized by three parties, namely, the users, national timing center, and time stamp organizations, so its reliability is strongly guaranteed. And because it can track the whole process of data, it can ensure the validity and reliability of data.Using this technology in the physical health testing system can control the source of testing data and the whole process of testing data development. Users can track the query, modify, delete, and use records of test data to realize the whole process monitoring of test data. Therefore, data tampering can be prevented and the reliability of test data is guaranteed.It is necessary to select the appropriate time for stamping the time stamp, as early or late stamping can adversely affect test data. Students' physical health test items are diverse, the number of test people is large, and the test data generated is huge. If each project's test data or each person's test data are time-stamped, more data will be generated and too much memory will be occupied and resources will be wasted. Therefore, after the physical health test was completed on the day, the supervisors of the third-party supervision organization stamped the physical health test data. General physical fitness testing will continue for several days. The timestamp at the end of each test can ensure the reliability of data and avoid the data burden caused by too many time stamps, as shown in The timestamp technology selected in this article has high requirements on the third-party supervisory organization, because the time stamp needs to be strictly implemented and grasped by the third-party supervision organizations. But at present, the independence of the third-party supervisory organization in our country is not enough and the laws and regulations are absent . It needThe data storage problem is unresolved. Video data take up more memory than pure digital data, so reducing memory consumption is also extremely important in system development. In this respect, relevant professionals are needed to further optimize.This article puts forward some problems existing in the physical health testing system of university students and puts forward optimization countermeasures for these problems, but there are still deficiencies in some areas, which are given as follows:The intelligent student's physical health testing system is the product of the progress of times and also the development direction of the student physical health testing system in the future. Providing scientific and comprehensive health guidance for students, promoting the development of lifelong physical education, and finally realizing the wisdom of physical health management are the ultimate goals of the physical health testing system. 
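Returning to the end-of-day stamping workflow described above, the sketch below hashes a day's test records and binds the digest to a signed time assertion. It is our illustration, not the system's implementation: a real deployment would submit the digest to a third-party time-stamping authority (e.g. via RFC 3161) rather than signing locally, and the key handling here is deliberately simplified.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SUPERVISOR_KEY = b"held-by-third-party-supervisor"   # placeholder secret

def stamp_daily_records(records):
    """Hash one day's test data and attach a keyed time assertion."""
    # Canonical serialization so the digest is reproducible on re-check.
    payload = json.dumps(records, sort_keys=True).encode("utf-8")
    digest = hashlib.sha256(payload).hexdigest()
    stamped_at = datetime.now(timezone.utc).isoformat()
    token = hmac.new(SUPERVISOR_KEY, f"{digest}|{stamped_at}".encode(),
                     hashlib.sha256).hexdigest()
    return {"digest": digest, "stamped_at": stamped_at, "token": token}

def verify(records, stamp):
    """Any later edit to the records changes the digest and fails here."""
    payload = json.dumps(records, sort_keys=True).encode("utf-8")
    if hashlib.sha256(payload).hexdigest() != stamp["digest"]:
        return False
    expected = hmac.new(SUPERVISOR_KEY,
                        f'{stamp["digest"]}|{stamp["stamped_at"]}'.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamp["token"])

day = [{"student": "2019-0101", "item": "50m", "score": 7.9}]
s = stamp_daily_records(day)
print(verify(day, s))          # True
day[0]["score"] = 7.1          # tampering after the stamp...
print(verify(day, s))          # ...is detected: False
```

Stamping once per test day, as proposed above, keeps the number of stamps (and the storage burden) small while still making any post-hoc modification detectable.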
Accurate and true physical health test data can not only reflect the real physical condition of current Chinese college students and provide a solid data basis for the formulation of national policy documents, but also promote the development of follow-up service modules of the physical health test system and guide students to do physical exercise more accurately and scientifically. Therefore, this study starts by optimizing the data acquisition module of the physical health testing system to achieve a more accurate and comprehensive acquisition of students' physical health data, provide perfect information support for the follow-up service of physical health tests, and promote the healthy development of students' physical health."} +{"text": "In protein aggregation disorders, we assume that, during the process of protein aggregation, different types of aggregated species are formed, some of which can be toxic to cells/tissues/organs. Recent evidence from numerous studies in cell and animal models of disease suggest that oligomeric species of different proteins might be more toxic that the larger, fibrillar forms. However, we still lack definitive data on the nature of the toxic species, mostly due to our inability to detect and define the various protein species that form as protein aggregate. The terms used are often broad and do not capture inter-laboratory variation in protocols and methods used for the characterization of aggregates. Even antibody-based methods can be ambiguous, as antibodies are delicate tools. Therefore, systematic and interdisciplinary studies are essential in order to guide future developments in the field. Neurobiology of Disease, Kumar and colleagues conducted an impressive amount of careful and rigorous work on a topic of great relevance in the field of synucleinopathies\u2014the study of different forms and assemblies of alpha-synuclein (aSyn), a protein deeply implicated in these diseases [In a study recently published in diseases .We were already used to a level of rigor that is characteristic from this research group and, this time, there is no surprise either. The study is systematic, carefully planned and executed, and brings about important knowledge that, if nothing else, highlight the need for uniformization in terms of protocols and language used in the field.Even if they may be less toxic , I find The Kumar et al. study took brute force and assessed the behavior of a panel of 18 antibodies in the context of in vitro-prepared aSyn species: monomers, oligomers, and fibrils. The authors used a variety of antibody-based techniques, including immunoblot analyses and ELISA, and surface plasmon resonance (SPR), and found that, at the concentrations of aSyn tested, the antibodies lacked the specificity that one would expect based on the literature.The findings may appear surprising to some but, in fact, I believe they were to be expected, as comparing results obtained in different laboratories is often difficult due protocol differences. For example, the antibodies tested were developed using aSyn species produced in other laboratories, using different methods and, therefore, the results now obtained may not be directly comparable to those previously reported. 
In addition, and as the authors point out, a major limitation is to know how the aSyn species used as reference in the study relate to those accumulating in the human brain\u2014one may expect them to be very different in fact, due to the absence of the posttranslational modifications that take place in any biological context, and due to absence of protein interactors in an in vitro system. The recent study by Schweighauser et al., using material derived from the brains of individuals with dementia with Lewy bodies or multiple system atrophy, demonstrates the formation of distinct types of assemblies in different diseases . MoreoveThe study by Kumar et al. has merit in as much it highlights the fact that we need to be cautious when assuming the specificity of the numerous antibodies used in the field, including those commercially available, but one should also recognize the antibodies tested have been widely used by many expert groups and shown to be valuable tools. At any rate, perhaps the most striking message of the Kumar et al. study is to highlight the need for more precise guidelines and standardization in the field, so that the community can actually compare results and address important outstanding questions, such as the eternal question of what the toxic protein species is/are. This requires the community to work together, as this is the only way we might move forward and rationally develop tools and strategies for therapeutic intervention in these devastating diseases."} +{"text": "X. We show that the solution to this problem is expressed in terms of a bathtub principle that holds out those samples with the lowest local accuracy up to an X-dependent threshold. To illustrate the usefulness of this analysis, we apply it to a multiplex, saliva-based SARS-CoV-2 antibody assay and demonstrate up to a 30 % reduction in the number of indeterminate samples relative to more traditional approaches.In diagnostic testing, establishing an indeterminate class is an effective way to identify samples that cannot be accurately classified. However, such approaches also make testing less efficient and must be balanced against overall assay performance. We address this problem by reformulating data classification in terms of a constrained optimization problem that (i) minimizes the probability of labeling samples as indeterminate while (ii) ensuring that the remaining ones are classified with an average target accuracy But theindeterminate class for which one cannot draw meaningful conclusions, although this is not always chosen to be near a cutoff minimizes the fraction of indeterminate samples while (II) correctly identifying the remaining ones with a minimum average accuracy per se are not fundamental quantities of interest in our analysis. As discussed in Rather, we demonstrate that it is more useful to define accuracy as a prevalence-weighted, convex combination of specificity and sensitivity, since this naturally interpolates between the aforementioned degenerate cases. This choice also highlights an important (but often-ignored) fact: optimal classification domains, sensitivity, and specificity all change with prevalence. Thus, they are not static metrics of the assay performance in a setting where a disease is actively spreading. For more in-depth discussion, we refer the reader to Ref. P(r) and N(r) of a measurement outcome r \u2013 i.e. a local property \u2013 for (known) positive and negative samples. As shown in Ref. 
and N(r) also directly define the local accuracy Z(r), whose global counterpart X is the average value of Z(r). We next observe that points on the boundary between the optimal positive and negative domains have Z = 50 %, its lowest possible value. The corresponding points are the first to be held out, since they contribute most to the average error. Moreover, one sees that systematically removing the least accurate r yields the fastest increase in the global accuracy for the remaining points. The bathtub principle formalizes this idea. We also emphasize that the concept of classification accuracy thus has both a local and a global character.

From a practical standpoint, the main inputs to our analysis are training data associated with positive and negative samples; thus our approach is compatible with virtually any antibody assay. These data are used to construct the conditional PDFs P(r) and N(r), so that the classification and holdout problems are reduced to mathematical modeling. This is also the key limitation of our approach insofar as such models are necessarily subjective. However, this problem is not unique to our method. Where possible, we incorporate objective information about the measurement process. The remainder of this manuscript is organized as follows.

Our analysis is grounded in measure theory and set theory. We review relevant concepts here; readers well-versed in these topics may skip this section. By a set, we mean a collection of objects, e.g. measurements or measurement values. By a domain, we typically mean a set in some continuous measurement space. The symbol \u2208 indicates set inclusion; that is, r \u2208 A means that r is in set A. The symbol \u2205 denotes the empty set, which has no elements. The operator \u222a denotes the union of two sets; that is, C = A\u222aB is the set containing all elements that appear in either A or B. The operator \u2229 denotes the intersection of two sets; that is, C = A\u2229B is the set of elements shared by both A and B. The operator / denotes the set difference; we write C = A/B to mean the set of all objects in A that are not also in B. Note that in general, A/B \u2260 B/A. Equivalently, A/B can be interpreted as the \u201csubtraction\u201d or removal from A of the elements it shares in common with B. The notation A = {r : *} defines the set A as the collection of r satisfying condition *.

Unless otherwise specified, the \u201csize\u201d or measure of a set refers to the probability of a sample falling within that set, i.e. its probability mass. By the same token, we generally avoid using size to describe the actual dimensions (in measurement space) of a domain. Throughout we also distinguish between training data and test data. The former is used to construct probability models, whereas the latter is the object to which the resulting classification test is applied.

We begin with the mathematical setting underlying classification. Consider an antibody measurement r, which can be a vector associated with multiple distinct antibody targets. We take the set of all admissible measurements to be \u03a9. Our goal is to define three domains, DP, DN, and Dh, corresponding to positive, negative, and indeterminate (h for \u201chold-out\u201d) samples. In particular, we say that a test sample r is positive if it falls inside DP, negative if it falls inside DN, and indeterminate if it falls inside Dh. Given that P(r) and N(r) are conditional probabilities associated with positive and negative samples, define the measures of a set S \u2282 \u03a9 with respect to P and N to be \u03bcP(S) = \u222bS P(r)dr and \u03bcN(S) = \u222bS N(r)dr, so that \u03bcP(S) is the probability of a positive sample falling in S, etc.
We require that these domains have several basic properties to ensure that they define a valid classification scheme. In particular, we require that S \u2229 S\u2032 = \u2205 for S \u2260 S\u2032, for S, S\u2032 chosen from {DP, DN, Dh}, and that DP \u222a DN \u222a Dh = \u03a9; that is, the domains are mutually disjoint and together cover the set of admissible measurements.

Within this context, we define the total error rate to be E(DP, DN) = (1 \u2212 p)\u03bcN(DP) + p\u03bcP(DN), where p is the prevalence. [See Ref. for methods that estimate p without needing to classify.] The terms on the right-hand side (RHS) are the rates of false positives and false negatives. Importantly, indeterminates are not treated as errors: E is the error rate over all samples, not the error rate of the assay restricted to samples that fall only within DP \u222a DN.

In Ref. we showed that the unconstrained (i.e. no indeterminate) binary classification minimizing E is given by DP\u22c6 = {r : pP(r) > (1 \u2212 p)N(r)} and DN\u22c6 = {r : pP(r) < (1 \u2212 p)N(r)}; these domains cover the whole set \u03a9 up to sets of measure zero. In the present work, we assume that there is a desired average accuracy X, and we note that Q(r) = pP(r) + (1 \u2212 p)N(r) is the probability of a test sample taking a value r. We then seek domains that minimize the probability mass of Dh, subject to the constraint that the samples falling in DP \u222a DN are classified with average accuracy X.

To solve this problem, it is useful to introduce several auxiliary concepts. In particular, define the local accuracy of the unconstrained (i.e. no indeterminate) binary classification to be Z\u22c6(r) = max[pP(r), (1 \u2212 p)N(r)]/Q(r). The solution is then given by a bathtub principle: Dh\u22c6 = {r : Z\u22c6(r) < Z0(X)}, DP = DP\u22c6/Dh\u22c6, and DN = DN\u22c6/Dh\u22c6, where the waterline Z0(X) is the solution to the equation \u222bC Z\u22c6(r)Q(r)dr = X \u222bC Q(r)dr with C = {r : Z\u22c6(r) \u2265 Z0}, which depends on X. In this way, Z0(X) is the lower bound on the local accuracy for sets that can be classified. A subtlety arises if the level set on which Z\u22c6(r) = Z0(X) has non-zero probability mass. In this case, not all of these points need to be held out if doing so would overshoot X, and the choice of which points to make indeterminate then becomes subjective, as they all have the same local accuracy. In practice (e.g. for smooth PDFs), this level set has zero measure with respect to Q, so that we can ignore it.

Computing Z0(X) is the key step in defining the optimal classification domains. Fortunately, the interpretation afforded by the bathtub principle suggests a simple bisection strategy based on the fact that 1/2 \u2264 Z\u22c6(r) \u2264 1. Let \u03b60 = 3/4 be an initial guess for the value of Z0(X), and let \u03b6j be the jth update computed iteratively as follows. For each \u03b6j, compute the average accuracy of the samples retained when all points with Z\u22c6(r) < \u03b6j are held out; if this accuracy is greater than X, set \u03b6j+1 = \u03b6j \u2212 2^(\u2212(j+3)), and otherwise set \u03b6j+1 = \u03b6j + 2^(\u2212(j+3)). After M iterations, Z0(X) = \u03b6M + \u03f5Z, where |\u03f5Z| \u2264 2^(\u2212(M+3)) is the error in the estimate of Z0(X). For context, 20 iterations of this algorithm yields errors \u03f5Z on the order of 1 in 10^7, at which point Z0(X) is identified to sufficient accuracy.
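Because the published equations survive here only in prose, the following minimal numerical sketch may help fix ideas. It assumes the densities P and N have already been evaluated on a discretized grid (as NumPy arrays), that the prevalence p and target accuracy X are known, and that grid cells are fine enough for probability masses to approximate the integrals; the function names and discretization are illustrative assumptions, not the authors' code.

    import numpy as np

    def z_star(P, N, p):
        # Local accuracy of the optimal binary (no-indeterminate) classifier:
        # Z*(r) = max[p*P(r), (1-p)*N(r)] / Q(r), with Q = p*P + (1-p)*N.
        pos, neg = p * P, (1.0 - p) * N
        return np.maximum(pos, neg) / (pos + neg)

    def waterline(z, q_mass, X, iters=20):
        # Bisection for Z0(X): zeta_0 = 3/4 and zeta_{j+1} = zeta_j -/+ 2^-(j+3),
        # so the error after M iterations is bounded by 2^-(M+3).
        zeta = 0.75
        for j in range(iters):
            keep = z >= zeta                      # samples still classified
            mass = q_mass[keep].sum()
            acc = (z[keep] * q_mass[keep]).sum() / mass if mass > 0 else 1.0
            step = 2.0 ** (-(j + 3))
            zeta = zeta - step if acc > X else zeta + step
        return zeta

    def bathtub_domains(P, N, p, X):
        # Hold out the least-accurate points (bathtub rule), then split the
        # remainder by the sign of p*P - (1-p)*N.
        z = z_star(P, N, p)
        q = p * P + (1.0 - p) * N
        q_mass = q / q.sum()                      # probability mass per grid cell
        z0 = waterline(z, q_mass, X)
        hold = z < z0
        positive = (p * P > (1.0 - p) * N) & ~hold
        negative = ~hold & ~positive
        return positive, negative, hold

On a fine grid, holding out {r : Z*(r) < Z0(X)} reproduces the minimal-mass indeterminate domain described above, up to discretization error.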
To illustrate the analysis, we apply it to the multiplex, saliva-based SARS-CoV-2 antibody assay. Raw measurements d are first transformed via log2[d + 2] \u2212 1, which corresponds to representing the data in terms of bits. Empirically we also find that this transformation better separates the positive and negative populations. Total IgG values are then rescaled to the domain [0, 1] by dividing each measurement by the maximum. SARS-CoV-2 measurements are similarly rescaled to the domain [0, 1], although we divide the log-transformed data by 7, since there were no samples with saturated values. After transformation, each sample is represented by a two-dimensional vector r = (x, y), where x is the normalized total IgG value, and y is the normalized SARS-CoV-2 counterpart. The results of this transformation are shown in the corresponding figure.

The goal of the analysis is to maintain accuracy while decreasing the number of indeterminate samples by finding the domain Dh\u22c6 with the least probability mass. We remind the reader that size does not refer to the volume in measurement space; rather, it refers to the fraction of samples expected to fall within the domain, since this is what controls the number of indeterminate samples. Thus, it is possible for a holdout domain that is large in measurement space to nonetheless contain few samples.

To motivate our probability models, we consider the phenomena that could affect measurements. In particular, we anticipate that for positive samples, there should be a degree of correlation between total IgG and SARS-CoV-2 specific antibodies. However, at extreme total IgG values, the SARS-CoV-2 levels may become independent, as (i) all measurements will revert to noise when x \u2192 \u2212\u221e, or (ii) SARS-CoV-2 antibody levels will decouple from total antibody levels when the latter is excessively high, e.g. if an individual has been exposed to a large number of different pathogens. We also recognize that the ELISA instrument only reports numerical values on a bounded domain. Thus, fluorescence levels above xmax are rounded down to the upper bound, and levels below xmin are rounded up to the lower bound, producing censored point masses at x = xmin and x = xmax.

Within the domain 0 < x < 1 and 0 \u2264 y \u2264 \u221e, we assume that the SARS-CoV-2 measurements are well described by a Gamma distribution with a fixed (but unknown) scale factor and a shape parameter with a sigmoidal dependence on x. This dependence is motivated by the correlation described previously. Taken together, this yields a PDF defined for 0 \u2264 x \u2264 1, 0 \u2264 y < 1, whose parameters \u03bc, \u03c3, \u03b8, and the \u03b1j are to be determined; here \u03b4(x) is the Dirac delta function, and boundary functions at the left (l) and right (r) bounds are defined so that probability mass at x = 0 (x = 1) will be mapped back to the lower (upper) instrument bound. We emphasize that the use of delta functions reflects the instrument censoring rather than a physical effect. The free parameters are determined via maximum likelihood estimation using a censoring-based technique. We then truncate the y-domain to be 0 \u2264 y \u2264 1 and renormalize the resulting PDF on this domain.

To model the function N, we anticipate that non-specific binding of the total IgG antibodies to the SARS-CoV-2 antigens will lead to a degree of correlation, albeit to a lesser extent than for positives. Thus, we use the same form as for P, but refit the parameters using the negative training data. For the negative PDF, the fitted parameters can be inferred from the contour lines in the figure and are thus not shown. Plots of Z\u22c6(r) show the waterlines necessary to achieve different average accuracies, and the bathtub principle is visible in the latter. Note that indeterminates are concentrated in regions where there is significant overlap between positive and negative samples.
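As a deliberately simplified illustration of this model class, the sketch below implements a Gamma density in y whose shape parameter varies sigmoidally with x. The logistic parameterization, the parameter names, and the omission of the boundary delta functions and of the marginal density in x are assumptions made for illustration; the paper's exact functional form may differ.

    import numpy as np
    from scipy import stats

    def shape_param(x, alpha0, alpha1, mu, sigma):
        # Sigmoidal (logistic) dependence of the Gamma shape on total IgG x.
        return alpha0 + alpha1 / (1.0 + np.exp(-(x - mu) / sigma))

    def interior_density(x, y, alpha0, alpha1, mu, sigma, theta):
        # Interior part of the positive-sample model for 0 < x < 1, y > 0:
        # a Gamma distribution in y with fixed scale theta and x-dependent shape.
        k = shape_param(x, alpha0, alpha1, mu, sigma)
        return stats.gamma.pdf(y, a=k, scale=theta)

In practice, the free parameters would be fit to training data by censored maximum likelihood, as described above, and the density truncated to 0 \u2264 y \u2264 1 and renormalized.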
To validate that the domains constructed above solve the constrained optimization problem, we formally define a \u201cpoint-swap derivative\u201d, which quantifies how the probability mass of the indeterminate domain changes when an infinitesimal neighborhood of a held-out point r\u2032 is exchanged for a neighborhood of a classified point r. Here Z(r) can be an arbitrary definition of local accuracy, although in practice we take Z(r) = Z\u22c6(r) in this section. In swapping r\u2032 from the indeterminate domain for r, we must ensure that the accuracy constraint remains satisfied. If Z(r) \u2212 X < Z(r\u2032) \u2212 X < 0, then adding r to the classified set infinitesimally decreases the global accuracy, so that we must hold out a larger yet still infinitesimal fraction of Q in the vicinity of r\u2032. It is clear that the required fraction diverges as Z(r\u2032) \u2192 X and becomes negative for Z(r\u2032) > X and Z(r) < X. The interpretation of this is straightforward: we should always reverse any swap for which a point with local accuracy greater than the average is put in the indeterminate class. Such points are not considered in the analysis below. Note that swapping any point in the indeterminate region with one in the positive and negative classification domains increases the size of the indeterminate domain, as expected. The benefit of this analysis is that it validates the optimality of the domains in terms of Z(r) directly; in particular, it relies only on the fact that 1/2 \u2264 Z\u22c6(r) \u2264 1.

Examination of the optimal domains shows that the associated sensitivity and specificity change with X. This is not because the underlying assay changes, but rather because the relative fraction of positives and negatives differs on the classified domains, which depends on the specifics of the probability models. Mathematically, we understand these observations by rewriting the accuracy constraint as X = \u03c1Se + (1 \u2212 \u03c1)Sp, where Se = [\u222bDP P(r)dr]/[\u222bDP\u222aDN P(r)dr], Sp = [\u222bDN N(r)dr]/[\u222bDP\u222aDN N(r)dr], and \u03c1 is the prevalence restricted to DP \u222a DN. Thus, we see that the constraint corresponds to a domain-restricted, prevalence-weighted sum of sensitivity and specificity. Further implications of this observation are explored in the next section.

From a theoretical standpoint, the relative fraction of positives from an assay using indeterminates is not a reliable estimator of total prevalence. In order for the restricted prevalence \u03c1 to equal p, one requires that the indeterminate domain hold out positives and negatives in proportion to their overall rates. However, an immediate practical consequence of our earlier work is that an unbiased estimate of the total prevalence can be constructed without classifying samples, using a simple counting exercise on subdomains of \u03a9. The validity of that method is independent of the assay accuracy, so that it can be used to estimate p in the present work. Indeed, such techniques are necessary to construct the optimal classification domains, given the fundamental role of p in their definitions. We refer the reader to Ref. for details.

A possible generalization of our approach is to recast the constraint in terms of class-specific accuracies Zn and Zp, which apply only to samples in the negative and positive classification domains. For example, in one analysis we set X = 0.99 but required that the empirical specificity be 100 % for the training set. To accomplish this, we set Zp = 0.972, which augments the size of the indeterminate domain without decreasing the number of true negatives.

Note also that the textbook definitions of sensitivity and specificity make no reference to a population: the notion of prevalence, i.e. implying existence of a population, does not enter; rather, all that is needed is a choice of the classification domains. Thus, an assay can have exceptional sensitivity and yet still be wrong half the time if the prevalence is low. In a related vein, it is clear that specificity and sensitivity only characterize assay accuracy in the limits p \u2192 0 and p \u2192 1, respectively.

Here we encourage a new perspective. As a baseline strategy, the most important task is to correctly classify samples; at least this is of the utmost importance to patients. Moreover, computing accurate prevalence estimates is critical for epidemiologists. With this goal in mind, sensitivity and specificity are subservient to accuracy: (i) the average accuracy of classified samples must be X; and (ii) the prevalence-weighted sensitivity and specificity must equal X. The equivalence of these interpretations arises from the fact that notions of accuracy assume the existence of a population to which the test is applied. Thus, Se and Sp lose their status as the key performance metrics that define the \u201cquality\u201d of an assay, and they cannot be viewed as static properties. Such observations are not to say that Se and Sp are useless, however. Clearly there are times when it is more important to correctly identify samples from one class, and this motivates the generalization in terms of Zp and Zn above. These observations also clarify why the prevalence sets a natural scale for classification. The benefit of treating prevalence-weighting as a natural framework for diagnostic classification is that one can easily identify when subjective elements (i.e. those
not intrinsic to the population) have been added to the analysis. For example, an indeterminate domain enlarged via a class-specific accuracy such as Zp is one such subjective element. Ultimately the choice of classification method is best determined by assay developers, and there may be situations in which prevalence weighting is inappropriate. Nonetheless, we feel that the analysis herein highlights the assumptions behind our work and attempts to ground it in objective elements inherent to the population of interest.

A fundamental limitation of our analysis is the assumption that the probabilistic models describing positive and negative samples can be used outside the scope of training data. This problem is common to virtually any classification scheme and is primarily an issue of modeling. Such issues have been explored in a previous manuscript, to which we refer the reader. Regarding the indeterminate analysis, a practical limitation is the definition of assay performance, provided we allow for variable, prevalence-dependent classification domains. Current standards advocate using sensitivity and specificity estimated for a single validation population having a fixed prevalence. To realize the full potential of our analysis, it is necessary to (i) estimate assay accuracy and the uncertainty therein, (ii) characterize the admissible classification domains, and (iii) compute sensitivities and specificities, all as a function of the variable prevalence. While such issues have been partly considered in prior work, a deeper treatment is left for future study."} +{"text": "The Salmonella enterica bacteriophage P22 is one of the most promising models for the development of virus-like particle (VLP) nanocages. It possesses an icosahedral T = 7 capsid, assembled by the combination of two structural proteins: the coat protein (gp5) and the scaffold protein (gp8). The P22 capsid has the remarkable capability of undergoing structural transitions into three morphologies with differing diameters and wall-pore sizes. These varied morphologies can be explored for the design of nanoplatforms, such as for the development of cargo internalization strategies. The capsid's proteinaceous nature allows for extensive modification of its structure, enabling the addition of non-native structures to alter the VLP's properties or adapt the particles to diverse ends. Various molecules have been added to the P22 VLP through genetic, chemical, and other means, to both the capsid and the scaffold protein, permitting the encapsulation or the presentation of cargo. This allows the particle to be exploited for numerous purposes\u2014for example, as a nanocarrier, nanoreactor, and vaccine model, among other applications. Therefore, the present review intends to give an overview of the literature on this amazing particle. Recent years have been marked by a growing interest in the field of nanotechnology, especially its applications in health science.
This has been due to the development of new technologies on a nanometric scale, an increase in the demand for alternative or more efficient methods in the treatment of diseases such as cancer, or the need to contain growing threats such as the proliferation of antibiotic-resistant bacteria ,3,4,5,6.Nanoparticles can be applied to a great number of functions: because they can act on the healing and regeneration of damaged tissue; as nanometric reaction centers; as the scaffold for the synthesis of organic and inorganic structures; and as carriers for the controlled delivery and release of therapeutic agents, where this last application is one of the most prominent ,5,6,7,8.The vast range of materials that have already been used to design different types of nanoparticles is one of the most prominent virtues of these systems, among which include particles based on lipids, polymers, metals, and proteins. All of them have the potential to promote safe and effective drug transportation through the loading of cargo with therapeutic properties, but their properties significantly vary, where each type has advantages and issues ,4,5,7,8.Among the developed protein cages, the most prominent are the virus-like particles (VLP) ,11. ThesAmong the different VLP models, one of the most promising is the VLP P22 derived from the bacteriophage P22. Over the last decade, the interest in and the research on this particle have been considerably growing, with a great many numbers of papers published demonstrating the ample capacity to modify its structure and the plethora of cargoes that have already been loaded in its interior. This paper aims to perform a nonexhaustive review of the research so far conducted on the VLP P22 as a nanoparticle, paying special attention to its application as a nanocarrier, while also offering a brief record of the history of the research of this exceptional structure and possible routes that the research can take.The interest in P22 bacteriophage studies has increased since its discovery in 1952. In order to analyze the growth of the phage literature, bibliographic research was conducted by using the Web of Science platform. First, we searched for articles and papers that contained the terms \u201cBacteriophage P22\u201d or \u201cPhage P22\u201d A, to takSalmonella enterica sorovar typhimurium, first described in 1952 by Zinder and Ledeberg . In. In16]. solution . One drasolution ,66. Howesolution ,65,66.In a subsequent study, it was possible to simultaneously encapsulate two cargo proteins by using this SP-mediated approach . A triplCargo can have remarkable impacts on the physical properties of the capsid, as seen in a study by Llaur\u00f3 et al. . In it, Another important aspect associated with the interaction between cargo and particle is the capability of the former to exit its interior. For the development of an efficient delivery system, an optimal cargo release strategy must be utilized. Thus, the evaluation of the cargo\u2019s effect and influence on both the SP and the whole particle becomes imperative. The elucidation of the mechanisms that act in this dynamic allow the development of strategies that promote an enhanced release of cargo maximizing the potential of VLP P22 as a nanocarrier. A study evaluated the cargo exit behavior over time with different traits and under different conditions, such as VLP P22 morphology, encapsulated molecule size, varying temperatures and ionic strengths, and SP length . 
In addition to modifying its interior, the plasticity of the P22 capsid allows for extensive modifications to its exterior surface, enabling its use for multiple ends. Similar to what was previously mentioned for other VLPs, VLP P22 can be decorated with a remarkable variety of structures through an equally diverse number of approaches of chemical crosslinking or genetic alteration to the capsid structure [51,56,81]. Among the different types of architecture manipulation, the genetic approaches are particularly preferable thanks to their requiring few steps and their resulting in a higher homogeneity in the particle population. The mutations are carried out in regions that do not harm the VLP structure and simultaneously offer some advantage to its application [81]. In one such study, capsids were decorated with either a positively charged K-coil \u03b1-helix or the negatively charged E-coil ((VAALEKE)3) \u03b1-helix, because when mixed together these peptides form a heterodimer. It was observed that particles containing one of the peptides interacted with particles containing the other almost immediately after mixing, as seen by the increase in light scattering observed by UV spectroscopy, indicating the formation of a larger structure [85]. One of the most interesting methods for the release of cargo proposed the triggering of particle disassembly at physiological pH under a defined and controllable stimulus, which did not rely on environmental conditions [77]. One rising use for VLPs is as vaccine models, which have presented some advantages over other vaccine types [114]. The hollow interior of the VLP creates a very constrained space that can be explored for the study and development of contained reactions, especially enzymatic ones. The sturdy protection that the VLP structure confers to cargo is particularly interesting for the delivery of enzymes, given that their functionality depends on the proper folding of their structure. Furthermore, the confined space provided by the capsid can be explored to study the impact of molecular crowding on enzyme activity. Thus, the P22 VLP has been extensively explored for enzyme transport as a nanocarrier, releasing the enzymes into the environment, or as a nanoreactor, when the reaction occurs inside the VLP. The high number of confined enzymes can increase the velocity of reactions. This construct can be used in medicine, such as for diseases that originate from a lack of enzymatic activity, cancer treatment, and antimicrobial applications [70,71,72]. The previously mentioned cytochrome P450 (CYP), an enzyme from Bacillus megaterium that was encapsulated through genetic fusion to the scaffold protein (SP), can act in two ways: first by the production of ROS inside tumor cells and second by the activation of tamoxifen, a classic prodrug used in some types of cancer [71]. Qazi et al. likewise encapsulated enzymes within the P22 capsid. Wang et al. constructed a nanoreactor encapsulating Pasteurella multocida bifunctional glutathione synthetases (GshFs). GshFs are enzymes responsible for synthesizing glutathione (GSH), a tripeptide responsible for detoxifying metabolites, including ROS and strong electrophiles in the liver. Given that P22 has previously shown predominant accumulation in the liver [70], it was considered a promising vehicle for delivering this detoxifying activity there. Although the total catalytic activity of a nanoreactor made of P22 VLP is higher than that of the free enzymes, this can be further improved by obtaining the best of the enzyme's functionality.
The individual enzyme activity reduction reported before may be related to factors such as (1) substrate diffusion limitation, which depends on the pores\u2019 size and how the molecules are distributed inside the VLP; (2) enzyme confinement, which increases the interactions and limits the flexibility of enzymes once enzyme freedom is important to catalysis; and (3) changing the activation site, which can be affected by the crowding inside the VLP or by the genetic fusion to the SP as it affects the isoelectric point of the enzyme ,70,71. T2, to construct a bifurcated pathway catalyst system. Both were functionalized with N-chlorosuccinimide (NCS), which has a highly reactive N-Cl bond, used to crosslink with amine groups. The results showed that the VLP kept the catalysts in proximity, which enhanced the electron transfer needed for catalysis. The final system had a catalytic turnover higher than that of free catalysts, and it was independent of bulk concentration because the concentration inside the VLP remains constant. Furthermore, the pathway of reaction can be controlled; i.e., it is possible to induce the production of NADH or the production of H2 by varying the pH and the ratio between Eosin-Y and cobaloxime [In addition to enzymes, the VLP can be a nanoreactor using an inorganic catalyst, and its bioavailable but resistant shell makes it applicable for biological systems. Edwards et al. encapsulbaloxime .2O3 NP) and yet improved the packaging by using a polyanionic peptide (ELEAE) fused to a truncated SP (239\u2013303), which mimics the ferritin protein. It attracts the iron ions to the interior of the VLP, and it increases the homogeneity of NPs, resulting in spherical and highly monodisperse (41 \u00b1 5 nm) NPs [Synthesizing nanomaterials is a great challenge to material science owing to their high reactivity, which promotes aggregation and the loss of properties. The P22 VLP can act as a platform to synthesize these materials in an easy and controlled way. VLPs can trap inorganic cargoes by simple diffusion, exploiting their affinity for the cargo. Attaching auxiliary molecules on the SP and CP proteins, e.g., crosslinkers, can even more expand the variety of cargoes that can be encapsulated. Reichhardt et al. showed t nm) NPs . In anot nm) NPs synthesi nm) NPs .Aside from particles, polymers were already synthesized inside the P22 VLP, as mentioned previously. For that, it is common to use a cysteine residue mutant of P22 without the SP to bind cysteine-reactive initiators. The sites have to be wisely chosen once the polymer access to the exterior surface can interfere in particle stability and promote interparticle connections, causing precipitation ). The K1T = 206 mM), the repulsion and attraction forces were equilibrated, producing an organized and well-spaced, crystalline, and face-centered cubic (FCC) structure. The optical microscopy of the P22-E2 superlattices revealed the presence of particulates with sizes in the range of 1\u201310 \u03bcm. One of the interesting uses for such a structure is as an efficient two-step catalyst, connecting VLPs with different enzymes that work together [The VLPs can also be used as building blocks to form a three-dimensional active superlattice, which is a new research field that has been on the rise ,90,120. together through together . By usintogether . 
The bacteriophage P22 has a remarkable research history, which began in the 1950s and which remains active today. Throughout this review, the many virtues of the usage of the P22 capsid as a VLP were presented: from its efficient method of synthesis through heterologous expression to the structural plasticity that permits extensive modification of both its interior and exterior. However, another promising realm of application that has yet to be better explored is the P22 VLP\u2019s immunogenic potential [117], particularly as a vaccine platform."} +{"text": "According to the literature, educational technologies present several learning benefits that promote online education. However, there are several associated challenges, and some studies illustrate limitations in the elaboration of educational technologies, called design limitations. This aspect is responsible for unleashing various issues in the learning process, such as gender inequality, creating adverse effects on cognitive, motivational, and behavioral mediators, which opposes the fifth UN Sustainable Development Goal. Therefore, many studies have noted the harmful effects of stereotypes in educational technologies. These effects can be embedded in the design, like colors or other stereotyped elements, or in how the activity is conducted. Based on this, the present study aimed to verify the predominance of color bias in educational technologies available on the WEB. This study developed a computational solution to calculate male and female color bias in the available educational technology web pages. The results suggest a prevalence of educational technologies developed with a male color bias, with an imbalance among genders and without adequate customization for age groups. Furthermore, some environments, such as Computer Science, present a higher color bias for men than for women. Despite both scales being independent, the results indicated interesting evidence of a substantial prevalence of colors associated with the male scale. According to the literature, this may be associated with dropout and lack of interest among female students, especially in the science, technology, engineering, and mathematics domains. Studies have debated various features of educational technologies, including the benefits, challenges, and strategies of online education. Textual analysis, which depends on specific language nuances, was left outside the scope of this study. Motivated by the adverse effects of stereotype threat in educational technologies, this study aimed to verify the existence of a prevalence in the level of color preferences (a.k.a. color bias) in educational technologies. Additionally, this study aimed to present how color design is used, considering specific aspects such as the type of technology, context, and target audience, regarding gender and age.
Given the availability of information on the web, we chose to focus on four types of educational technologies: (i) CMS\u2014content management systems; (ii) RLE\u2014remote learning environments; (iii) gamified environments; and lastly (iv) MOOCs\u2014massive open online courses, used as teaching technologies across seven teaching subjects: (1) Business, (2) Computer Science, (3) Languages, (4) Math, (5) Multidisciplinary, (6) Programming, and (7) Sciences. In order to evaluate the color bias in educational technologies and the prevalence of color preferences, the following research questions were formulated: What is the color preference (color-bias) present in educational technology design according to the teaching subjects (context)? What is the color preference (color-bias) concerning the colors present in the design according to the types of educational technologies? What is the color preference (color-bias) present in educational technology design according to the age range of the target group? The gender category was divided into male and female only.

This article is organized in the following manner: section two describes the theoretical framework and the related studies, presenting stereotype threats, the metrics used, and the gamified educational settings of this study. Section three presents the proposal and describes the tools used in this study. Section four presents and discusses the results. Lastly, in section five, the study conclusions are addressed.

The following section presents a brief literature review with the main concepts and theories adopted as a basis for the present study. Stereotype, in its conceptualization, has a Greek origin meaning \u201csolid impression\u201d (from stereos, \u201csolid\u201d, and typos, \u201cimpression\u201d). The concept was used to represent a form of impression manufactured in metallic parts for the production of books during the 18th century. Subsequent studies, such as those by Yeung and colleagues and Kuo and colleagues, have examined color preferences and how colors are presented across groups and technologies.

The data acquisition tool operates in four steps; a minimal sketch of the final step is given below. Encasing and anonymization of technology links: the algorithm receives as input a file called 'urls.txt' containing links to educational technologies. Afterward, it applies a hashing function to anonymize each access link. Given this, the algorithm creates a new spreadsheet (dictionary.csv) with the list of anonymized URLs to organize the samples that will be collected in the next step. Collection of page links: the algorithm accesses the spreadsheet file, follows the links to the educational technologies' homepages, and retrieves all pages contained in each technology that have access permission (more details in section \u201cEthics on data collection procedure\u201d); then, a new spreadsheet file (pages.csv) is created containing the pages associated with the educational technology being processed. Page screenshots: the algorithm accesses the 'pages.csv' spreadsheet file, scanning page by page, taking a screenshot, and saving it. Pixel collection and normalization: the algorithm randomly scans each of the screenshot images, collecting a total of 3000 colored pixels above the white color tone. All-white pages were discarded by the tool from further analysis; nonetheless, these pages were recorded in a file ('whitepageslist.txt'). In order to compute the average of the colors in the Red-Green-Blue (RGB) pattern, the algorithm applied pixel normalization to the colored/non-white pages. The RGB model was chosen as a broadly used standard and due to its compatibility with all color systems adopted for educational technologies' development.
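The pixel-collection step is described only in prose; the following is a minimal sketch of how it might be implemented, assuming the Pillow imaging library. The white-tone cutoff, file handling, and function names are assumptions for illustration and are not the authors' tool.

    import random
    from PIL import Image

    WHITE_CUTOFF = 245  # assumed threshold: pixels at/above this in R, G, and B count as white

    def sample_pixels(png_path, n_pixels=3000, seed=0):
        # Randomly sample up to n_pixels non-white RGB pixels from a screenshot.
        img = Image.open(png_path).convert("RGB")
        w, h = img.size
        rng = random.Random(seed)
        samples, attempts = [], 0
        while len(samples) < n_pixels and attempts < 50 * n_pixels:
            attempts += 1
            r, g, b = img.getpixel((rng.randrange(w), rng.randrange(h)))
            if not (r >= WHITE_CUTOFF and g >= WHITE_CUTOFF and b >= WHITE_CUTOFF):
                samples.append((r, g, b))
        return samples  # empty list -> an all-white page, to be logged separately

    def mean_rgb(samples):
        # Normalize a page's sample to a single average RGB triple.
        n = len(samples)
        return None if n == 0 else tuple(sum(c[i] for c in samples) / n for i in range(3))

A page whose sampler returns no pixels would be appended to 'whitepageslist.txt', mirroring the behavior described above.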
For the ethical mining stage, each technology's robots.txt file was consulted before collection. The file follows a structure specifying which agents may access which pages. Generally, an asterisk indicates that no computer agent (robot) may consult or access the respective pages listed in the body of the file. Some specifications allow robots to access certain content, such as Facebook or Twitter agents that can have access to profile content. Pages like users, profiles, products, buy, and about/personal have access restrictions for any agent. However, pages such as \u201cindex\u201d or \u201cabout\u201d may have granted access, as illustrated by the robots.txt example files in the corresponding figure. The literature concerned with such ethical concepts follows this convention (robots.txt or meta-tags) for web data mining of open linked data. In summary, the tool used in this study was developed in three stages: (i) web mining; (ii) ethical mining; and (iii) data collection.

This study conducted a manual search for educational technologies between August and September 2021. A total of 88 technologies were considered, each indexed by its respective access link; however, only 73 technologies were retained for analysis. Besides the access links for these educational technologies, other information was also extracted manually, such as type of technology, teaching subject, number of users, and age. This data was available either on \u201cabout us\u201d links or in reports made available by the educational technology itself. Therefore, it was possible to map four types of technologies manually: (i) CMS\u2014content management systems; (ii) RLE\u2014remote learning environments; (iii) gamified environments; and lastly, (iv) massive open online courses (MOOCs), divided into seven themes. Moreover, ages were recorded according to the target audience reported by the technologies. This primary data analysis revealed a total \u201cimpact\u201d of 2,494,082,054 users (registered students) in these educational technologies.

In order to understand the data in general terms and describe the general statistical analysis, the data was divided into two strands. Given the data description and characteristics, this study opted for robust statistical methods to analyze the results, due to the large number of issues reported by the literature. It is also vital to note that in this analysis, the computer science and business contexts had only one technology integrating the group. In contrast, most technologies tend to span diverse contexts, mainly toward independent learning of a discipline or course. Regarding the languages context, technologies that focused on teaching spoken or written language as mechanisms for literacy were considered. Among the STEM contexts, computer science showed a high male bias level. Moreover, this differed from the programming context because the specialty of the former is turned toward the disciplines composing computer science, whereas programming is only centered around the art of programming. The context class was elaborated considering the activities and courses the technology in question offers.

Regarding impact, as expected, technologies of multiple subjects presented the highest number of users. Nevertheless, an intriguing fact is that even when adding the educational technologies with a STEM focus, despite constituting a representative majority compared to languages, the impact provided by STEM was inferior, summing 6.372%, with a difference of almost 20% between these contexts. Such an effect can suggest a considerably low demand for courses in this category. Regarding technology types, the gamified environment type possessed the highest representativeness, with a total of 49 (63%) of the 73 educational technologies. Furthermore, it was the group of technologies that presented the highest impact.
One possible explanation may be that gamified technologies have become more prominent in recent years due to game elements and characteristics, which aggregate engagement and playfulness in the learning process.The technologies belonging to the computer science and business contexts. However, an opposite correlation is noted in behavior between female and male preference scales. In most cases, the mean values of the female and male scales tend to be presented in the opposite direction. In the sciences context, it is observed a mean of higher values for the female scale, whereas, for the male scale, there is mild evidence that it is the contextual modality with the lowest mean.The descriptive data helps to understand the gender-based differences related to preference level by context and reveals differences and variations among male and female color scales Fig. . It is i6\u201317 boxplot, which despite having a minimum value and first quartile lower than the remaining values, the correlated boxplot in the female scale does not present an opposite effect, differing from the behavior observed in the variation of scale levels by context.Figure cms type possesses a higher variability for the levels in the female scale. However, boxplots\u2019 behavior still presents a total predominance for the male gender in these technologies, as aforementioned.The preference levels of female and male scales under the technology type show that male scales presented a low variation between medians Fig. . In contThe analysis was segmented into two parts to facilitate results interpretation. The first part is related to evaluating the impact of color bias data only through the main pages belonging to educational technologies. The second part evaluated the combination of pages of each technology to understand the relationship between bias levels and their respective pages, adjusted to context, target audience, and age group, providing a deeper analysis.p-value for data belonging to a non-standard distribution confirm this 10%; (ii) 20% and, lastly, (iii) 30%. The results showed that the male bias level is always higher than the female in the technologies evaluated in this experiment. Beyond a high effect size, degrees of freedom (df) indicate the number of ways or dimensions in which the preference levels can move without violating the restrictions, therefore, continuing to have a significant result.Results of the comparison between the calculated male and female preference levels in each technology were organized with trimming levels and reliability levels, considering preference bias and effect size Table . The comIn order to understand comparisons between the quantiles The variation among male and female preference levels was plotted alongside its preference intervals Fig. . The plo\u22120.4947 taking into account the strength of correlation on standard scales. The p-value for this comparison was of 0.00002, indicating a significant correlation in this analysis.Results of the robust correlation level among preference levels, as well as their statistical significance, were calculated considering the critical reliability value of 95% CMS\u2014content management systems; (ii) RLE\u2014remote learning environments (AVA\u2014Ambientes Virtuais de Aprendizagem); (iii) Gamified Environments; and lastly (iv) MOOCs\u2014Massive open online courses.p-values (<0.001) presented statistically significant differences, indicating noteworthy differences among color bias in their technologies. 
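The robust comparisons reported here are described only at a summary level. As a hedged sketch, the following compares trimmed means of the two preference scales with a percentile bootstrap; the trimming proportions match those reported (10%, 20%, 30%), but the authors' exact routines (e.g., WRS-style robust tests) may differ, so this is illustrative only.

    import numpy as np
    from scipy import stats

    def trimmed_mean_diff(male_bias, female_bias, trim=0.2, n_boot=2000, seed=1):
        # Observed difference in trimmed means plus a 95% percentile-bootstrap CI.
        rng = np.random.default_rng(seed)
        observed = stats.trim_mean(male_bias, trim) - stats.trim_mean(female_bias, trim)
        boots = np.empty(n_boot)
        for b in range(n_boot):
            m = rng.choice(male_bias, size=len(male_bias), replace=True)
            f = rng.choice(female_bias, size=len(female_bias), replace=True)
            boots[b] = stats.trim_mean(m, trim) - stats.trim_mean(f, trim)
        low, high = np.percentile(boots, [2.5, 97.5])
        return observed, (low, high)  # an interval excluding 0 suggests a reliable difference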
A paired analysis using p-value adjustment in post hoc tests on the trimmed means was conducted to highlight divergent technologies, or those which possess high levels of preference bias. The existing correlation between educational technologies\u2019 colors with respect to the female preference scale is also weak to moderate, with a value of 0.26.

Results for technology-type color bias were calculated separately for each gender; the male-scale results are summarized in the corresponding tables, while one comparison did not reach significance (p = 0.38063). Statistically significant differences were also found in educational technologies by teaching subject on the female scale (Table 12). The age-group analysis did not indicate significant differences for males, whereas the female scale yielded a p-value of <0.001. CMS\u2014systems built exclusively for content management\u2014presented colors tied to the solution archetype and its respective educational resources. The student\u2019s follow-up is even higher in such systems, since they center on the presentation and exhibition of educational content, consistent with observations by De la Varre et al.

For the research question on context, the null hypothesis was rejected, indicating the presence of statistically significant differences and confirming hypothesis H3.1 of \u201cstatistically significant differences between the color levels in educational technologies by context\u201d. Nonetheless, the technology context with the highest level of male color bias and the lowest level of female bias was Computer Science. Once again, since women in this field of study can be considered a minority, can this bias be a fundamental factor in women\u2019s disinterest and evasion rates in this course modality? Some studies discuss representativeness and mediators such as anxiety among women in these courses.

Finally, the results of the fourth research question, color bias in educational technologies by age group, showed two strands for each age group. The first strand did not reject the null hypothesis for male color bias, and the second rejected the null hypothesis for female bias. The literature on color psychology and color preferences identifies that each age and gender presents a certain level of preferred colors. While divergence of colors can be based on gender, obtained through the scales used in this study, different age groups might present it as well (Hallock).

The presented and discussed results in this study align with the current literature. Despite both scales being independent, the results present evidence of a strong predominance of colors belonging to the male scale in the evaluated technologies. In other words, educational technologies are elaborated with a strong bias toward the male gender. This bias can be related to the larger number of male students who graduate in the listed fields of study compared to the number of female students who seek universities or further education in these areas.

Nevertheless, the development of technologies that consider the possibility of color customization is still limited. Different technologies, regardless of type and applied context, present low variance in color use when compared to each other. Furthermore, based on our results, gender should be a factor of utmost importance in making educational technologies more inclusive and egalitarian.
This limitation is perhaps an associated cause of the evasion of female students from the STEM fields. Despite independent preferences in the scales, it was possible to observe a dichotomy between colors, reinforcing the opposite effect of gender-related preferences. The existing correlation between male and female colors showed a moderate negative effect, indicating that the two scales tend to move in opposite directions.

This study comprised only 73 educational technologies collected randomly, with 3136 pages from the WEB. The target groups, with their respective ages, could be better defined if more precise information were available on the educational technologies\u2019 websites. Moreover, the number of users was estimated based on self-reported figures for some technologies, which can introduce inaccuracy, as these figures indicate only the number of registered students. We acknowledge that while there can be cases of more than one student using the same profile, there is also the possibility of students having more than one profile, thus causing variation in the actual number of users.

In the future, we plan to expand this study\u2019s aims and collect data to observe the effect of textual elements also extracted from the educational technologies, in order to analyze negative stereotypes contained in the textual content. Furthermore, future work is intended to improve the analysis with respect to age groups, considering the preference scale used in this study. Additionally, we intend to expand the dataset generated in this study to build artificial intelligence models capable of predicting male and female color bias."} +{"text": "Salmonella enterica serovar Typhi is the causative bacterial agent of typhoid fever. Environmental surveillance of wastewater and wastewater-impacted surface waters has proven effective in monitoring various pathogens and has recently been applied to Salmonella Typhi. This study evaluated eight sample collection and concentration methods with 12 variations currently being developed and used for Salmonella Typhi surveillance globally to better understand the performance of each method based on its ability to detect Salmonella Typhi and its feasibility. Salmonella Typhi strains Ty21a and Ty2 were seeded into influent wastewater at known concentrations to evaluate the following methods: grab sampling using electropositive filters, centrifugation, direct enrichment, or membrane filtration, and trap sampling using Moore swabs. Concentrated samples underwent nucleic acid extraction and were detected and/or quantified via quantitative polymerase chain reaction (qPCR). Results suggest that all methods tested can be successful at concentrating Salmonella Typhi for subsequent detection by qPCR, although each method has its own strengths and weaknesses, including the Salmonella Typhi concentration it is best suited for, with a range of positive detections observed as low as 0.1\u20130.001 colony-forming units (CFU) Ty21a/mL and 0.01 CFU Ty2/mL. These factors should be considered when identifying a method for environmental surveillance, and the choice will greatly depend on the use case planned. Typhoid fever is caused by Salmonella enterica serovar Typhi (Salmonella Typhi).
Humans are the only known natural host and reservoir for Salmonella Typhi, which is spread fecal-orally through water, food, or objects contaminated with feces of an infected individual.,,,The WHO estimates the annual global death toll from typhoid fever to be between 128,000 and 161,000 people.Salmonella Typhi is shed through feces, the organism is expected to be in wastewater or wastewater-impacted surface waters of locations with outbreaks or endemic transmission.,Salmonella Typhi may inform on disease burden in the population and help identify typhoid hot spots. Environmental surveillance has been shown to support the reduction of diseases and prevention of outbreaks of various enteric pathogens.\u2013Given the challenges with conventional approaches to typhoid surveillance, novel strategies are necessary. One strategy that has proven effective in monitoring pathogens is environmental surveillance (ES). Environmental surveillance is the collection of soil, water, air, or other environmental samples and analysis for pathogens. Because Salmonella Typhi in water sources; this study was focused on wastewater.\u2013,,\u2013,\u2013,,Salmonella Typhi have historically used the Moore swab method.,,\u2013Salmonella Typhi, recovery has been inconsistent and culturing is challenging; therefore, qPCR is increasingly used for detection of Salmonella Typhi in wastewater.,,Salmonella Typhi, whereas culture confirms the viability of an organism and permits comparative genomic analysis.Multiple collection and concentration methods have recently been developed and applied globally for downstream analysis to detect Salmonella Typhi from environmental wastewater and wastewater-impacted surface water samples, none would be uniformly appropriate as a stand-alone method for all study designs, sampling locations, and wastewater matrices. The aim of this study was to evaluate various sample collection and concentration methods currently being developed and used for Salmonella Typhi surveillance globally to better understand the performance of each method based on its ability to detect Salmonella Typhi and its feasibility. Eight methods with 12 variations were evaluated in total for the detection of Salmonella Typhi in a wastewater matrix. Additional ES methods for Salmonella Typhi, including dead-end ultrafiltration and hollow fiber ultrafiltration, were not included in this study because of time and resource constraints. The methods evaluated consisted of grab sampling using electropositive filters, centrifugation, direct enrichment, or membrane filtration and trap sampling using Moore swabs. Concentrated samples underwent nucleic acid extraction and were analyzed via qPCR for Salmonella Typhi. The detection of Salmonella Typhi, each method\u2019s feasibility , and potential use cases were also evaluated.Of the methods used for sampling and detection of Salmonella Typhi were used as positive controls in this study, Ty21a and Ty2. Salmonella Typhi strain Ty21a is used in the oral, live attenuated typhoid vaccine and is negative for the Vi antigen. Salmonella Typhi strain Ty2 is a well-characterized, reference strain that is positive for the Vi antigen. To confirm the presence of the Vi antigen in Ty2 prior to experiments, a Vi antigen agglutination test was performed using Difco\u2122 Salmonella Vi Antiserum . Salmonella enterica serovar Typhimurium was used as a negative control in the agglutination test. Ty21a , Ty2, and Salmonella Typhimurium were obtained from Dr. Stephen Libby (University of Washington). 
For each strain, 10 \u00b5L antiserum and 10 \u00b5L overnight culture were combined on a glass slide and examined for agglutination after 15 minutes. Agglutination indicated the presence of the Vi antigen in Ty2.Two commonly used strains of Throughout this work, Ty21a was grown using LB-Miller broth , and Ty2 was grown in the dark using LB-Miller broth with a supplemental aromatic amino acid mix and 50 ng/mL ferrioxamine E . The aromatic amino acid mix was prepared in a 100\u00d7 stock consisting of L-Phenylalanine (4 mg/mL) , L-Tryptophan (4 mg/mL) , 2,3-dihydroxybenzoic acid (1 mg/mL) (TCI America), and para-aminobenzoic acid (1 mg/mL) , which were dissolved in deionized water and filter sterilized. To improve our understanding of Ty21a and Ty2 growth and therefore ensure that experiments were seeded during exponential growth, growth curves were determined for Ty21a and Ty2 and measured via optical density at 600 nm and spot plating of 100 \u00b5L of relevant dilutionsSupplemental Figure 1). Methods tested at the same concentration level used primarily the same initial wastewater matrix with replicates of three or six to enable comparison between the methods. The seeded concentrations were assessed in parallel for each experiment via spread plating of 100 \u00b5L of relevant dilutions on LB-Miller agar (Ty21a) or LB-Miller agar with a supplemental aromatic amino acid mix and 50 ng/mL ferrioxamine E (Ty2). The seeded wastewater was thoroughly mixed and then distributed using a peristaltic pump while continuously shaken for processing by 1) filter cartridge, 2) differential centrifugation, 3) grab enrichment, 4) membrane filtration, and 5) Moore swab methods during the dry season and can process more than 300 mgd during the rainy season,,,Two variations of the 2-inch filter cartridge method were tested, mentioned hereafter as FC1-D and FC2-D. The main differences between the two methods were the input for DNA extraction and the resulting difference in effective volume assayed , followed by transfer of the supernatant centrifuging again . Additional details are provided in the Supplemental Information.Three versions of differential centrifugation methods were used: DC-D\u201350 mL, DC-SF\u201350 mL, and DC-D\u20131 L. All methods involved centrifugation for 1 minute at 1,000 \u00d7For grab enrichment , 20 mL of seeded wastewater was added to 180 mL of Universal Pre-Enrichment broth and incubated at 37\u00b0C (24 hours). After incubation, 20 mL of the enriched sample was membrane filtered through one mixed cellulose ester (MCE) filter . The membrane filter was cut into 6\u201310 pieces using sterile scissors, placed in a 2-mL screw top tube, and stored at \u221220\u00b0C prior to DNA extraction.Multiple variations of vacuum membrane filtration were tested: MF1-D, MF1-OB, MF1-SC, and MF2-SF . MF1 metTwo Moore swab methods were tested, with one enriched using Selenite F broth (SF) and another using UPE broth .DNA extraction was performed on the samples using the QIAamp PowerFecal Pro DNA Kit according to manufacturer\u2019s instructions, with the following modifications. The input was pelleted 1-mL aliquots of the samples , pelleted secondary concentrates (FC2-D), or sliced membrane filters (GE-UPE and MS2-UPE). 
The DNA was eluted in 60 \u00b5L and aliquoted into two 30-\u00b5L volumes, or eluted in 120 \u00b5L and aliquoted into three 40-\u00b5L volumes, and stored at \u221220\u00b0C prior to qPCR. Samples were analyzed for Ty21a and Ty2 via a qPCR assay targeting the staG gene commonly used for Salmonella Typhi detection in human clinical samples. Standard-curve R2 values were > 0.96. The input for the standard curves was prepared by centrifuging a 1-mL volume of the Ty21a or Ty2 overnight culture harvested during exponential growth, removing the supernatant, and performing DNA extraction as described above. All samples and controls were tested in duplicate or triplicate. Samples positive for Salmonella Typhi were defined as those that amplified with a Ct of 40 or lower in one or two of the two technical replicates with an appropriately shaped curve. Samples with a Ct > 40 were assumed to be negative because of the potential for spurious artifacts to interfere with detection. The limit of detection (LOD) was determined to be a Ct of 37 for both Ty21a and Ty2, defined as the level at which 95% of samples were positive for this assay on the qPCR instrument used.

Methods were primarily evaluated for their rate of Salmonella Typhi positivity. This was determined irrespective of the application of enrichment steps and the sample volume processed, as larger sample volumes do not inherently yield greater positivity. Methods were not able to be evaluated for recovery efficiency because of the use of enrichment steps in several methods and the use of Moore swabs (which collect an unknown volume). However, the effective volume assayed was determined for each method. Writing VP for the sample volume processed, VCf for the final concentrate volume before DNA extraction, VD0 for the volume entering DNA extraction, VDf for the final concentrate volume after DNA extraction, and VPCR0 for the volume entering the PCR reaction, the concentration factor is CF = (VP/VCf) \u00d7 (VD0/VDf), and the effective volume assayed is Veff = CF \u00d7 VPCR0.

The feasibility of these methods depends on a variety of factors such as timing, volumes concentrated, supplies and equipment required, and safety (Tables 1 and 2).

Eight methods with 11 variations in total were tested for their ability to concentrate wastewater seeded with varying concentrations of Ty21a for detection via qPCR. When seeded at high concentrations, all methods consistently detected Ty21a (100%). With seeded concentrations of 10,000 and 100 CFU/mL, all samples yielded Ct values < 40, with less than a 1-log variation in Ct values between replicates for a majority of sample types. In general, as the concentration of Ty21a seeded in the samples decreased, the Ct value increased linearly until the lowest Ty21a concentrations were tested, which had Ct values similar to the next lowest concentration (Supplemental Figure 2). Exceptions to this included MF1-OB, which had very low Ct values at a concentration of 0.01 CFU/mL, and MS1-SF and MS-UPE, which had fairly consistent Ct values at all concentrations tested, likely because of the enrichment step used.

Five methods were further tested with Ty2: FC2-D, DC-D\u20131 L, MF1-D, MS1-SF, and MS2-UPE. When seeding 0.1 CFU Ty2/mL, four methods yielded high detection rates (> 89%): FC2-D, DC-D\u20131 L, MS1-SF, and MS2-UPE. This study examined the detection and percent positivity of Ty21a by 11 ES concentration methods and Ty2 by five ES concentration methods.
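Returning to the volume definitions above, a small worked example may clarify how the concentration factor and effective volume interact; the numeric values below are hypothetical illustrations, not the study's actual processing volumes.

    def concentration_factor(v_processed, v_concentrate, v_into_extraction, v_eluate):
        # CF = (VP / VCf) * (VD0 / VDf), per the variable definitions in the text.
        return (v_processed / v_concentrate) * (v_into_extraction / v_eluate)

    def effective_volume(v_processed, v_concentrate, v_into_extraction, v_eluate, v_into_pcr):
        # Veff = CF * VPCR0: original-sample volume effectively assayed per qPCR
        # reaction (all volumes in the same units, here mL).
        return concentration_factor(v_processed, v_concentrate,
                                    v_into_extraction, v_eluate) * v_into_pcr

    # Hypothetical example: 1 L concentrated to 50 mL; 1 mL extracted and eluted
    # in 0.060 mL; 0.005 mL (5 uL) of eluate added to the qPCR reaction.
    print(effective_volume(1000.0, 50.0, 1.0, 0.060, 0.005))  # -> ~1.67 mL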
Methods were not optimized or evaluated for recovery efficiency, but were rather chosen based on percent positivity and feasibility. As anticipated, when the concentration seeded into the wastewater decreased, the positive detection rate generally decreased. Exceptions to this were seen with decreased recovery of Ty21a at an intermediate concentration in FC1-D, FC2-D, and MS1-SF samples, which could be due to variability between experiments, as the wastewater matrix was collected at different times for each experiment, and variability in the Ty21a strain, as it was attenuated. Typically, methods that assayed larger volumes of the initial sample generally yielded a higher positive detection rate at low-seeded concentrations than did methods that assayed smaller volumes of seeded wastewater. A high percentage of recovery was measured at low Ty21a seeding concentrations for FC1-D and FC2-D methods, which assayed the largest volume of the initial sample yielding the highest rate of positive detection, followed by FC2-D (123 mL) and MF1-D (2 mL). Finally, where the same assay was used, Ty21a was detected at lower seeded concentrations than Ty2. This could be due to differences in assay sensitivity to Ty21a and Ty2, variability in experiments or wastewater matrix used, differences in the organisms captured by these methods, or differences in the stability of these organisms in the wastewater matrix throughout processing.Method selection depends on field logistics, laboratory constraints, project design, and budgetary considerations; an appropriate ES method will operate within these confines while maintaining effective performance. Local field conditions and infrastructure impact appropriate surveillance sites and therefore the appropriate matrix. These matrices will vary by available sample volume, total solids, and solid characteristics , which in turn will inform appropriate sampling and concentration method selection. For example, matrices with high solids content may be a challenge for filter cartridge and membrane filtration methods because of filter clogging, whereas methods such as GE-UPE, differential centrifugation, and Moore swabs are able to process samples with high solids content. Filter cartridge methods rely on the adsorption of negatively charged bacteria onto positively charged filters with a pore size of 2\u20133 \u00b5m; thus, these methods may be less applicable in turbid waters as less volume may be able to be filtered, which will affect recovery. Additionally, sampling sites with very low flows may not be applicable for methods that process large volumes, such as filter cartridge, DC-D\u20131 L, or membrane filtration.Site access and field-worker safety are critical considerations for sample collection and in-field processing. If sample shipment between the collection site and processing laboratory is required, then this may impact the sample volume , the need to perform primary concentration at the field site , and/or the ability to maintain sample integrity over the period of transport. It is important to note that the Moore swabs methods tested in this evaluation were not representative of what happens in their intended use case, as the experimental Moore swab was held in a recirculating system with multiple exposures to the same seeded wastewater rather than placed in a drainage with exposure only to new wastewater throughout the holding period.Available laboratory equipment and supplies, physical space, and personnel time also impact the ability to conduct methods. 
For example, although the filter cartridge methods use a commercialized kit, making procurement simple, they also require a large centrifuge and shaking table. Centrifuge capacity is also a challenge for DC-D\u20131 L, particularly if the centrifuge does not contain a cooling mechanism or if multiple samples must be processed in 1 day. Access to a house vacuum or strong vacuum pump could also be a challenge preventing use of GE-UPE, membrane filtration methods, and MS2-UPE. Laboratorian safety is critical when choosing an appropriate ES method. Both selenite cystine broth (SC) (used for MF1-OB and MF1-SC) and SF broths must be prepared and used under a biosafety cabinet capable of chemical protection because of their acute toxicity and teratogenicity. Selenite-based enrichment broths cannot be autoclaved and must be disposed of as hazardous chemical waste because of their aquatic toxicity. Therefore, these chemical hazards are important considerations in settings where selenite-based compounds cannot be contained or disposed of appropriately.Finally, study design, time to results, and associated budgetary considerations necessarily impact method selection. In cases where results are needed rapidly, filter cartridge methods, DC-D\u201350 mL, DC-D\u20131 L, or MF1-D may be ideal, as results can be obtained in < 24 hours as no overnight incubation steps are required. This combination of field, laboratory, study, and performance considerations results in a complex decision tree where one outcome is not appropriate for all applications. Thus, it is necessary to have the flexibility to select an appropriate method that meets all or a majority of these needs. Here, different use case scenarios are outlined with potentially appropriate methods .Environmental surveillance data can be used by decision makers for selecting high-burden locations to implement TCV campaigns using either qualitative or quantitative methods. If spread of typhoid is anticipated to be high within a population, then effective volume assayed and sensitivity will be less critical to delineate true positive results, and results may be quantifiable. However, in lower prevalence areas, an understanding of the lower limit of detection is important to determine fit for purpose. Methods that involve an enrichment step could be used with a substantial increase in the sample number to improve understanding of the generated results. For this use case, all methods would be appropriate .Salmonella Typhi would be anticipated because of vaccine efficacy; therefore, an ES method with high sensitivity and a high effective volume assayed would be appropriate. Additionally, a quantitative method could best inform on Salmonella Typhi presence before and after vaccine distribution. Although a qualitative method could be used, this would require a substantial increase in sample number (via Moore swab) or splitting of a sample and enriching at multiple input volumes to see an effect via a most probable number approach. Appropriate methods may include filter cartridges, high-volume differential centrifugation without enrichment, and membrane filtration without enrichment would be appropriate and might increase the likelihood of detection. Moore swabs allow a large volume to pass through the swab over time, thus increasing the potential for samples to collect the target organism. 
Enrichment increases the copy numbers of the target organism prior to assay via qPCR or culturing, subsequently increasing the potential to capture the organism in the volume assayed as well as informing on the viability of enriched bacteria. Potential methods include filter cartridges, differential centrifugation, membrane filtration, and Moore swabs , the number of replicates, and challenges with homogenizing a low concentration of Ty2 throughout a sample. Additionally, we expected to see large variation between replicates at low concentrations that approached the assay limit of detection.Salmonella Typhi. This assay was originally designed for clinical samples and was recently applied for detection in environmental samples. However, wastewater contains a variety of organisms as well as both known and unknown DNA in the samples, which could interfere with this assay and result in cross reactivity.Salmonella Typhi using this assay may also vary, as some methods concentrate larger volumes, resulting in greater concentration of potential qPCR inhibitors. To minimize these effects on reported results, undiluted and 10-fold sample dilutions were assayed to screen for evidence of inhibitors. Future studies should examine improved qPCR assays for ES samples specifically as well as digital qPCR assays. Archived samples from this study could be retested with the optimized method.This study used the Nga et\u00a0al.Salmonella Typhi ES methods, though some other ES methods were not included. For example, additional ES methods such as dead-end ultrafiltration and hollow fiber ultrafiltration were not included in this study because of time and resource constraints. Future studies should expand on the types of methods evaluated.Finally, this study evaluated multiple Salmonella Typhi Ty21a and Ty2. Results suggest that all methods tested can be successful at concentrating Salmonella Typhi for subsequent detection by qPCR, although each method has its own strengths and weaknesses, including the Salmonella Typhi concentrations for which they are applicable. These factors should be considered when identifying a method for Salmonella Typhi ES and will greatly depend on the use case planned. Future studies could benefit from examining additional ES methods not used here and conducting side-by-side evaluations with field samples.This study evaluated eight methods with 12 formats for their applicability to conduct ES using Supplemental materials"} +{"text": "A receptors in vivo in the brain using positron emission tomography (PET) imaging and 11C-flumazenil. Diazepam abolished tramadol-induced seizures, in contrast to naloxone, cyproheptadine and fexofenadine pretreatments. Despite seizure abolishment, diazepam significantly enhanced tramadol-induced increase in the brain serotonin (p < 0.01), histamine (p < 0.01), dopamine (p < 0.05) and norepinephrine (p < 0.05). No displacement of 11C-flumazenil brain kinetics was observed following tramadol administration in contrast to diazepam, suggesting that the observed interaction was not related to a competitive mechanism between tramadol and flumazenil at the benzodiazepine-binding site. Our findings do not support the involvement of serotoninergic, histaminergic, dopaminergic, norepinephrine or opioidergic pathways in tramadol-induced seizures in overdose, but they strongly suggest a tramadol-induced allosteric change of the benzodiazepine-binding site of GABAA receptors. 
Management of tramadol-poisoned patients should take into account that tramadol-induced seizures are mainly related to a GABAergic pathway. Tramadol overdose is frequently associated with the onset of seizures, usually considered as serotonin syndrome manifestations. Recently, the serotoninergic mechanism of tramadol-attributed seizures has been questioned. This study\u2019s aim was to identify the mechanisms involved in tramadol-induced seizures in overdose in rats. The investigations included (1) the effects of specific pretreatments on tramadol-induced seizure onset and brain monoamine concentrations, and (2) the interaction between tramadol and \u03b3-aminobutyric acid (GABA) receptors. Opioid overdose is the first cause of drug-induced poisonings and fatalities in the US. Tramadol poisoning causes coma (~30%), seizures (~15%), agitation (~10%) and respiratory depression (~5%). Mechanisms of tramadol-induced seizures remain poorly understood. Seizures are usually included in the SS and related to serotonin receptor overstimulation, mainly the 1A and 2A receptor subtypes, by increasing synaptic serotonin concentration. A rat study was designed to clarify the mechanism of tramadol-induced seizures. Various pretreatments were used to investigate the impact of the hypothesized neurotransmission systems on the onset of tramadol-induced seizures and the brain monoamine content. Sedation was significantly increased (p < 0.05) and temperature decreased (p < 0.01) in tramadol-treated rats in comparison with the control. Tryptophan concentration was not significantly modified. Diazepam pretreatment significantly increased tramadol-induced effects on histamine, norepinephrine and dopamine concentrations as well as its effects on 5-HIAA, MHPG and HVA concentrations. Naloxone significantly increased tramadol-induced effects on histamine concentrations (p < 0.01) as well as its effects on MHPG and DOPAC concentrations (p < 0.05). Cyproheptadine significantly increased tramadol-induced effects on histamine concentration (p < 0.01). Fexofenadine significantly increased tramadol-induced effects on HVA and DOPAC concentrations. No pretreatment resulted in significant changes in the absence of tramadol-induced effects on tryptophan concentrations. Tramadol overdose was responsible for a significant increase in histamine, serotonin, norepinephrine and dopamine concentrations. No displacement of 11C-flumazenil brain kinetics was observed following i.v. 1 or 25 mg/kg tramadol injection after 30 min PET acquisition. In comparison, a marked displacement was observed after the injection of the reference ligand diazepam (1 mg/kg) administered in the same conditions. GABA is the main inhibitory neurotransmitter, present in ~30% of central synapses. The inhibition of GABAergic pathways was hypothesized to actively participate in tramadol-induced seizures. Based on an in vitro study on human recombinant neurotransmitter-gated ion channels, GABAA receptor function is inhibited by tramadol, but only at high concentrations (100 \u00b5M), possibly correlating with convulsion onset in vivo. Here, increasing doses of tramadol were not able to displace 11C-flumazenil binding from the brain, despite the decrease in BPND (=Bmax/KD) observed after tramadol administration. Several hypotheses may explain this decrease: a reduced receptor density or affinity has been reported in humans, for example in status epilepticus.
Isoflurane, which is discussed further below, can also modulate GABAA receptor binding. The ionotropic GABAA receptor is a ligand-gated ion channel, composed of five subunits forming the Cl- channel. Both activators and inhibitors modulate Cl- influx by allosteric interaction at the Cl- channel. Allosteric inhibition of GABAA receptors by high-dose tramadol may be hypothesized to explain the decrease in BPND. This hypothesis is consistent with the rapid onset of seizures, observed as soon as 5 min after tramadol injection. The multiplicity of binding sites on the GABAA receptor supports such an interaction, consistent with several proconvulsant antagonists that interact on various sites of the receptor, like bicuculline, or block the Cl- channel. Our findings suggest that tramadol interacts with the GABAA receptor at a different binding site from the benzodiazepine site and that allosteric modulation resulting from such an interaction explains seizure onset. The hypothesis of a tramadol/GABAA receptor interaction may, however, be insufficient to explain tramadol-induced seizures alone; the inhibition of glutamate decarboxylase-mediated GABA synthesis, for instance, has been reported to explain allylglycine-related seizures. One of our study strengths was to investigate in vivo tramadol toxicity at elevated doses. Several limitations exist. The chosen doses of pretreatments were based on the literature data rather than on our own experiments and conditions, questioning whether the optimal doses of these agents were used. In this study, as usually recommended, six rats per group were used, thus presuming that any difference that would have required more rats to be established is not clinically pertinent. Regarding the brain monoamine study, sampling was limited to the peak time of seizures (~20\u201330 min post-tramadol injection), although it should be acknowledged that a series of time points correlated with the measured clinical parameters would have been preferable. Another issue consisted in the absence of measurement of seizing activity during PET acquisition and brain sampling for monoamine measurement. It is also important to clarify the possible impacts of anesthesia with isoflurane on our PET findings. In the rat, anesthesia with isoflurane was shown to enhance brain 11C-flumazenil binding. Here, possible changes in 11C-flumazenil binding induced by anesthesia with isoflurane were normalized to the 11C-flumazenil binding in the pons; this approach is valid although the pons is a \u201cpseudo-reference region\u201d with limited specific 11C-flumazenil binding in the rat. Anesthesia with isoflurane is also able to decrease monoamine release in the rat brain, including serotonin, histamine, dopamine and norepinephrine. Male Sprague-Dawley rats weighing 250\u2013350 g at the time of experimentation were used, housed for 7 days before experimentation in an environment maintained at 21 \u00b1 0.5 \u00b0C with controlled humidity and light-dark cycle. Food and tap water were provided ad libitum. Tramadol hydrochloride and naloxone were diluted in sterile water to obtain solutions of 44 mg/mL and 0.4 mg/mL, respectively. Diazepam (Valium\u00ae) was diluted in 4% Tween in 0.9% NaCl to obtain a solution of 2 mg/mL. Cyproheptadine and fexofenadine were diluted in saline to obtain solutions of 5.3 and 15 mg/mL, respectively. 11C-flumazenil radiosynthesis and production for intravenous (i.v.) injection were performed as previously described. Catheters were inserted under ketamine (70 mg/kg) and xylazine (10 mg/kg) anesthesia, tunneled subcutaneously, and fixed at the back of the neck.
Heparinized saline was injected into the catheter to avoid thrombosis and catheter obstruction. Rats were then returned to their individual cages for 7 days, allowing anesthesia washout and complete recovery. On the day of experiment and before drug administration, the catheter was exteriorized, purged, and its permeability checked.Temperature was measured using intraperitoneal (i.p.) implanted temperature transmitters . Sedation level based on a 4-stage scale from 0 (awake) to 3 (coma) was assessed . At stagNeurotransmitter concentrations were measured in the frontal cortex, the main area of serotoninergic projections that exhibited the prominent number of spike-wave discharges after 10 to 40 mg/kg tramadol administration . Followi\u00ae microPET-CT scanner [2, respectively, to insert a catheter in the caudal vein for 11C-flumazenil injection. Anesthetized rats were placed into the micro-PET-CT and a brain CT-scan was first performed. After CT completion, 90- or 60-min dynamic acquisitions were performed starting at the time of 11C-flumazenil intravenous (i.v.) injection, to study (i)\u2014the impact of tramadol on the brain binding of 11C-flumazenil and (ii)\u2014the mechanisms involved in tramadol-induced changes in 11C-flumazenil brain binding, respectively.PET imaging acquisition was performed using an InveonGermany) . Rat ane\u22123) versus time (min). The binding potential (NDBP) of 11C-flumazenil in selected brain regions was estimated using the simplified reference tissue model (SRTM) and the pons as the reference region [11C-flumazenil binding (NDBP) to GABAA receptor .PET data were reconstructed using the FORE + OSEM2D algorithm, including normalization, attenuation, scatter and random corrections. PET images were co-registered using Pmod software to a brain magnetic resonance imaging (MRI) template published by Schiffer et al. to receive 75 mg/kg i.p. tramadol (Tramadol group) or 1.7 mL of sterile water i.p. (Vehicle group), 15 min before 11C-flumazenil injection followed by 60 min PET acquisition.Interaction of tramadol with the GABAergic system was assessed using PET imaging and 11C-flumazenil on its binding site [11C-flumazenil alone (Baseline group) and in addition, 1 mg/kg tramadol (Tramadol-1 group), 25 mg/kg tramadol (Tramadol-25 group) or 1 mg/kg diazepam (Diazepam group), injected i.v. during PET acquisition 30 min after 11C-flumazenil injection.Displacement experiments were performed to address the direct competition of tramadol with ing site . To thatNDBP in the two studied groups were compared using Mann\u2013Whitney U-tests. All tests were performed using Prism version 6.0 . p-values < 0.05 were considered as significant.The results are expressed as median and quartiles (Study 1) and mean \u00b1 SEM (Study 2). To permit the simultaneous analysis of the effects of time and treatments on sedation, temperature and monoamine concentrations (Study 1), the area under the curve (AUC) from T0 to the completion of measurement (120 min) was calculated for each animal and each studied parameter, using the trapezoid method. Thereafter, for sedation and temperature, the AUCs were compared using Kruskal-Wallis tests for comparisons between five groups. For monoamine concentrations, we compared the AUCs using Mann\u2013Whitney U-tests for comparisons two-by-two. Regarding the effects of treatments on seizures, comparisons were performed using two-way analysis of variance followed by multiple comparison tests using Bonferroni\u2019s correction. 
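As a concrete illustration of the AUC summary described above, the R sketch below computes a trapezoid-rule AUC for one animal's 0\u2013120 min time course. The study itself used Prism for testing; R is used here only because it is the analysis language named elsewhere in this collection, and the temperature readings are purely illustrative.

```r
# Trapezoid-rule area under the curve for one animal and one parameter
# (e.g., temperature or sedation score) sampled from T0 to 120 min.
auc_trapezoid <- function(time, y) {
  stopifnot(length(time) == length(y), !is.unsorted(time))
  sum(diff(time) * (head(y, -1) + tail(y, -1)) / 2)
}

# Illustrative temperature readings (deg C) at 0, 30, 60, 90 and 120 min:
auc_trapezoid(time = c(0, 30, 60, 90, 120),
              y    = c(37.5, 36.2, 35.8, 36.4, 37.0))
```

The per-animal AUCs computed this way are the quantities that would then be compared across groups with the Kruskal\u2013Wallis or Mann\u2013Whitney tests described above.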
The present study demonstrated that tramadol-induced seizures are only prevented by diazepam, a positive allosteric modulator of the GABAA receptor. The serotoninergic, histaminergic, opioidergic, dopaminergic, and norepinephrinergic pathways seem unlikely to be involved. Our data highly suggest that tramadol-induced seizures result from tramadol\u2019s interaction with the GABAA receptor involving a noncompetitive mechanism at the benzodiazepine-binding site. Our findings suggest that a strategy primarily relying on benzodiazepines may be appropriate in the management of tramadol-induced seizures."}
+{"text": "Hypertension poses a significant burden in the general population, being responsible for increasing cardiovascular morbidity and mortality, leading to adverse outcomes. Moreover, the association of hypertension with dyslipidaemia, obesity, and insulin resistance, also known as metabolic syndrome, further increases the overall cardiovascular risk of an individual. The complex pathophysiological overlap between the components of the metabolic syndrome may in part explain how novel antidiabetic drugs express pleiotropic effects. Taking into consideration that a significant proportion of patients do not achieve target blood pressure values or glucose levels, more efforts need to be undertaken to increase awareness among patients and physicians. Novel drugs, such as incretin-based therapies and renal glucose reuptake inhibitors, show promising results in decreasing cardiovascular events in patients with metabolic syndrome. The effects of sodium-glucose co-transporter-2 inhibitors are expressed at different levels, including renoprotection through glucosuria, natriuresis and decreased intraglomerular pressure, metabolic effects such as enhanced insulin sensitivity, cardiac protection through decreased myocardial oxidative stress and, to a lesser extent, decreased blood pressure values. These pleiotropic effects are also observed after treatment with glucagon-like peptide-1 receptor agonists, positively influencing the cardiovascular outcomes of patients with metabolic syndrome. The initial combination of the two classes may be the best choice in patients with type 2 diabetes mellitus and multiple cardiovascular risk factors because of their complementary mechanisms of action. In addition, the novel mineralocorticoid receptor antagonists show significant cardio-renal benefits, as well as anti-inflammatory and anti-fibrotic effects. Overall, the key to better control of hypertension in patients with metabolic syndrome is to consider targeting multiple pathogenic mechanisms, using a combination of the different therapeutic agents, as well as drastic lifestyle changes. This article will briefly summarize the association of hypertension with metabolic syndrome, as well as take into account the influence of antidiabetic drugs on blood pressure control. Hypertension, which has a significant prevalence in the general population, is one of the main constituents of metabolic syndrome. Hypertension is strongly associated with metabolic syndrome through a pathophysiology that involves obesity.
Nevertheless, it represents the major risk factor responsible for elevating cardiovascular mortality and morbidity .Hypertension is defined as repeated elevated office systolic blood pressure (SBP) values over 140 mmHg and/or diastolic BP (DBP) over 90 mmHg or average home BP over 135/85 mmHg ,3.Metabolic syndrome (MetS) has serious outcomes regarding the individual\u2019s health, with increasing prevalence nowadays and a significant impact on healthcare systems. Its definition varied over time. MetS consists of several conditions, such as hypertension, elevated fasting glucose (over 100 mg/dL) or type 2 diabetes mellitus (T2DM), decreased high-density lipoprotein cholesterol levels (less than 40 mg/dL in men or 50 mg/dL in women), high triglycerides concentrations (over 150 mg/dL) and waist circumference over 40 inches (men) or 35 inches (women) ..77].In another systematic review and meta-analysis, treatment with SGLT2-i was related to significant reductions in the daytime and nighttime systolic and diastolic BP .p = 0.001) vs. control ..93].p < 0.001) .,99.98,99The effect of SGLT2-i on blood pressure remains unchanged regardless of the dose of SGLT2-i , indepenSide effects of SGLT2-i include mycotic genital infections , urinary infections, volume depletion, arterial hypotension, and dizziness , Fournier\u2019s gangrene of the genital organs; distal lower limb amputations and fractures (for canagliflozin) ,74,77,80A rare complication, especially in patients with severely impaired insulin secretory function, is diabetic ketoacidosis ,74,77,80The kidney, metabolic, and cardiovascular effects explain possible mechanisms involved in lowering blood pressure using SGLT2-i. Kidney effects include the rise of glucosuria, natriuresis, uricosuria and diuresis, reduced intraglomerular pressure, and hyperfiltration.Metabolic effects can be explained by increasing insulin secretion and glucagon to insulin ratio and reducing glucose toxicity; on lipids, metabolism reducing visceral adiposity, epicardiac fat and inflammation; consecutive increased muscle FFA uptake. Heart effects, incompletely elucidated, include reduced cardiac pre- and afterload, but also improvement in cardiac efficacity, reduced myocardial oxidative stress and inflammation consecutive with decreased epicardiac fat.At the level of the blood vessel, improved endothelial function, decreased oxidative stress, arterial stiffness and peripheral vascular resistance.GLP-1 (glucagon-like peptide-1) is an incretin hormone produced by differential posttranslational processing of the proglucagon protein by the enteroendocrine L cells\u201d .GLP-1 receptor agonists are drugs administered as subcutaneous injections . These can be with short-acting and long-acting agents . The glycemic effects of GLP-1 are mainly mediated by binding to its selective heptahelical G-protein\u2013coupled receptor GLP-1R and the formation of cAMP via Gs signaling .Glucose control is attained through several mechanisms of action: augmentation of glucose-dependent insulin secretion, suppressed glucagon secretion, reduced appetite, slowed gastric emptying, and concomitant reduction of food intake.Moreover, GLP-1RAs also exert beneficial roles in multiple organ systems in which the GLP-1 receptors exist, including the cardiovascular system. 
In clinical trials, the identified effect of GLP-1RAs on BP differed from a slight increase, as suggested in a study with dulaglutide , to neup < 0.05) and increases SBP by 9.8 mmHg [The administration of GLP-1 in intravenous infusion does not cause a decrease in BP . Contrar < 0.01) .In chronic administration, a moderate decrease in SBP was observed . In a meThe most important BBP lowering effect was found with semaglutide 1 mg administered weekly vs. exenatide ten mcg with daily administration (\u22124.6 mmHg vs. \u22122.2 mmHg) without significant difference in DBP .p \u2264 0.001) at 16 wk and \u22122.7 mmHg vs. placebo and 1.90\u2005bpm vs. active control ,114.Simultaneously with the decrease in BP values, an increase in ventricular frequency was also reported . It may In a systematic review and network meta-analysis, including 424 trials (276 336 patients) compared with placebo, the weighted mean differences (WMD) for GLP-1RAs on SBP levels varied from 2.93 to 2.34 mmHg \u2014with exeData from several multicenter, long-term cardiovascular outcome trials (CVOTs) with GLP-1 RAs indicated cardiovascular benefit in patients with T2DM with CVD or at very high/high risk . In addiGLP-1 RAs act at the level of the myocardium through specific receptors and inhibit cardiomyocyte apoptosis . In expeGLP-1 RAs increase natriuresis and diuresis reducing BP , in partAt the CNS level, GLP-1 RAs increase sympathetic activity and decrease vagal activity .p = 0.003) without significant changes for renin or aldosterone or other components of the renin-angiotensin-aldosterone system [p = 0.02), but there were no effects on other renin-angiotensin system components, atrial natriuretic peptides (ANPs), metanephrine or excretion of catecholamines [The renin-angiotensin-aldosterone system (RAAS) is modulated at the renal level . In twele system . Liragluolamines .Dulaglutide was not associated with significant changes in serum aldosterone, plasma renin activity, plasma metanephrines, normetanephrine, or N-terminal pro-brain natriuretic peptide .In addition, GLP1-RAs may improve endothelial cell function, have anti-proliferative effects on smooth muscle cells, limit activation and recruitment of macrophages in atherosclerotic plaques, decrease inflammatory cytokines, and increase endogenous antioxidant defences .Weight loss is important for treating hypertension, being associated with BP reduction . In a stAt the kidney level, GLP-1 RAs increase natriuresis and diuresis, lowering BP and decreasing renal inflammation and oxidative stress. On the blood vessel, decrease vascular resistance partially by raising nitric oxide production at the endothelium level and reducing endothelial dysfunction by inhibiting oxidative stress and inflammation. GLP-1 RAs at the level of the myocardium inhibit cardiomyocyte apoptosis, improve myocardial function and cardiac output, improve myocardial glucose utilization, and reduce inflammation.The combination of GLP-1 RAs and SGLT2-is is of interest because of complementary mechanisms of action: GLP-1 RAs enrich insulin secretion, slow gastric emptying, and lower body weight, and SGLT2-i facilitate urinary glucose excretion and decrease body weight. Agents from both classes have been demonstrated to reduce CV risk. Thus, combined treatment including a GLP-1 RAs and an SGLT2-1, with or without metformin, should be the best choice to start therapy for T2DM with CVRFs because, in addition to reducing glucose levels, body fat will also be decreased as well BP and cardiovascular risk. 
With these modifications, there is the possibility for a decline in cardiac outcomes, cardiac and total mortality, and a slowdown of the decrease in renal function.In the SURMOUNT-1 trial , a signiThus, SBP decreased from the initial value by 5.6, 8.8, and 6.2 mmHg in patients who received doses of 5, 10, and 15 mg of tirzepatide weekly, while in the control group, SBP increased by 1.8 mmHg. DBP values in the three treatment groups decreased on average by 1.5, 2.4, and 0.0 mmHg, while they increased by 0.5 mmHg in the placebo-controlled group. This clinically significant BP reduction observed in the first 24 weeks of treatment is superior to the BP-lowering effect observed in studies with GLP-1 Ras .The ventricular rate decreased on average by 1.8 bpm in the control group and increased by 0.3, 0.5, and 3.6 bpm, respectively, in the group treated with tirzepatide, similar to the effect of GLP-1 Ras .In this study, the curve of decrease in blood pressure values was steep in the first 24 weeks of treatment, so, later, it remained on the plateau.It should be mentioned, however, that the blood pressure values of the patients enrolled in the SURMOUNT-1 study were normal, one of the exclusion criteria being a BP value >160 mmHg .The hypotensive effect is expected to be more consistent in hypertensive patients.By acting on cardiovascular risk factors , tirzepatide can reduce cardiovascular risk .p < 0.001) and DBP [Few studies have analyzed the effects of DPP-4 inhibitors on BP in T2D. Some studies indicate that sitagliptin may decrease SBP ,141,142,< 0.001) .It is important to attain and maintain an optimal BP target (130/80 mmHg) to reduTreatment for hypertension in patients with diabetes should include any of the antihypertensive pharmacotherapy drug classes with demonstrated to reduce cardiovascular risk: angiotensin-converting (ACE) inhibitors, angiotensin receptor blockers (ARB), thiazide-like diuretics , or dihydropyridine calcium channel antagonists, and the mineralocorticoid receptor antagonists finerenon\u0103 ,148.Nontraditional BP-lowering agents such as SGLT2-i and GLP-1 ARs can be used, but monotherapy may be inadequate to control BP .In patients with resistant hypertension, the addition of a mineralocorticoid receptor antagonist (MRA) may be considered. Recent studies assume that sacubitril/valsartan could be used in the treatment of patients with resistant hypertension, with or without additional MRA therapy .Finerenone is a new, selective, nonsteroidal MR antagonist with a more selective activity than spironolactone and eplerenone. Finerenone blocks MR-mediated sodium reabsorption and mineralocorticoid receptor overactivation . The benThree extensive studies have been published on finerenone: FIDELIO-DKD , FIGARO-p = 0.001) among patients with predominantly stage 3\u20134 CKD with severely increased albuminuria and T2DM; a lower risk of cardiovascular event was observed in the finerenone group [In FIDELIO-DKD, finerenone showed a significant reduction in the primary kidney composite outcome, lowering the risk for CKD progression .The FIDELIO-DKD and FIGARO-DKD trials show that finerenone has a modest impact on SBP in patients with DKD. 
The mean SBP decline was \u22122.1 mmHg at 12 months in FIDELIO-DKD , and \u22122.2, normal serum potassium, and urinary albumin-to-creatinine ratio >30 mg/dL in addition to an ACE inhibitor an ARB at the maximum tolerated dose [Finerenone is recommended for patients with T2DM, an eGFR 25 mL/min/1.73 mted dose or for pted dose .In the FIDELITY study, at baseline, some of the included patients received treatment with SGLT2-i (6.7%) or GLP-1 RAs (7.2%). The cardiorenal benefits are maintained regardless of whether there is an SGLT2-i or GLP-1 RAs in the treatment .Esaxerenone, another novel non-steroidal MRA, was authorised to treat hypertension and diabetic CKD .In monotherapy, esaxerenone was associated with decreased BP during the study period \u221218.5/\u22128.8 mmHg; add-on to a RAS inhibitor, and the decline was \u221217.8/\u22128.1 mmHg .In the ESAX-DN study, in patients with T2DM with microalbuminuria, esaxerenone showed a raised probability of normalising a higher urinary albumin-to-creatinine ratio and declining the progression of albuminuria .In conclusion, there is much evidence from several CVOTs that indicate cardio-reno-metabolic benefits using an SGLT2-i and GLP-1 RAs in patients with metabolic syndrome at very high/high CV risk or with atherosclerotic cardiovascular diseases. SGLT2-i yielded a more marked BP decline (SBP/DBP \u20132.46/\u20131.46 mmHg) without heart rate differences . The BP-Given that MetS is a constellation of abnormalities such as abdominal obesity, dyslipidemia, hypertension, and hyperglycemia, its treatment includes medication capable of targeting each of these elements. Thus, metabolic diseases are treated with anti-obesity, lipid-lowering, anti-hypertensive and anti-diabetic drugs.Firstly, lifestyle changes are very important in patients with MetS. Losing weight can increase insulin sensitivity, reducing the risk of type 2 diabetes and can lower blood pressure. This can be achieved through diet and regular exercise. However, a potential solution for treating metabolic diseases would be the development of drugs with multiple actions. An ideal drug for MetS therapy involves decreased weight, blood pressure, inflammation, plasma lipids, and blood glucose levels.Secondly, better adherence, rethinking the initial antihypertensive pharmacotherapy, using the triple or quadruple fixed combination that includes small doses of different therapeutic agents, thus targeting multiple mechanisms involved in the pathogenesis of hypertension, is also important.The data of the latest studies have led to a paradigm shift regarding the management of cardiometabolic disorders. The patient-centred approach and new therapeutic classes improve the glycemic balance and reduce numerous cardiovascular risk factors. Early identification and treatment of cardiometabolic factors and conditions associated with metabolic syndrome will be related to a favourable impact on mortality and morbidity. The last years\u2019 research brings hope in obtaining better blood pressure control, in parallel with cardio-reno-metabolic protection."} +{"text": "Anti\u2010programmed death\u20101 (PD\u20101) immunotherapy has drastically improved survival for metastatic melanoma; however, 50% of patients have progression within 6\u00a0months despite treatment. 
In this study, we investigated host and tumor factors for metastatic melanoma patients treated with anti\u2010PD\u20101 immunotherapy. Patients treated with anti\u2010PD\u20101 immunotherapy between 2014 and 2017 were identified in Alberta, Canada. All patients had Stage IV melanoma. Patient characteristics, investigations, treatment, and clinical outcomes were obtained from electronic medical records. We identified 174 patients treated with anti\u2010PD\u20101 immunotherapy. At 37.1\u00a0months median follow\u2010up time, 135 (77.6%) individuals had died and 150 (86.2%) had progressed. Patients with an elevated lactate dehydrogenase (LDH) had a response rate of 21.0% versus 41.0% for those with a normal LDH (p\u00a0=\u00a00.017). Host factors associated with worse median progression\u2010free survival (mPFS) and median overall survival (mOS) included liver metastases, >3 sites of disease, elevated LDH, thrombocytosis, neutrophilia, anemia, lymphocytopenia, and an elevated neutrophil/lymphocyte ratio. Primary ulcerated tumors had a worse mOS of 11.8 versus 19.3\u2009months (p\u00a0=\u00a00.042). We identified four prognostic subgroups in advanced melanoma patients treated with anti\u2010PD\u20101 therapy: (1) normal LDH with <3 visceral sites, (2) normal LDH with \u22653 visceral sites, (3) LDH 1\u20102x the upper limit of normal (ULN), and (4) LDH \u22652x ULN. The mPFS for each group was 14.0, 6.5, 3.3, and 1.9\u00a0months, while the mOS for each group was 33.3, 15.7, 7.9, and 3.4\u00a0months. Our study reports that host factors measuring the general immune function, markers of systemic inflammation, and tumor burden and location are the most prognostic for survival. [Figure: Kaplan\u2013Meier curves of progression\u2010free survival of cutaneous and melanoma of unknown primary patients stratified by prognostic criteria. LDH, lactate dehydrogenase; mPFS, median progression\u2010free survival; ULN, upper limit of normal.] Treatment with v\u2010Raf murine sarcoma viral oncogene homolog B (BRAF) inhibitors and mitogen\u2010activated protein kinase (MEK) inhibitors can induce rapid tumor control with improved overall survival (OS) in BRAF mutant melanoma patients; however, the duration of response is often short lived. Immune\u2010checkpoint inhibitors have no direct anti\u2010cancer effect but rather achieve control by overcoming cancer immune evasion and restoring a host immune response against the tumor. 2.1 We retrospectively identified all adult individuals with metastatic melanoma treated with either nivolumab or pembrolizumab in the Province of Alberta, Canada between June 2014 and May 2017. Patients were identified using a provincial pharmacy database. The Health Research Ethics Board of Alberta Cancer Committee approved this study. 2.2 Pembrolizumab was dosed at 2\u00a0mg/kg intravenously every 3\u2009weeks, and nivolumab at 3\u00a0mg/kg intravenously every 2\u2009weeks. Treatment continued until progression, intolerable toxicity, patient decision, or clinical decision to stop treatment. Radiological response was typically assessed every 8\u201312\u2009weeks at the discretion of the treating medical oncologist, with either computed tomography (CT), positron emission tomography (PET)\u2010CT, or magnetic resonance imaging (MRI).
Objective responses were determined for each patient as complete response (CR), partial response (PR), stable disease (SD), or progressive disease (PD) using the Response Evaluation Criteria in Solid Tumors (RECIST) 1.1 criteria.2.3We extracted patient characteristics, investigations, imaging, and clinical outcomes from electronic medical records. We obtained all routinely available clinical and pathological parameters at baseline prior to treatment initiation. Only patients with measurable disease and histologically confirmed melanoma were included in our analysis. The American Joint Committee on Cancer (AJCC) 8th edition was used for staging.Our primary objective was to identify baseline host and tumor factors that were associated with OS from the initiation of anti\u2010PD\u20101 therapy. Secondary objectives were to identify host and tumor factors associated with ORR, and progression\u2010free survival (PFS). Given that molecular studies have demonstrated that MUP have cutaneous genetic signatures, they were included along with the cutaneous cohort for analysis investigating baseline prognostic factors.2.4\u03b1\u00a0=\u00a00.05) in the univariate analysis. Patients with missing data were excluded from multivariate analysis. Patients who did not die during the observation period were censored.Kaplan\u2013Meier estimates were used for estimating median PFS (mPFS) and median OS (mOS) times with 95% confidence intervals (CI). mOS and mPFS were defined as the shortest time the survival probability drops to 0.5 or below on Kaplan curves. Brookmeyer\u2013Crawley methods were used to estimate 95% CIs. We assessed the predictive and prognostic value for each individual parameter using univariate analysis. For univariate analysis, we used Cox proportional hazards model to predict hazard ratios (HRs) and 95% CIs for each factor. Multivariate models were constructed using Cox proportional hazards models by adjusting for factors that were identified as being significant . For Kaplan\u2013Meier analysis, we used survival (version 3.2.7) and suvminer (version 0.4.9) packages. We used R version 4.04 for our Cox proportional hazards models. All statistical analyses were based on 3We identified 174 patients with metastatic melanoma with measurable disease who received at least one dose of anti\u2010PD\u20101 immunotherapy. Table\u00a0Anti\u2010PD\u20101 immunotherapy was first\u2010line in 69 (39.7%), second\u2010line in 37 (21.3%), and \u2265 third\u2010line in 68 (39.1%) Prior treatment included ipilimumab in 90 (51.7%), BRAF and, or MEK targeted therapy in 44 (25.3%) patients and chemotherapy in 57 (32.8%) patients. Pembrolizumab was used in 133 (76.4%) patients and nivolumab in 37 (21.3%). Four (2.3%) received pembrolizumab followed by nivolumab for unique reasons, such as toxicity profile, but additional anti\u2010PD\u20101 treatment after progression for salvage therapy was not practiced. The median number of cycles of anti\u2010PD\u20101 therapy given was 7 (range: 1\u201360).3.1At median follow\u2010up of 37.1\u00a0months, 150 (86.2%) patients had a progression event and 135 (77.6%) patients had died. Of the 174 patients in the entire cohort, 8 (4.6%) had a CR, 43 (24.7%) had a PR, 36 (20.7%) had SD, and 87 (50%) patients had disease control. The mPFS was 3.9\u00a0months and mOS was 12.4\u00a0months.Cutaneous melanomas demonstrated the longest mPFS and mOS at 6.7 and 14.7\u00a0months Table\u00a0. The MUP3.2p\u00a0=\u00a00.016). 
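The survival workflow described in the statistics section above \u2014 Kaplan\u2013Meier estimates with 95% CIs and Cox proportional hazards models fitted with R's survival and survminer (spelled "suvminer" in the extraction above) packages \u2014 reduces to a few calls. The sketch below assumes a hypothetical data frame df with columns time_months, event (1 = death), and baseline factors; these names are placeholders, not the study's actual variables.

```r
library(survival)
library(survminer)

# Kaplan-Meier estimate of OS stratified by LDH status, with median and 95% CI.
fit <- survfit(Surv(time_months, event) ~ ldh_elevated, data = df)
summary(fit)$table  # median survival and confidence limits per stratum
ggsurvplot(fit, data = df, conf.int = TRUE, pval = TRUE)

# Univariate Cox model: hazard ratio and 95% CI for one factor.
summary(coxph(Surv(time_months, event) ~ ldh_elevated, data = df))

# Multivariate model adjusting for factors significant on univariate analysis.
coxph(Surv(time_months, event) ~ ldh_elevated + anemia + neutrophilia +
        thrombocytosis + liver_mets, data = df)
```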
Age \u226565\u2009years, male sex, BMI, and creatinine were not found to have an association with PFS or OS. There is no statistically significant difference in ORR by age, sex, and ECOG and OS as shown in Table\u00a0p\u00a0=\u00a00.00032) and an OS of 3.4 versus 15.8\u00a0months . Patient with liver metastasis had a shorted PFS of 2.4\u00a0months compared to 7.6\u00a0months and OS of 6.4 versus 16.1\u2009months . Brain metastasis(es) and bone metastasis(es) were not associated with PFS or OS. Patients with \u22653 metastases of any location had a significantly worse PFS of 12.2 versus 3.2\u00a0months , and OS of 8.4 versus 20.3\u00a0months (Table\u00a0p\u00a0=\u00a00.022), as shown in Table\u00a0p\u00a0=\u00a00.017).Elevated LDH had a reduced PFS and OS of 6.1\u00a0months compared to 17.7\u00a0months (Table\u00a0p\u00a0=\u00a00.00013), while the OS was 1.3 versus 14.7\u2009months . Neutrophilia had a shorter PFS and OS . Lymphocytes <1.0 were found to have a numerically shorter PFS and significantly worse OS . A neutrophil to lymphocyte ratio of \u22654 was associated with a worse PFS and OS . Response rates were not statistically different for anemia, neutrophilia, thrombocytosis, or neutrophil to lymphocyte ratio.Anemia was associated with a worse PFS of 2.3 versus 7.8\u2009months and OS which was not statistically significant (Table\u00a0p\u00a0=\u00a00.033), and superior OS 21.0 versus 10.8\u00a0months .Patients with prior BRAF and/or MEK inhibitor targeted therapy had a trend toward worse PFS and a non\u2010significant worse OS of 7.5\u00a0months compared to 14.8\u00a0months. Location of melanoma primary and CSD were not associated with either PFS of OS. The 62 patients with an ulcerated lesion had a worse mOS of 11.8 versus 19.3\u2009months . Other histology factors including Breslow thickness, tumor\u2010infiltrating lymphocytes (TILs), mitosis, lesion pigmentation, or regression were not associated with PFS or OS as seen in Table\u00a0Pathological parameters for 115 patients with cutaneous melanoma are shown in Table\u00a0p\u00a0=\u00a00.095). Superficial spreading, nodular, and lentigo melanoma had no difference in mPFS or OS. Response rate in desmoplastic melanoma and other histological subtypes are summarized in Table\u00a0Different histology subtypes are reported in Table\u00a03.7p\u00a0=\u00a00.046), LDH 1\u20102X elevation (p\u00a0=\u00a00.05), LDH \u22652X ULN elevation (p\u00a0=\u00a00.001), anemia (p\u00a0=\u00a00.002), thrombocytosis (p\u2009\u2264\u20090.001), neutrophilia (p\u2009\u2264\u20090.001), lymphocytes below <1.0 (p\u00a0=\u00a00.043), and presence of liver metastases (p\u00a0=\u00a00.007). Statistically significant association for OS was not maintained with multivariate analysis for ECOG \u22651 (p\u00a0=\u00a00.075) and presence of ulceration in the primary (p\u00a0=\u00a00.076).Host factors and tumor factors that had a statistically significant OS were included in a multivariate model and are reported in Table\u00a03.8In exploratory analysis, we used LDH and number of sites of disease to separate patients into four subgroups: (1) normal LDH and <3 sites of metastases; (2) normal LDH and \u22653 sites of metastases; (3) LDH 1\u20102x ULN, and (4) LDH \u22652x ULN. 
mPFS were 14.0, 6.5, 3.3, and 1.9\u00a0months, and mOS were 33.3, 15.7, 7.9, and 3.4\u00a0months for the four subgroups, respectively Figure\u00a0.p\u00a0=\u00a00.028) and 4 relative prognostic Group 1, which served at the referent group (Table\u00a0p\u00a0=\u00a00.016) and Subgroup 4 in comparison to Subgroup 1. Subgroups 1 and 2 both had normal LDH and the survival differences were not statistically significant. Response rates by subgroups are summarized in Table\u00a0The four prognostic groups were adjusted for potentially confounding variables using multivariate analysis. Multivariate analysis showed a statistically significant inferior PFS for subgroups 3 , and median survival was significantly prolonged reinforcing the fact that the host immune system is integral for cancer control.The \u201ccancer\u2010immune set point\u201d defines the equilibrium between factors promoting or suppressing cancer eradication.p\u00a0=\u00a00.017) as shown in Table\u00a0For host factors associated with survival, only elevated LDH was predictive of response. Elevated LDH is an independent prognosticator of poor survival in metastatic melanoma,In melanoma, individual biomarkers represent an individual component of a dynamic cancer\u2010immune interaction, and alone are unable to reliably predict responses. The cancer immunogram described by Blank et al is a framework describing the essential component for an effective immune response against the malignancy.We applied Long's BRAF melanoma prognostic approach to our cohort based on our findings that LDH and number of metastases were both highly associated with survival.Our results must be interpreted with caution given the retrospective nature and small sample size. Our limited sample size, especially for histology may be underpowered to detect survival outcomes for primary tumor factors. We also have inferior survival data to prospective studies with single anti\u2010PD1 immunotherapy. An important reason for this inferior survival is that this population was heavily pretreated, with advanced metastatic disease. Over 60% of patients received anti\u2010PD1 immunotherapy as \u2265 second\u2010line treatment, and only around 51% of the analyzed patient population had a normal LDH.5Further improvement in predicting response to immunotherapy will require incorporation of essential components of a dynamic tumor immune system based on the concept of the \u201ccancer immunogram\u201d. Our study suggests that cancer\u2010associated inflammation may present an additional component to the cancer immunogram. Strategic targeting of cancer\u2010associated systemic inflammation may be as important as enhancing adaptive immunity. Research targeting immune\u2010suppressive tumor metabolism remains ongoing.6Our study reports host factors measuring general immune function, tumor burden and location and markers of systemic inflammation are the most prognostic for survival. Among these, only LDH was associated with response, suggesting that an elevated LDH predicts a suppressed and impaired response to anti\u2010PD\u20101 immunotherapy. Markers of cancer\u2010associated systemic inflammation including neutrophilia, thrombocytosis, and anemia are strong predictors of worse survival when treated with anti\u2010PD1 immunotherapy. Tumor histology played a much smaller role with only tumor ulceration associated with worse survival, and some specific melanoma subtypes are likely to benefit more than others. 
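The four-subgroup scheme used above is simple enough to express as a deterministic lookup. The sketch below assumes LDH is recorded as a multiple of the upper limit of normal (ldh_uln) and the number of metastatic sites as a count; both names are hypothetical.

```r
# Subgroups: 1 = normal LDH, <3 sites; 2 = normal LDH, >=3 sites;
# 3 = LDH 1-2x ULN; 4 = LDH >=2x ULN.
prognostic_subgroup <- function(ldh_uln, n_sites) {
  ifelse(ldh_uln >= 2, 4L,
  ifelse(ldh_uln > 1, 3L,
  ifelse(n_sites >= 3, 2L, 1L)))
}

prognostic_subgroup(ldh_uln = c(0.8, 0.9, 1.5, 2.3),
                    n_sites = c(2, 4, 1, 5))  # returns 1 2 3 4
```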
Our findings provide evidence that the host immune system is responsible for surpassing a threshold to mount a sufficient immune response against cancer. Using LDH and number of sites of metastases, we can objectively prognosticate patient outcomes on anti\u2010PD1 immunotherapy. This model, if validated, can help clinical decision making in treating advanced melanoma. Conceptualization, Kim Koczka, Rodrigo Rigo, Aleksi Suo, and Tina Cheng; software and formal analysis, Mohammad Asad and Isabelle Vallerand; resources, Edwin Wang, Rodrigo Rigo, Eugene Batuyong, and Tina Cheng; writing\u2014original draft preparation, Kim Koczka; writing\u2014review and editing, Kim Koczka, Sara Cook, Tina Cheng, and Aleksi Suo; supervision, Tina Cheng, Edwin Wang. All authors have read and agreed to the published version of the manuscript. There was no financial support in writing this manuscript. There are no conflicts of interest to disclose. Supplementary materials: Tables S1\u2013S4."}
+{"text": "Torymus sinensis, the biocontrol agent of the gall wasp Dryocosmus kuriphilus, is univoltine, and exhibits a prolonged diapause. Further investigations have been carried out to assess the extent of the diapause and its trend over the years. Moreover, the seasonal variation in the galls\u2019 toughness was measured to assess whether the wall of dry galls formed in the previous year was hard enough to counteract T. sinensis emergence, thus negatively affecting diapause. The window of vulnerability of the galls was also evaluated in controlled conditions. The results showed that the average number of second-year T. sinensis emerging per 100 cells was 0.41 \u00b1 0.05, and dead adults accounted for 4.1 \u00b1 0.23 per 100 cells. Gall toughness resulted in lower values for galls collected in May and June. In general, no difference was detected in the wall toughness of galls formed during the previous year when compared to current-year dry galls. Comparing the number of oviposition events by T. sinensis and the gall toughness, a negative correlation was found. Descriptive information on this gall\u2019s structural traits and the influence on gall wasp management are also discussed. (1) Torymus sinensis, the biocontrol agent of the Asian chestnut gall wasp Dryocosmus kuriphilus, is univoltine, but in NW Italy a small percentage of individuals exhibits a prolonged diapause, mainly as late-instar larvae. (2) In 2020, the diapause was investigated to evaluate its trend over the years. Due to the low survival rate of diapausing T. sinensis adults, the seasonal variation in the galls\u2019 toughness was evaluated, under the assumption that dry galls can negatively affect emergence over time. The window of vulnerability of the gall wasp galls was also evaluated in controlled conditions. (3) The results showed that the average number of second-year T. sinensis emerging per 100 cells was 0.41 \u00b1 0.05, and dead adults accounted for 4.1 \u00b1 0.23 per 100 cells. Gall toughness resulted in lower values for galls collected in May and June, and then gradually increased over time. In general, no difference was detected in the wall toughness of galls formed during the previous year when compared to current-year dry galls. Oviposition was recorded on all the tested galls collected in May and June, and no difference in the number of oviposition events was detected. Conversely, no oviposition was observed in July. Comparing the number of oviposition events by T. sinensis and the gall toughness, a negative correlation was found (r = \u22120.99).
(4) The present findings contribute descriptive information on this gall\u2019s structural traits, and the influence on gall wasp management is also discussed. Galls are pathologically developed cells, tissues or organs of plants that have arisen mostly by hypertrophy and hyperplasia under the influence of parasitic organisms, such as bacteria, fungi, nematodes, mites, or insects. They represent the growth reaction of plants to the attack of the parasite and are in some way related to the feeding activity and nutritional physiology of the parasite. Many insect groups, and an estimated 13,000 species, induce plant galls. Gall wasps (Hymenoptera: Cynipidae) induce galls mostly on plants of the Fagaceae (mainly Quercus, but also Castanea, Chrysolepis and Lithocarpus) and Rosaceae families, but there is also a significant number of herb-galling cynipids. The gall former alters the physiological state of plant tissues, particularly that of the cells nearest to the feeding larvae, the so-called nutritive tissue, which is maintained in a metabolically active state by the gall former. Cynipid gall development can be divided into three phases: initiation, growth, and maturation. Initiation begins with oviposition by the female gall wasp, determining host plant, gall location on the host, and the number of larvae developing in the resulting gall (in relation to the number of eggs laid). Structurally, galls induced by the sexual or asexual generation of gall wasps are organized into larval chambers surrounded by an outer layer. Larval chambers are alike in most galls. Near each larval chamber there is a mass of nutritive cells surrounded with a single layer of parenchyma cells. In most galls, these two layers are covered with a third layer of sclerenchyma cells. About 30 Dryocosmus species have been reported on Castanea, Chrysolepis, and Quercus in the world, the best known being D. kuriphilus, the Asian chestnut gall wasp (ACGW), first reported in Japan in 1941. It attacks chestnut (Castanea spp.). After the emergence of D. kuriphilus adults, galls dry, become wood-like and remain on the tree for several years. Galls are uni- or multilocular and contain from 1 to 25 larval chambers. ACGW galls are known to support species richness and closed communities of inquilines and parasitoids that have become a model system in community ecology. Soon after the arrival of D. kuriphilus in Italy, generalist native parasitoid species quickly recruited to this novel gall wasp host. Specifically, the community of native parasitoids recorded invading ACGW populations is mainly composed of chalcid species, commonly known to be parasitoids of oak cynipid gall wasps. Although several families have been reported associated with the ACGW in its introduced range, they did not provide effective control of this pest.
This parasitoid is univoltine, although low rates of prolonged diapause (1–3%) have been reported for T. sinensis. Several papers have emphasized the importance of gall characteristics for the success of parasitoids when attacking gall insects, and size, thickness, toughness, and the parasitoid's ovipositor length are considered to be important parameters affecting the parasitoid oviposition rate and the success of gall-forming insects. The gall wall thickness is known to affect the ability of parasitoids to successfully attack the cynipids. Gall hardness has been measured for Eurosta solidaginis (Fitch) (Diptera: Tephritidae) and for galls of Aditrochus coihuensis Ovruski, although for the latter no information about the force of penetration was available. The toughness of Asphondylia floccosa Hawkins (Diptera: Cecidomyiidae) spring gall tissue was also measured using a similar texture analyzer approach, but comparable data for the ACGW-T. sinensis system were lacking. In 2020, research was carried out to assess the extent of the diapause and whether its rate had changed over the years, after the previous observations in 2015. The survival of diapausing individuals was of particular interest, since dead T. sinensis adults (diapausing) inside the galls had already been detected by gall dissection. A prolonged dormancy may affect reproduction, exposing individuals to increased mortality, and both prolonged dormancy and increased mortality may result in fitness costs. The following questions were therefore addressed: (i) what is the extent of the prolonged diapause, and has its rate changed over the years? (ii) is the wall of dry galls formed in the previous year hard enough to counteract T. sinensis emergence, thus negatively affecting diapause? (iii) since galls can be located in different positions, is there any difference in toughness, comparing galls collected on branch vs. leaf midrib? We hypothesized that T. sinensis females are less inclined to lay eggs in tougher galls, limiting oviposition events to the period before galls mature and harden. To test the gall toughness hypothesis, we evaluated the suitability of fresh galls collected in different months for T. sinensis oviposition, in controlled conditions. Furthermore, we evaluated the window of vulnerability of the gall wasp galls, assessing the period during which galls remain suitable for parasitoid oviposition. Investigations were performed in 2020–2021 in the municipality of Vicchio, located in the Tuscany region. The survey site was characterized by managed sweet chestnut orchards (Castanea sativa Miller var. Marrone del Mugello). Trees were approximately 80 yrs old, 20 m in height, planted at 10 m intervals along the row and with a 15 m distance between rows. Tree density was about 100 trees/ha. This survey site was chosen to ensure an adequate presence of galls: ACGW infestation index > 3, according to the index reported by Ferracini et al. To evaluate the extent of the diapause, in 2020, ten naturally growing chestnut trees were randomly chosen, and for each tree 500 galls were randomly collected on the crown of the plant during winter, then stored in rearing cardboard boxes in outdoor conditions; emerging T. sinensis individuals were evaluated. To assess the seasonal variation in gall toughness, galls were collected by the same operator each month, from early May to early December. Galls were identified as either branch galls (occurring on the chestnut shoot) or as leaf midrib galls, in order to have the same number of galls of both types. Since gall morphology (volume and mass) may be influenced by exposure to sun and precipitation, collection conditions were kept as uniform as possible. Moreover, in 2021, ACGW fresh galls were collected to perform the oviposition trials (see specific section). A total of 100 fresh galls of similar size were collected in May, June, and July, for a total of 300 galls.
To avoid any influence on the behavior of the parasitoid, chestnut galls were collected in a chestnut orchard characterized by a very low presence of the parasitoid. The collected galls were divided into two subsets. Half of the galls were used in the oviposition trials, and the remaining galls were dissected using a stereomicroscope to evaluate the parasitism rate by T. sinensis, which accounted for less than 20%, in accordance with previous investigations by Ferracini et al. A total of 120 dry galls, and 60 fresh galls collected each month from May to December, were subjected to the evaluation of their instrumental mechanical properties within 24 h of field collection. A TA.XTplus texture analyzer, equipped with a HDP/90 platform and a P/2N needle probe, was used. The load cell used was 50 kg, except for the material obtained during the 2021 season, which allowed the use of a 5 kg load cell to maximize load cell resolution. To evaluate the galls' toughness, we conducted preliminary investigations to establish the depth of insertion of the needle probe, and to avoid the values being distorted by the presence of very superficial larval chambers. Specifically, a representative sample of galls (N = 50) of different types and size was dissected under the microscope, highlighting how all the larval chambers were located at a depth of at least 1.3 mm. Thus, the needle insertion depth was set below 1.3 mm. For each test, the distance-force curve was acquired at 500 points per second, and the following parameters were determined: the penetration forces F1 and F2, the deformation energies W1 and W0-2, the elastic parameters E1 and E0-2, and the maximum recorded penetration force (Fmax). Torymus sinensis adults were obtained from a mass rearing at the DISAFA laboratory. Mated six-day-old naive females were used. One day before the trials, one female was placed in a plastic tube closed with a cotton plug, together with three males to ensure mating, according to Ferracini et al. A single fresh D. kuriphilus gall was offered to a mated T. sinensis female placed on a filter paper sheet inside a Petri dish arena (diameter 10 cm) for 48 h, and 50 replications per month were performed. The number and duration of the oviposition behavioral events were recorded for 45 min using JWatcher 1.0 software. Oviposition was considered successful when the female spent more than 60 s with the ovipositor inserted in the gall, according to Ferracini et al. The gall toughness results on dry and fresh galls, obtained using compression tests, were subjected to one-way analysis of variance (ANOVA), and significant differences were highlighted when p < 0.05; in this case, significant differences among samples were identified by performing a Tukey-HSD post hoc test. After testing for homogeneity of variance (Levene's test), data were analyzed using Student's t tests (p < 0.05) to compare the number of oviposition events occurring on galls collected in different months with different degrees of toughness. Moreover, the parasitism rate by T. sinensis for the galls collected in the three different months was assessed using a generalized linear model (GLM) following a binomial distribution (logit link function), comparing the number of T. sinensis larvae before and after the oviposition trials. In the behavioral trials, we used a linear regression to investigate the relationship between the number of oviposition events by T. sinensis and gall toughness. All statistical analyses were performed using R software. The boxplot visualization was prepared with R software plus the 'ggplot2' package.
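The comparisons described here are standard; a minimal sketch of the same pipeline in Python (scipy and statsmodels standing in for the R routines the authors used; all toughness and oviposition values below are made up for illustration):

```python
# Sketch of the statistical comparisons described above; numpy/scipy/statsmodels
# stand in for the R routines, and the data are hypothetical.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical toughness readings (N) grouped by collection month.
may = np.array([1.1, 1.3, 1.2, 1.0])
june = np.array([1.4, 1.5, 1.6, 1.3])
july = np.array([3.0, 3.2, 3.1, 2.9])

# One-way ANOVA across months, significant when p < 0.05.
f_stat, p_val = stats.f_oneway(may, june, july)

# Tukey-HSD post hoc test to identify which months differ.
values = np.concatenate([may, june, july])
labels = ["May"] * len(may) + ["June"] * len(june) + ["July"] * len(july)
print(pairwise_tukeyhsd(values, labels))

# Levene's test, then Student's t test on oviposition events (May vs. June).
ovi_may, ovi_june = np.array([11, 9, 12]), np.array([10, 8, 11])
lev_stat, lev_p = stats.levene(ovi_may, ovi_june)
t_stat, t_p = stats.ttest_ind(ovi_may, ovi_june)

# Linear regression of oviposition events against gall toughness.
toughness = np.array([1.2, 1.5, 3.1])
events = np.array([11, 10, 0])
slope, intercept, r, p, se = stats.linregress(toughness, events)
print(f"r = {r:.2f}")  # a strongly negative r mirrors the reported correlation
```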
In 2020, a total of 5000 galls was collected, and adult parasitoids emerged in the spring of the second year (2022), simultaneously with the emergence of univoltine adults observed in natural conditions. The average number of second-year T. sinensis emerging per 100 cells was 0.41 ± 0.05, and dead adults accounted for 4.1 ± 0.23 per 100 cells (data on univoltine adults' emergence are not shown). To assess the degree and trend of toughness of ACGW galls, 480 galls were collected in 2021. When comparing the two gall types analyzed, significant differences were found for all parameters except W1. F1 values among gall types were significantly different (p = 0.007) at the November sampling point, while E1 significantly (p < 0.05) discriminated between the gall types collected in May, June, October, and November. The F2, W0-2, E0-2, and Fmax parameters showed significant differences (p < 0.05) between the gall types considered at the July, October, November, and December (excl. W0-2 for the latter) sampling points. Therefore, November was the sampling point at which the most gall toughness parameters were able to discriminate between the two gall types considered. No significant difference was detected between dry winter galls collected in the current and previous year, except for the W1 deformation energy parameter (p < 0.05). A sustained variability of the results was found in these samples with respect to fresh branch and leaf midrib galls, with non-significant (p > 0.05) results for both dry samples when compared with December branch and leaf midrib galls. Oviposition was recorded on all the tested galls collected in May and June. Specifically, during the 45-min observation period, 11 oviposition events were detected on galls collected in May and 10 in those collected in June, on average 20 and 16 min after host location, respectively. Conversely, no oviposition was ever observed for galls collected in July, and each gall encountered was rejected. After 10 days of storage in the climatic chamber, the parasitism rate was significantly higher only for galls collected in May, increasing from 31% to 78%, while no significant increase in parasitism rate was detected in June or July. Comparing the number of oviposition events by T. sinensis and gall toughness, a negative correlation was found (r = −0.99). In this paper, investigations were carried out with ACGW galls of similar size, and only toughness was evaluated for both fresh and dry ACGW galls, providing descriptive information on this gall's structural trait. As expected, the maximum recorded penetration force (Fmax) increased when testing newly formed ACGW galls from May to December, and showed consistently high results on dry galls. Only the galls collected in November exhibited lower values with respect to previous months (3.52 and 4.48 N); yet, even in that case, the difference between the galls collected in October and November was not statistically significant. Branch and leaf midrib galls showed comparable Fmax values (p > 0.05) at the first sampling points, except for July; then, Fmax hardness was significantly higher for branch galls at the three final sampling points. Previous investigations by Gil-Tapetado et al. likewise related gall structural traits to parasitoid success. A shift of T. sinensis toward alternative hosts was recently recorded in some Italian chestnut-growing areas when current-year ACGW fresh galls are not available. The synchronization between ACGW gall development and T. sinensis emergence allows the latter to parasitize early in the gall-growing season (April and May).
Our data are consistent with earlier field observations: Craig et al. investigated the parasitism of Euura lasiolepis Smith (Hymenoptera: Tenthredinidae) by Lathrostizus euurae (Gravenhorst) (Hymenoptera: Ichneumonidae), reporting that the number of drills per gall regressed according to the toughness, thus suggesting that a low attack rate on large, old galls in the field is probably due to the toughness of these galls. This is in line with the gall toughness hypothesis, asserting that old galls are not parasitized. The average number of emerging second-year T. sinensis was in line with previous investigations by Ferracini et al., and the dead adults found inside old galls suggest that the hard wall of dry galls can counteract T. sinensis emergence, negatively affecting diapause. Diapause is a critical state of an insect's life cycle, when it undergoes the arrestment of growth and/or reproduction to survive adverse environmental conditions and/or food shortage. Although diapause allows individuals to endure unfavorable periods, it entails energetic costs, and no oviposition on old galls was recorded for T. sinensis. In natural conditions, T. sinensis emergence is timed to allow females to parasitize ACGW larvae inside the galls. Studies on other C. sativa varieties and in other European chestnut-growing areas could prove useful for further investigations."} +{"text": "Idiopathic pulmonary fibrosis (IPF) is a long-lasting, continuously advancing, and irreversible interstitial lung disorder with an obscure origin and inadequately comprehended pathological mechanisms. Despite the intricate and uncharted causes and pathways of IPF, the scholarly consensus upholds that the transformation of fibroblasts into myofibroblasts (instigated by injury to the alveolar epithelial cells) and the disproportionate accumulation of extracellular matrix (ECM) components, such as collagen, are integral to IPF's progression. Two novel anti-fibrotic medications, pirfenidone and nintedanib, have exhibited efficacy in decelerating the ongoing degradation of lung function, lessening hospitalization risk, and postponing exacerbations among IPF patients. Nonetheless, these pharmacological interventions do not present a definitive solution to IPF, positioning lung transplantation as the solitary potential curative measure in contemporary medical practice. A host of innovative therapeutic strategies are presently under rigorous scrutiny. This comprehensive review encapsulates the recent advancements in IPF research, spanning from diagnosis and etiology to pathological mechanisms, and introduces a discussion on nascent therapeutic methodologies currently in the pipeline. Idiopathic pulmonary fibrosis (IPF) is a pervasive chronic pulmonary ailment marked by irreversible lung function loss and structural disfigurement attributable to an overproduction of extracellular matrix deposition. The exact origins and progression mechanisms of IPF remain ambiguous, with aging recognized as the most considerable risk factor. Presently, lung transplantation stands as the sole clinically validated effective treatment strategy. Genetic constituents serve as fundamental drivers in the initiation and evolution of IPF. Frequently, particulate matter, fibers, and dust constitute the primary environmental contributors to IPF onset. Recent investigations underscore the crucial role of microbiota in inciting and exacerbating pulmonary fibrosis in animal models, thereby elucidating the association between microbiota and pulmonary fibrosis.
Recognized as a principal risk factor for chronic respiratory diseases such as COPD, smoking has also been strongly associated with the development of IPF. The pronounced incidence of gastroesophageal reflux disease (GERD) in IPF suggests a pathogenic role for microaspiration attributable to GERD. The mechanism by which aging leads to pulmonary fibrosis remains unclear. Cellular senescence can disrupt various cellular biological activities in the body, manifesting as telomere attrition, DNA damage, and mitochondrial dysfunction. The pathogenesis of IPF is multifaceted and, as of yet, not wholly comprehended. Nevertheless, several pivotal factors have been pinpointed as significant contributors to the disease's inception and evolution. Herein is a synopsis of our current understanding of IPF pathogenesis. TGF-β is considered a central component among the diverse factors contributing to pulmonary fibrosis development. The insulin-like growth factor has a significant role in pulmonary fibrosis progression. CTGF, or CCN2, is recognized as a prolific instigator of chronic fibrotic hyperplasia. MMPs are proactive contributors to pulmonary fibrosis. Currently approved drugs, such as pirfenidone and nintedanib, only alleviate symptoms without reversing pulmonary fibrosis to facilitate a curative outcome. Consequently, the development of new therapeutic options is imperative. Innovations include investigating novel effects of existing drugs, developing new drugs, and exploring treatments such as stem cell transplantation for IPF. Many drugs are currently in clinical trials, with some advancing to phase 3, thereby expanding the therapeutic arsenal for IPF. Historically, chronic inflammation, seemingly uncontrollable, was perceived as the primary driver of progressive parenchymal fibrosis. Pirfenidone, the inaugural oral antifibrotic drug to receive approval, is a pyridine derivative widely recognized for the treatment of IPF. Nintedanib, another approved oral antifibrotic drug, operates as an orally active triple tyrosine kinase receptor inhibitor. Lung transplantation presents itself as the sole treatment alternative that can enhance the quality of life and augment survival rates when previous treatments have failed to yield positive outcomes. Pamrevlumab, a humanized monoclonal antibody, targets CTGF, a fibroblast and endothelial cell-secreted glycoprotein pivotal in the pathogenesis of fibrosis. Metformin, a time-honored hypoglycemic agent clinically employed in the treatment of type 2 diabetes mellitus, is increasingly recognized for its antifibrotic properties, as corroborated by numerous preclinical investigations. PPIs are currently under investigation as potential therapeutic agents for IPF due to the frequent coexistence of gastroesophageal reflux disease and IPF in clinical scenarios. In vivo studies demonstrated that oral esomeprazole mitigated inflammation and fibrosis in rodent models of bleomycin-induced lung injury, with approximately 50% reduction in each parameter. The exploitation of embryonic stem cells for lung regeneration or repair has gained notable momentum in recent years. Stem cells, essentially immature cells that proliferate and metamorphose into adult cells, demonstrate anti-inflammatory and anti-fibrotic traits, rendering them a potent potential therapy for fibrotic diseases.
Additional therapeutic drugs and methods are undergoing investigation, and although the specific mechanisms remain to be thoroughly scrutinized, this does not impede the advancement of exploratory efforts in IPF treatment strategies. Currently, treprostinil and BI 1015550, which are in phase 3 clinical trials, are under consideration. Treprostinil is a stable prostacyclin analog, a PGI2 receptor agonist, promoting vasodilation and inhibiting platelet aggregation. In addition, patients who smoke are advised to quit, and appropriate traditional Chinese medicine may be given as adjuvant therapy, which helps to improve patients' quality of life. At the same time, active pulmonary rehabilitation, with oxygen therapy if necessary, plays a substantial role in improving the function of the body and in stabilizing or slowing the progression of the disease. Various drugs employed to manage idiopathic pulmonary fibrosis are enumerated above, together with their mechanisms of action. While most of these therapies are in their nascent stages of research, they provide substantial reassurance, and it is hoped that they receive expedited approval for the clinical benefit of IPF patients. Research into IPF treatment strategies, buoyed by the introduction of innovative therapeutic agents and treatments, has witnessed a burgeoning number of clinical trials, some of which are ongoing. Despite substantial strides in comprehending IPF and formulating innovative treatment strategies, the labyrinthine nature of this disease continues to mandate further exploration. The contemporary perspectives on IPF treatment can be encapsulated as follows. 1. Early Diagnosis and Intervention: The importance of early IPF diagnosis cannot be overstated for administering treatment prior to extensive lung damage. Strategies may encompass heightened awareness and education for healthcare professionals, biomarker utilization, and the creation of avant-garde imaging techniques for early disease detection. 2. Personalized Medicine: As our grasp of the molecular mechanisms underpinning IPF becomes more sophisticated, opportunities may arise to devise personalized treatment strategies aimed at specific pathways or genetic factors in individual patients. This could culminate in more efficacious, customized therapies with diminished side effects. 3. Combination Therapies: Owing to IPF's multifaceted nature, it is improbable that a single treatment would entirely stymie the disease's progression. Therefore, combination therapies targeting multiple facets of the disease, including inflammation, fibrosis, and oxidative stress, might enhance IPF management. 4. Regenerative Medicine: Delving into regenerative medicine, inclusive of stem cell therapy and tissue engineering, offers potential for novel IPF treatment strategies. The ultimate aim would be to mend or substitute damaged lung tissue, potentially reversing the disease's effects. 5. Improved Support and Symptom Management: While a definitive cure for IPF currently eludes us, optimizing symptom management and providing comprehensive support to patients and their families remain paramount. This entails pulmonary rehabilitation, oxygen therapy, and psychological support to assist patients in grappling with the physical and emotional tribulations associated with IPF. 6. Enhanced Collaboration and Research: Sustained collaboration among researchers, clinicians, and pharmaceutical companies is indispensable to catalyze innovation and engender new treatment options for IPF.
This calls for a collaborative spirit that encourages data, resource, and knowledge sharing across disciplines to expedite the discovery of new therapeutic targets and augment our understanding of the disease. In summary, although IPF persists as a formidable and intricate disease, the landscape appears promising with the advent of novel treatment options and research advancements. With an emphasis on early diagnosis, personalized medicine, combination therapies, regenerative medicine, improved support and symptom management, and enhanced collaboration and research, the field stands poised to make substantial progress in the foreseeable future."} +{"text": "Genomic alterations of BRAF and NRAS are oncogenic drivers in malignant melanoma and other solid tumors. Tovorafenib is an investigational, oral, selective, CNS-penetrant, small molecule, type II pan-RAF inhibitor. This first-in-human phase 1 study explored the safety and antitumor activity of tovorafenib. This two-part study in adult patients with relapsed or refractory advanced solid tumors included a dose escalation phase and a dose expansion phase including molecularly defined cohorts of patients with melanoma. Primary objectives were to evaluate the safety of tovorafenib administered once every other day (Q2D) or once weekly (QW), and to determine the maximum-tolerated and recommended phase 2 dose (RP2D) on these schedules. Secondary objectives included evaluation of antitumor activity and tovorafenib pharmacokinetics. Tovorafenib was administered to 149 patients (Q2D n = 110, QW n = 39). The RP2D of tovorafenib was defined as 200 mg Q2D or 600 mg QW. In the dose expansion phase, 58 (73%) of 80 patients in Q2D cohorts and 9 (47%) of 19 in the QW cohort had grade ≥ 3 adverse events, the most common of these overall being anemia and maculo-papular rash. Responses were seen in 10 (15%) of 68 evaluable patients in the Q2D expansion phase, including in 8 of 16 (50%) patients with BRAF mutation-positive melanoma naive to RAF and MEK inhibitors. In the QW dose expansion phase, there were no responses in 17 evaluable patients with NRAS mutation-positive melanoma naive to RAF and MEK inhibitors; 9 patients (53%) had a best response of stable disease. QW dose administration was associated with minimal accumulation of tovorafenib in systemic circulation in the dose range of 400-800 mg. The safety profile of both schedules was acceptable, with QW dosing at the RP2D of 600 mg QW preferred for future clinical studies. Antitumor activity of tovorafenib in BRAF-mutated melanoma was promising and justifies continued clinical development across multiple settings. NCT01425008. The online version contains supplementary material available at 10.1007/s00280-023-04544-5. Genomic alterations of the BRAF gene encoding the serine/threonine-protein kinase BRAF have been identified in 50-60% of malignant melanomas, and at similar or lower frequencies in several other cancers. Wild-type RAF kinases signal as homo- and heterodimers, whereas BRAF V600 mutants can signal as monomers; first-generation (type I) BRAF inhibitors preferentially inhibit these mutant monomers rather than RAF dimers, and they are similarly ineffective against tumors driven by BRAF fusions. In BRAF wild-type cells, type I BRAF inhibitors can paradoxically cause MAPK activation due to BRAF-inhibitor-mediated homodimerization and heterodimerization of nonmutant RAF isoforms.
Type I inhibitor binding to one protomer of a wild-type RAF dimer causes allosteric transactivation of the other protomer, while at the same time reducing the affinity of the drug for that other protomer, resulting in enhanced signaling. In the QW cohort of the dose expansion phase, 4 of 19 patients (21%) had TEAEs resulting in permanent discontinuation, including atrial flutter, dyspnea, erythema multiforme, and fatigue (1 patient each). In the dose expansion phase, 19 of 99 patients (19%) had TEAEs leading to dose reduction, including 17 of 80 patients (21%) in the Q2D cohorts and 2 of 19 (11%) in the QW cohort, the most common of which were maculo-papular rash and generalized rash. There were 13 on-study deaths. The fatal SAEs associated with these deaths predominantly related to the underlying disease or complications thereof and are listed in Supplementary Table S8. Only one death, associated with respiratory failure in a patient in the 280 mg Q2D dose escalation cohort, was deemed by the study investigators to be treatment related. In the Q2D dose expansion phase, responses were seen in 8 of 16 patients (50%) in the BRAF mutation-positive, RAF and MEK inhibitor-naive cohort (cohort 1), 1 of 6 patients (17%) in the BRAF mutation-positive, RAF and MEK inhibitor-previously treated cohort (cohort 2), and 1 of 14 patients (7%) in the NRAS mutation-positive, RAF and MEK inhibitor-naive cohort (cohort 7). One patient with an NRAS-mutated melanoma in cohort 7 with demonstrated clinical benefit (42 months with stable disease) continued to receive tovorafenib after the study ended, under a single patient investigational new drug (IND) application. A complete response was reported after 7 months of treatment under this single patient IND, which has been sustained with continued treatment for 8 years. In the Q2D dose escalation phase, there were no responses in 22 evaluable patients; 5 patients (23%) had a best response of stable disease. In the QW dose expansion phase, there were no responses in 17 evaluable patients with NRAS mutation-positive melanoma naive to RAF and MEK inhibitors; 9 patients (53%) had a best response of stable disease. In the QW dose escalation phase, there were 2 partial responses in 14 evaluable patients (14%): 1 patient with endometrial cancer at the 600 mg dose level and 1 patient with thyroid cancer at the 800 mg dose level. Best tumor response from baseline in 93 evaluable study patients is shown in the figure. Tovorafenib was absorbed with a median Tmax of 3 h post-dose (range 1-24 h) on cycle 1 day 22. Minimal to no apparent accumulation in terms of day 22 AUC168 over day 1 AUC168 was observed following repeated QW dosing. The mean plasma terminal half-life (t1/2) of tovorafenib was approximately 70 h (range 31-119 h), as defined in 20 evaluable patients receiving 600 mg QW. The relationship between dose and cycle 1 day 22 tovorafenib exposures (AUC168) is shown in Supplementary Figure S1. Steady-state exposures increased in an approximately dose-proportional manner over the 400 mg to 800 mg QW dose range, with a power-model coefficient of 1.30 whose 95% CI (0.55-2.04) contained 1. For QW dosing regimens, minimal drug accumulation was observed, and the geometric mean Rauc (accumulation ratio based on AUC0-last) was in the range of 1.03-1.09. With the Q2D dosing regimen at 200 mg, the geometric mean value of Rauc was ~2.55. Mean (± standard deviation) plasma concentration-time profiles of tovorafenib by QW dose group on days 1 and 22 of cycle 1 are shown in the figure. Similar PK analyses were carried out by Q2D dose group: steady-state tovorafenib AUC48 increased in an approximately dose-proportional manner over the dose range of 20 mg to 280 mg Q2D. While no apparent accumulation was observed with the QW dose regimens, Q2D administration resulted in approximately 2.5-fold accumulation in AUC48 at steady state.
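These accumulation figures are consistent with elementary pharmacokinetics; a small illustrative calculation in Python, assuming one-compartment kinetics with first-order elimination and using only the reported half-life and dosing intervals (the function name and this simplifying model are ours, not the study's analysis):

```python
import math

def accumulation_ratio(half_life_h: float, tau_h: float) -> float:
    """Predicted steady-state accumulation ratio R = 1 / (1 - e^(-k*tau))
    for repeated dosing at interval tau, assuming first-order elimination."""
    k = math.log(2) / half_life_h  # elimination rate constant (1/h)
    return 1.0 / (1.0 - math.exp(-k * tau_h))

t_half = 70.0  # reported mean terminal half-life of tovorafenib, hours

# Q2D dosing (tau = 48 h): predicts ~2.6-fold accumulation,
# close to the observed geometric mean Rauc of ~2.55.
print(round(accumulation_ratio(t_half, 48.0), 2))

# QW dosing (tau = 168 h): predicts only ~1.2-fold accumulation,
# directionally in line with the minimal accumulation observed (1.03-1.09).
print(round(accumulation_ratio(t_half, 168.0), 2))
```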
In general, the median level of pERK staining in evaluable sample pairs from each of 5 melanoma Q2D expansion cohorts (Supplementary Table S3) was lower at day 21 than at baseline, as assessed by H-score by a pathologist (in the BRAF mutation-positive treatment-naive, BRAF mutation-positive previously treated, and NRAS mutation-positive treatment-naive cohorts) and by quantitated image analysis (median percentage decrease ≥ 70% in the BRAF mutation-positive previously treated and NRAS mutation-positive treatment-naive cohorts), indicative of inhibition of RAF signaling. In the QW melanoma expansion cohort, the median level of pERK expression as assessed by both methods had decreased slightly by day 21. The PK analyses showed that tovorafenib has a moderately fast absorption rate, with an overall median Tmax of 2-4 h post-dose. Overall mean accumulation following 21 days of Q2D dosing was 2.5-fold. By contrast, QW dose administration was associated with minimal to no apparent accumulation of tovorafenib in systemic circulation in the dose range of 400 mg to 800 mg. Steady-state AUC increased in an approximately dose-proportional manner for both Q2D and QW dose ranges tested. The plasma terminal half-life (t1/2) of tovorafenib was approximately 70 h. The QW dose escalation and expansion cohorts were introduced by protocol amendment, as it was anticipated that higher unit doses would be possible on such a schedule, which would lead to higher tovorafenib concentrations for part of the treatment period. This proved to be the case, with a higher Cmax value reached for the QW MTD compared with the Q2D MTD. Preliminary exposure-response analysis using data from both dosing regimens supported the selection of QW dosing for future clinical development, as modeling and simulation results indicated that the marginal increase in efficacy associated with more frequent dosing was outweighed by an increase in the incidence of grade 3 rash, along with other findings from exposure-adverse event and exposure-safety biomarker analyses. Weekly administration of tovorafenib as monotherapy has been further explored in a pediatric phase 1 study in patients with radiographically recurrent/progressive low-grade gliomas (LGGs) harboring MAPK pathway alterations (NCT03429803). In the phase 1b part of this study, tovorafenib demonstrated clinically meaningful activity in 24 (69%) of 35 patients with MAPK pathway-altered cancers; among patients with RAF gene fusions, 2 had complete responses, 3 had partial responses, and two achieved prolonged stable disease. Tovorafenib is also being evaluated in a registrational phase 2 study in pediatric patients with recurrent or progressive LGG harboring BRAF alterations, including BRAF fusions and BRAF mutations (NCT04775485). An interim analysis of the first 25 enrolled patients with ≥ 6 months of follow-up showed encouraging antitumor activity, with an overall response rate of 64% and a clinical benefit rate of 91%. Tovorafenib was generally well tolerated, with most adverse events being grade 1 or 2.
Tovorafenib on a QW schedule is also currently being evaluated as monotherapy and in combination with other therapies in the phase 1b/2 FIRELIGHT-1 umbrella study in patients ≥ 12 years of age with recurrent, progressive, or refractory solid tumors harboring MAPK pathway aberrations (NCT04985604). In particular, given non-overlapping toxicity profiles, this study will explore combining tovorafenib with a MEK inhibitor, which, outside the specific setting of tumors with RAF fusions, may be a more effective treatment approach than tovorafenib monotherapy in patients with tumors harboring other MAPK pathway alterations. Further, the randomized phase 3 LOGGIC/FIREFLY-2 study will evaluate the efficacy, safety, and tolerability of tovorafenib QW monotherapy versus standard of care chemotherapy in children and young adults with LGGs harboring an activating RAF alteration and requiring front-line systemic therapy (NCT05566795). In conclusion, we have defined the MTD of tovorafenib for adults on Q2D and QW schedules. The dose expansion phase of our phase 1 study shows that the safety profile of tovorafenib is acceptable in both cases, and in line with other BRAF-targeted agents. Of note, tovorafenib appears to have antitumor activity in the setting of BRAF alterations without the clinical manifestations of paradoxical activation seen with type I BRAF inhibitors, such as the development of cutaneous squamous cell carcinoma or keratoacanthoma. In addition, there is evidence of MAPK pathway inhibition without the class effects seen with MEK inhibitors. The long plasma half-life of tovorafenib affords use with a QW dosing schedule, while still maintaining a steady-state trough plasma concentration above the protein-binding-adjusted pERK EC50 inhibition level. The preliminary indication of antitumor activity in BRAF-mutated melanoma is promising, although further clinical development of single-agent use in this setting in tumors that do not harbor RAF fusions is likely to be limited. However, tovorafenib in combination with other MAPK pathway and non-MAPK pathway targeted agents should be further explored, with emerging data justifying continued clinical development across multiple settings."} +{"text": "Acute myeloid leukemia (AML) is one of the most common hematological malignancies and has a high recurrence rate. FIBP was reported to be highly expressed in multiple tumor types. However, its expression and role in acute myeloid leukemia remain largely unknown. The aim of this study was to clarify the role and value of FIBP in diagnosis and prognosis, and to analyze its correlation with immune infiltration in acute myeloid leukemia, using The Cancer Genome Atlas (TCGA) dataset. FIBP was highly expressed in AML samples compared to normal samples. The differentially expressed genes were identified between high and low expression of FIBP. The high FIBP expression group had poorer overall survival. FIBP was closely correlated with CD4, IL-10 and IL-2. The enrichment analysis indicated that DEGs were mainly related to leukocyte migration, leukocyte cell-cell adhesion, myeloid leukocyte differentiation, endothelial cell proliferation and T cell tolerance induction. FIBP expression was significantly correlated with the infiltration levels of various immune cells. FIBP could be a potential therapeutic target and prognostic biomarker associated with immune infiltrates in AML. The online version contains supplementary material available at 10.1007/s12672-023-00723-1.
Acute myeloid leukemia (AML) is the most common adult heterogeneous hematological malignancy, arising from clonal expansion of transformed hematopoietic stem and progenitor cells. It is associated with genomic alterations in cell proliferation and differentiation. FGF1 intracellular binding protein (FIBP) has been reported to be an intracellular protein that binds the acidic fibroblast growth factor (aFGF), which participates in cell proliferation by stimulating mitogenesis. Thus, we evaluated the prognostic value of FIBP expression in AML based on TCGA data. We investigated FIBP expression and its correlation with survival in AML patients to understand the pathological process and aggressiveness of AML. We further investigated the hub genes and the important role of FIBP in the immune microenvironment through protein-protein interaction network and immune infiltration analyses. This study was expected to provide new targets for AML precision treatment and potential application in predicting AML prognosis. The expression and clinical data of TCGA pan-cancer and GTEx were downloaded from the UCSC Xena database (https://xenabrowser.net/datapages/). AML clinical data were downloaded from the TCGA database. Patients with insufficient clinical information were not included. The RNA-Seq gene expression FPKM (Fragments Per Kilobase per Million) data of 151 cases with AML and the clinical data were retained and further analyzed. The HTSeq-FPKM data were transformed to TPM (transcripts per million reads) for the following analysis. The healthy subjects and AML patient blasts used for ex vivo experiments were obtained from peripheral blood or bone marrow samples collected from Changzhi People's Hospital, the Affiliated Hospital of Changzhi Medical College. The parents or guardians of each subject provided signed informed consent. The study protocol acquired approval from the ethics committee of Changzhi Medical College (No: RT2023001). Gene expression profiles were compared between high and low FIBP expression groups to identify differentially expressed genes (DEGs) using the R package DESeq2; |logFC| > 1 and FDR < 0.05 were considered as thresholds for DEGs. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were performed for DEGs using the ggplot2 package for visualization and the clusterProfiler package for statistical analysis. The receiver operating characteristic (ROC) curve was used to assess the diagnostic value of FIBP in AML. The area under the ROC curve (AUC) lies between 0.5 and 1: an AUC of 0.5-0.7 indicates low accuracy, 0.7-0.9 indicates moderate accuracy, and above 0.9 indicates high accuracy. The single-sample gene set enrichment analysis (ssGSEA) method was performed using the R package GSVA to analyze the immune infiltration of AML for 24 types of immune cells in tumor samples. The quantification of the expression of human genes was performed using real-time RT-PCR. The sequences of the primers used for detecting gene expression were as follows: FIBP, sense 5'-TGAGCTGGACATCTTCGTGG-3', antisense 5'-GGTCACCGAGTAACCATCGAG-3'; GAPDH, sense 5'-TCGTCCCGTAGACAAAATGG-3', antisense 5'-TTGAGGTCAATGAAGGGGTC-3'. For sample analysis, the threshold was set based on the exponential phase of products, and the CT value for each sample was determined. The resulting data were analyzed with the comparative CT method for relative gene expression quantification against GAPDH (house-keeping gene).
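The comparative CT (2^-ddCt) calculation referenced here is a standard arithmetic procedure; a minimal sketch in Python (the CT values below are hypothetical, chosen only to illustrate the normalization against GAPDH):

```python
# Minimal sketch of the comparative CT (2^-ddCt) method for relative
# quantification of FIBP against GAPDH; all CT values are hypothetical.
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize to GAPDH
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                  # calibrate to control
    return 2 ** (-dd_ct)                                # fold change

# Example: FIBP in an AML blast sample vs. a healthy control sample.
fold = relative_expression(ct_target_sample=24.1, ct_ref_sample=18.0,
                           ct_target_control=26.5, ct_ref_control=18.2)
print(f"FIBP fold change vs. control: {fold:.2f}")  # > 1 means upregulated
```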
Western blot assay was done as described previously. FIBP expression was explored in pan-cancer data from TCGA and GTEx. FIBP expression was significantly upregulated in 28 types of tumors compared with normal tissues, including BLCA, BRCA, CESC, CHOL, COAD, DLBC, ESCA, GBM, HNSC, KIRC, KIRP, LAML, LGG, LIHC, LUAD, LUSC, OV, PAAD, PCPG, PRAD, READ, SKCM, STAD, TGCT, THCA, THYM, UCEC, and UCS (P < 0.05), while its expression showed no significant difference between tumors and normal tissues for ACC, KICH, MESO, SARC, and UVM. The differentially expressed genes (DEGs) were analyzed using TCGA cohort data, and patients with LAML were divided into high-expression and low-expression groups based on FIBP levels. A total of 720 differentially expressed genes were screened, including 411 upregulated genes and 309 downregulated genes. High FIBP expression was significantly associated with PB blasts (p < 0.01), FAB classifications (p < 0.01) and cytogenetic risk (p < 0.001). No correlation was found between FIBP expression and other clinicopathologic characteristics. FIBP binds the acidic fibroblast growth factor (aFGF), which regulates cell proliferation in multiple cell types by stimulating mitogenesis or inducing morphological changes. In this study, bioinformatics analysis based on TCGA data demonstrated that the expression of FIBP was significantly higher in AML samples than in normal samples, indicating that FIBP plays a role in tumorigenesis and progression. In addition, ROC analysis showed that FIBP might be a potential diagnostic biomarker. The relationship between FIBP expression and clinicopathological factors was further explored, and high FIBP protein expression was significantly associated with age (p < 0.05), cytogenetic risk (p < 0.01), FAB classifications (p < 0.001), OS event (p < 0.001) and PB blasts (p < 0.05). Kaplan-Meier survival analysis indicated that high expression of FIBP was correlated with poorer overall survival times. Multivariate Cox regression analysis showed that FIBP was an independent prognostic factor affecting survival of AML patients (P < 0.001). To explore the biological functions of FIBP, DEGs were analyzed based on AML patients with high or low FIBP expression from TCGA data. A total of 720 differentially expressed genes were identified, and functional enrichment analysis of these DEGs was performed in AML samples. The results demonstrated that these DEGs were mainly enriched in BP terms associated with leukocyte migration, extracellular matrix organization, signal release, leukocyte cell-cell adhesion, regulation of blood circulation, tissue remodeling, leukocyte chemotaxis, myeloid leukocyte differentiation, endothelial cell proliferation, granulocyte migration, positive regulation of endothelial cell proliferation, lymphocyte apoptotic process and T cell tolerance induction. MF terms primarily involved G protein-coupled receptor binding, cytokine activity, cytokine receptor binding, growth factor binding, cytokine receptor activity and extracellular matrix binding. It has been reported that interactions between AML blasts and their adjacent endothelial cells in the bone marrow microenvironment are important for chemotherapy sensitivity. AML is highly dependent on the immune microenvironment for survival and growth.
In conclusion, the findings of this study indicate that FIBP may be a potential poor-prognosis biomarker, which could aid clinicians in clinical application, assessment and therapeutics for AML. Future research should include in vivo and in vitro experiments and enroll more patients to further verify these conclusions."} +{"text": "Scientific Reports 10.1038/s41598-020-77013-1, published online 18 November 2020. Correction to: In the original version of this Article, Affiliation 1 was incorrectly given as 'Department of Internal Medicine, Hospital Universitari Germans Trias i Pujol, Universitat Autònoma de Barcelona, 08916 Badalona, Spain'. The correct affiliations are listed below: 1. Department of Internal Medicine, Hospital Universitari Germans Trias i Pujol, 08916, Badalona, Spain. 2. Department of Medicine, Universitat Autònoma de Barcelona, 08035, Barcelona, Spain. As a result, the Affiliations have been renumbered. The original Article has been corrected."} +{"text": "Objective: Early onset neonatal sepsis (EONS) remains a significant cause of morbidity and mortality in newborns in the immediate postnatal period. High empiric antibiotic use in well-appearing infants with known risk factors for sepsis led the American Academy of Pediatrics (AAP) to revise its 2010 guidelines for the evaluation and management of EONS to avoid overuse of antibiotics. In this recent clinical report, the AAP provided a framework that outlined several evidence-based approaches for sepsis risk assessment in newborns that can be adopted by institutions based on local resources and structure. One of these approaches, the sepsis risk calculator (SRC) developed by Kaiser Permanente, has been widely validated for reducing unnecessary antibiotic exposure and blood work in infants suspected of having EONS. In order to determine the utility and safety of modifying our institution's protocol to the SRC, we implemented a two-phased approach to evaluate the use of the SRC in our newborn nursery. Phase 1 utilized a retrospective review of cases with SRC superimposition; if results from Phase 1 were found to be favorable, Phase 2 initiated a trial of the SRC for a six-month period prior to complete implementation. Methods: Phase 1 consisted of retrospectively applying the SRC to electronic medical records (EMR) of infants ≥ 35 weeks' gestational age admitted to the newborn nursery with risk factors for EONS between June 2016 and May 2017. We compared actual antibiotic use as determined by the unit's EONS protocol for evaluation and management, based on 2010 Centers for Disease Control and Prevention (CDC) and AAP guidelines, to SRC-recommended antibiotic use. We used the EMR to determine maternal and infant data, blood work results, and antibiotic usage, and used daily progress notes by the clinical team to determine the clinical status of the infants retrospectively. Based on the projected reduction in blood work and antibiotic use with the retrospective superimposition of the SRC on this cohort of infants, and identification of our high-risk patient subset, we developed a novel, hybrid EONS protocol that we implemented and assessed throughout Phase 2, a six-month period from August 2018 to January 2019, as a prospective observational study. Results: Phase 1 (SRC superimposition) demonstrated that the use of the SRC would have reduced empiric antibiotic use from 56% to 13% in the study cohort when compared with 2010 CDC/AAP guidelines.
However, these same findings revealed that use of the SRC would have resulted in delayed evaluation and initiation of antibiotics in 2 of 4 chorioamnionitis-exposed infants with positive blood cultures. During Phase 2 (n = 302), with the implementation of our tailored approach, 12 (4%) neonates received empiric antibiotic treatment compared to nine (3%) neonates who would have been treated per strict adherence to SRC recommendations. No neonate had culture-positive EONS. Continued use of 2010 CDC/AAP guidelines would have led to empiric antibiotic use in 38 (12.6%) infants in this cohort. Conclusion: We developed a novel hybrid approach to the evaluation and management of neonates at increased risk of EONS by tailoring SRC recommendations to our safety-net population. Our stewardship effort achieved a safe and significant reduction in antibiotic usage compared to prior usage determined using CDC/AAP guidelines. Early onset neonatal sepsis (EONS) remains a significant cause of morbidity and mortality in newborns in the immediate postnatal period despite a steady decline in its incidence over the past three decades. EONS is managed according to national guidelines for evaluation and management (E&M) first issued by the CDC in the 1990s; robust surveillance of epidemiological and outcomes data has led to the subsequent revision of these E&M guidelines (in 2002 and again in 2010). Several attempts to address concerns of overtreatment in healthy, unaffected infants were dampened by the high risk of mortality in untreated or missed cases of EONS in this fragile population. One innovative approach was the multivariate sepsis risk calculator (SRC) developed by Kaiser Permanente. As a result, the 2018 revision by the CDC/AAP of its 2010 guidelines provided a framework of several evidence-based approaches for sepsis risk assessment in newborns that can be adopted by institutions based on local resources and structure. Facilities can choose from one of the aforementioned approaches or continue their use of the 2010 CDC/AAP guidelines in consideration of population demographics and resources available to the mother-baby unit. Several centers have subsequently modified their institution's approach to the evaluation and management of EONS based on these recommendations, by either adopting one of the two approaches or modifying their existing approach within the framework of the evidence-based recommendations by the CDC/AAP (2018), including recommendations from centers that reported safe implementation of the SRC with reductions of laboratory evaluations and antibiotic treatment of newborns. For this study, the 2010 guidelines formed the protocol of care for E&M of infants admitted to the newborn nursery (NBN) in a teaching hospital in Northeast (NE) Florida. In order to determine the utility and safety of modifying our institution's current protocol (based on 2010 CDC/AAP guidelines) for EONS evaluation and management to the SRC, we implemented a two-phased approach to evaluate the SRC in our newborn nursery. Phase 1 utilized a retrospective review of cases with SRC superimposition. If results from Phase 1 were found to be favorable, those results would then be used to inform implementation of Phase 2 - a prospective trial of the SRC in the NBN for a six-month period with the goal of safely reducing empiric antibiotic use in our high-risk population of infants ≥ 35 weeks' gestational age admitted to the mother-baby unit. Setting: We conducted our sequential, two-phase project in the NBN at the University of Florida (UF) Health Jacksonville.
This facility is an academic, safety-net hospital in NE Florida that serves as the primary teaching site for the UF College of Medicine pediatric residency program. This study was conducted as part of an internet-based, multi-site, national quality improvement (QI) collaborative organized by the Vermont Oxford Network (VON) called "Choosing Antibiotics Wisely", in which our institution was a participating site. It was approved by the University of Florida's Institutional Review Board. Study design/method. Phase 1: Retrospective superimposition of the SRC. Phase 1 was planned and conducted with the goal of evaluating whether applying the SRC to the electronic medical records (EMR) of infants originally evaluated using our NBN's standard protocol of care for EONS (based on 2010 CDC/AAP guidelines) would decrease the rate of blood work and empiric antibiotic initiation, and by how much. This was an important first step in our antibiotic stewardship effort before making any changes to our existing protocol. Criteria for inclusion in Phase 1 were: inborn infants admitted to the NBN, born at a gestational age of ≥ 35 weeks, with one or more of the validated risk factors for EONS (including positive or unknown maternal GBS status, chorioamnionitis, rupture of membranes > 18 hours, and gestational age < 37 weeks), and who had a blood culture obtained for the evaluation of EONS. Out-born infants were excluded from Phase 1. We retrospectively identified from the EMR inborn infants admitted to the NBN over a period of 1 year (from June 2016 to May 2017) who met Phase 1 inclusion criteria. We ensured that infants in this cohort had one or more conditions that the CDC/AAP had incorporated into their algorithm as risk factors for EONS, and that a blood culture had been sent for evaluation of EONS. We then applied the SRC for evaluation and management of EONS to the same infant records and compared the SRC recommendations with the recommendations that were originally used in the management of these infants (based on 2010 CDC/AAP guidelines). In doing so, we captured information on the actual and the SRC-recommended management, including antibiotic usage, lab workup, or clinical observation, and identified all infants with positive blood cultures. Data from Phase 1 were obtained and analyzed. An evidence-based consensus was obtained for the evaluation and management of EONS in Phase 2 after several deliberations to achieve buy-in from all stakeholders, including neonatologists, pediatric hospitalists, and private pediatricians who admitted infants to our NBN, as well as from nurses and leadership. This was followed by multiple educational sessions for all providers and nurses before implementation of Phase 2. Phase 2: Prospective application of the hybrid SRC. Results from Phase 1 informed Phase 2 to establish our new institutional EONS protocol. Given our hospital resources and our unique population characteristics, we made a detailed assessment of Phase 1 results, anticipating that we might either fully adopt the SRC or, alternatively, create a hybrid regimen for E&M that would better serve our high-risk population and address any safety concerns identified in Phase 1. We then carried out the observational portion of our study and prospectively evaluated an adapted SRC protocol over a six-month period (from August 2018 to January 2019).
This adapted protocol was used for all infants that met inclusion criteria for Phase 2, which were: inborn infants admitted to the NBN, born at a gestational age of ≥ 35 weeks, with one or more of the validated risk factors for EONS. We excluded out-born infants from Phase 2. The recommendations for management obtained from the SRC were noted, which included either routine observation; close clinical monitoring with more frequent clinical evaluations and vital sign monitoring; obtaining lab work without initiating antibiotics; or starting empiric antibiotics after obtaining lab work. Additionally, we were mindful to note observed changes in the recommendation for management by the SRC for infants that had a change in clinical status without validated risk factors for sepsis. We used EPIC to extract maternal and infant demographic data and laboratory results pertaining to the evaluation for sepsis. We completed an assessment of each infant's clinical condition based on the assessment documented in the daily progress notes by the caregiver team. Data on each infant were entered into the SRC using Kaiser Permanente's open-access web application (https://neonatalsepsiscalculator.kaiserpermanente.org/). All data obtained were stored in the pediatric department's double password-protected research drive with access granted only to the study team members. Data were summarized using counts and percentages (categorical variables) and means, standard deviations, medians, and quartiles (continuous variables). We compared actual and SRC-recommended rates of antibiotic use using McNemar's test for paired data. We compared continuous data using Wilcoxon rank sum tests, and categorical data using Fisher's exact test. In all cases, the level of significance was set at 0.05. All analyses were done in SAS® for Windows Version 9.4.
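The paired comparison of actual versus SRC-recommended antibiotic use can be reproduced with McNemar's test; a minimal sketch in Python, with statsmodels standing in for SAS and a hypothetical 2x2 table of concordant/discordant decisions:

```python
# Minimal sketch of McNemar's test for paired antibiotic-use decisions
# (actual protocol vs. SRC recommendation); the counts are hypothetical.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

#                  SRC: treat   SRC: observe
# actual: treat        20            78     <- discordant pairs drive the test
# actual: observe       2            75
table = np.array([[20, 78],
                  [2, 75]])

result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(result.statistic, result.pvalue)  # p < 0.05: paired rates differ
```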
Phase 1: retrospective study. Of 3215 total live births at UF Health from June 1, 2016, to May 31, 2017, 2850 infants were born at a gestational age ≥ 35 weeks. In accordance with the 2010 CDC/AAP guidelines, 175 infants admitted to the newborn nursery had a blood culture obtained, and 98 (56%) of these were started on antibiotics at some time during the newborn nursery hospitalization. Characteristics of infants and mothers are summarized in the table. On superimposition of the SRC on the portion of the infant cohort that had laboratory workup and/or were started on antibiotics in Phase 1, the SRC recommended laboratory evaluation and initiation of antibiotics in significantly fewer infants. In other words, the SRC would have recommended observation without empiric antibiotics in 87% of these infants. We found no difference in white blood cell (WBC) count between infants who received and who did not receive antibiotics (p = 0.94), nor between those with positive and negative blood cultures. Phase 2: prospective use of a modified approach (hybrid SRC) to the evaluation and management of EONS. From June 2017 through July 2018, we abstracted and analyzed Phase 1 data to develop an evidence-based consensus for the evaluation and management of EONS in Phase 2. Based on the Phase 1 incidents in which two well-appearing infants with positive blood cultures failed to elicit SRC recommendations to perform laboratory evaluation and treatment, our modified approach required laboratory evaluation (blood culture) for all infants born to mothers with chorioamnionitis, irrespective of SRC recommendations, while following SRC recommendations for initiation of empiric antibiotics. SRC recommendations for evaluation and management were adopted for all other infants.
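The hybrid rule just described can be summarized as a small decision function; a sketch in Python (the field names and SRC recommendation strings are illustrative placeholders, not the calculator's actual API):

```python
# Sketch of the hybrid Phase 2 decision rule described above; the SRC
# recommendation strings and record fields are illustrative only.
def hybrid_eons_plan(chorioamnionitis: bool, src_recommendation: str) -> dict:
    plan = {
        "blood_culture": False,
        "empiric_antibiotics": False,
        "monitoring": "routine observation",
    }
    # Follow the SRC for antibiotics and monitoring in all infants.
    if src_recommendation == "empiric antibiotics":
        plan.update(blood_culture=True, empiric_antibiotics=True)
    elif src_recommendation == "blood culture":
        plan["blood_culture"] = True
    elif src_recommendation == "strengthened vitals":
        plan["monitoring"] = "close clinical monitoring"
    # Hybrid modification: chorioamnionitis-exposed infants always get a
    # blood culture, irrespective of the SRC recommendation.
    if chorioamnionitis:
        plan["blood_culture"] = True
    return plan

# Example: a well-appearing, chorioamnionitis-exposed infant whom the SRC
# would assign to routine observation still gets a culture drawn.
print(hybrid_eons_plan(chorioamnionitis=True, src_recommendation="routine care"))
```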
Each of the three approaches to initiation of treatment includes careful serial clinical assessments. Owing to the development of clinical signs of sepsis after the birth transition, 8% of infants in Phase 1 and 1% of infants in Phase 2 had blood cultures drawn after 12 hours of life with subsequent initiation of antibiotics. In deciding on an approach to the evaluation and management of EONS, it is important to consider individual hospital resources, unique population factors, workflow constraints, and overall safety. It is equally important to be aware of each unit's baseline EONS risk and to track outcome data to prevent oversights and safety mishaps.

Our modified approach led to a safe reduction in antibiotic utilization in Phase 2 to 4%, from what would have been a rate of 12.6% had we strictly applied the 2010 CDC/AAP guideline. This outcome is in accord with the decreased antibiotic usage for healthy late preterm and term infants that other centers have reported following adoption of the SRC [17,26,27].

Regardless of the approach adopted by a center, there will always be a possibility of missing infants with EONS. Some infants will develop EONS when a multivariate assessment defines low risk; other infants may develop EONS that is not treated promptly because of a lack of timely recognition of clinical signs of sepsis. Every predictive model is subject to error.

The major strengths of our study are the high-risk status of the study population and the sequential, data-driven approach to achieving consensus on a modified approach to the evaluation and management of EONS that was highly likely to reduce antibiotic utilization safely. However, our study has some limitations. Phase 1 was a retrospective study with the additional complexity of investigators having to make post hoc inferences about the clinical status of each infant based on archived daily progress notes. During Phase 2, no infant developed EONS; hence, we could not evaluate whether treatment was timely. However, the modified approach did guarantee that evaluation was accomplished in a timeframe equal to or faster than would have occurred based solely on the SRC.

Given that any prediction model is imperfect, serial clinical examinations clearly increase safety. Although reducing unnecessary use of antibiotics was the primary aim of our antibiotic stewardship efforts, we aimed to balance this against the safety concerns of our population. After each change in clinical practice, it is imperative that clinicians closely track and review outcomes and adverse events and adjust clinical protocols accordingly. Based on the outcomes and safety data from our study (Phases 1 and 2) and our audit results, starting January 2020 our unit successfully transitioned to fully adopting the SRC for the evaluation and management of EONS in all infants admitted to our newborn nursery, without obtaining additional blood cultures in high-risk infants exposed to maternal chorioamnionitis, and continues to perform six-month audits to evaluate outcomes and safety.

We developed a novel hybrid approach to the evaluation and management (E&M) of neonates at increased risk of EONS by tailoring sepsis risk calculator recommendations to our safety-net population. Our stewardship efforts achieved a safe and significant reduction in antibiotic usage compared with our prior newborn nursery protocol based on recommendations from the 2010 CDC/AAP guidelines.
More studies are needed from different centers adopting any of the three approaches outlined in the 2018 CDC/AAP EONS policy update, suited to their unique patient demographics and available hospital resources. Regardless of the approach adopted, frequent audits to monitor outcomes and adjust clinical protocols accordingly are an essential part of any change in clinical practice.