diff --git "a/deduped/dedup_0190.jsonl" "b/deduped/dedup_0190.jsonl" new file mode 100644--- /dev/null +++ "b/deduped/dedup_0190.jsonl" @@ -0,0 +1,37 @@ +{"text": "A prime requirement in any controlled study is that as far as possible, all conditions apart from the one being tested should be the same.In the Auvert study, the men from the intervention group were instructed, in effect, as follows: \u201cWhen you are circumcised you will be asked to have no sexual contact in the six weeks after surgery. To have sexual contact before the skin of your penis is completely healed could lead to infection if your partner is infected with a sexually transmitted disease. It could also be painful and lead to bleeding. If you desire to have sexual contact in the six weeks after surgery, despite our recommendation, it is absolutely essential that you use a condom\u201d (Text S3 in ).So the men in the intervention group were given very different instructions about sexual behaviour than those in the control group\u2014in precisely the field where their risk of HIV infection was most affected. This could have differentially affected their sexual behaviour, and perhaps how they reported it. The time they spent waiting for and recovering from their surgery could also have exposed them to more safe-sex information and influence than the control group.The control group was given no medical intervention at all. The study would have come closer to reaching equivalence between the two groups if a placebo surgery had been performed on the penis, such as opening and suturing an annular incision on the shaft, but leaving the foreskin, the supposed portal of HIV infection. The control group would then have needed identical instructions to those given to the intervention group; then, the two groups would have had much more equivalent risk."} +{"text": "We also performed molecular characterisation, including DNA fingerprinting analysis and abnormalities of K-ras, p15, p16, p53, hMLH1,hMSH2, DPC4, \u03b2-catenin, E-cadherin, hOGG1, STK11, and TGF-\u03b2RII genes by PCR\u2013SSCP and sequencing analysis. In addition, we compared the genetic alterations in tumour cell lines and their corresponding tumour tissues. All lines grew as adherent cells. Population doubling times varied from 48\u201372\u2009h. The culture success rate was 20% (six out of 30 attempts). All cell lines showed (i) relatively high viability; (ii) absence of mycoplasma or bacteria contamination; and (iii) genetic heterogeneity by DNA fingerprinting analysis. Among the lines, three lines had p53 mutations; and homozygous deletions in both p16 and p15 genes were found three and three lines, respectively; one line had a heterozygous missense mutation in hMLH1; E-cadherin gene was hypermethylated in two lines. Since the establishment of biliary tract cancer cell lines has been rarely reported in the literature, these newly established and well characterised biliary tract cancer cell lines would be very useful for studying the biology of biliary tract cancers, particularly those related to hypermethylation of E-cadherin gene in biliary tract cancer.Human cell lines established from biliary tract cancers are rare, and only five have been reported previously. We report the characterisation of six new six biliary tract cancer cell lines established from primary tumour samples of Korean patients. 
The cell lines were isolated from two extrahepatic bile duct cancers , two adenocarcinomas of ampulla of Vater, one intrahepatic bile duct cancer (cholangiocarcinoma), and one adenocarcinoma of the gall bladder. The cell phenotypes, including the histopathology of the primary tumours and British Journal of Cancer (2002) 37, 187\u2013193. doi:10.1038/sj.bjc.6600440www.bjcancer.comCancer Research UK\u00a9 2002 We also checked genetic alterations of K-ras, p15, p16, p53, hMLH1,hMSH2, DPC4, STK11, E-cadherin, hOGG1, TGF-\u03b2RII genes and compared the genetic alterations in tumour cell lines and their corresponding tumour tissues. In these biliary tract cancer cell lines, the methylation status of promoter region in E-cadherin gene was also investigated by 5-aza-2\u2032-deoxycytidine treatment and methylation specific-polymerase chain reaction (MS-PCR) after sodium bisulphite treatment.The prognosis of patients with biliary tract cancer is poor despite recent advances in diagnostic and therapeutic techniques and SNU-1 cell line from Korean Cell Line Bank (KCLB) were used for PCR controls. For growth properties and morphology study in vitro, population doubling times and cell viability were determined and cells grown on 75-cm2 culture flasks were observed daily by phase-contrast microscopy . After eGenomic DNA and total RNA were isolated from washed-cell pellets. cDNA was synthesised according to the manufacturer's specifications using 2\u2009\u03bcg of total RNA. To compare the genetic alterations between established tumour cell lines and their corresponding tumour tissues, we obtained DNAs from microdissected tumour cells and stromal cells in H&E stained slides of corresponding tumour tissues of each tumour cell line. Approximately 500\u20131000 dissected tumour or stromal cells were digested using proteinase K method described previously analysis . A minimum of 10 individual clones were then pooled and used for DNA isolation. Bi-directional DNA sequencing analysis performed by using dideoxy chain termination method with a T7 DNA polymerase sequencing kit , or directly sequenced using a Taq dideoxy terminator cycle sequencing kit on an ABI 377 DNA sequencer .Population doubling times ranged 48 to 72\u2009h, and cell viability ranged from 85 to 94% showed that six biliary tract carcinoma cell lines are unique and unrelated at 12\u2009bp downstream . By based deletion analysis of each gene, we found that exon 1 of p15 gene was not amplified in three cell lines and exon 2 of this gene was not amplified in two cell lines (SNU-478 and SNU-119). We also found that exons 1 and 2 of p16 gene were not amplified in three cell lines .In -catenin, DPC4, STK11, and TGF-\u03b2RII genes by PCR-SSCP analysis. In hOGG1 gene, 4 lines were found to harbour abnormal band shift bands in exon 5. By the direct sequencing of DNA fragments corresponding to shifted bands, a C\u2192G nucleotide change at \u221215\u2009bp from exon 5 was found in all four lines cell lines showed absence of expression cell lines were available. Tumour cells and stromal cells were dissected, respectively, and DNAs were extracted from these samples. Genetic alterations in DNA of tumour cells were identical to those that had found in tumour cell lines and there were no mutations in constitutional DNAs (data not shown).Advances in cell culture methods have made it possible to establish a variety of human carcinoma cell lines from surgical and autopsy tissues, peritoneum effusion, and biopsy specimens . 
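A minimal sketch of the population doubling time calculation used in such growth-property assays, assuming exponential growth between two counts; this is not the authors' code, and the cell counts and interval below are hypothetical placeholders chosen only to land inside the reported 48-72 h range.

import math

def population_doubling_time(n_start: float, n_end: float, hours_elapsed: float) -> float:
    """Doubling time under exponential growth: PDT = t * ln(2) / ln(N_end / N_start)."""
    if n_end <= n_start:
        raise ValueError("culture did not grow over the interval")
    return hours_elapsed * math.log(2) / math.log(n_end / n_start)

# Hypothetical counts from one flask observed 72 h apart.
print(round(population_doubling_time(2.0e5, 5.0e5, 72.0), 1))  # -> 54.5 h, inside the reported 48-72 h range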
In SNU-1079, exon 1 of p15 gene was deleted and exon 2 was intact. These results indicate that p15 and p16 are homozygously deleted preferentially in the region neighbouring the two genes in biliary tract cancer cell lines. Kim et al (2001) reported that 30.7% of this gene was mutated in gall bladder carcinomas from Korean. Moreover, the high frequency of deletions of p15 and p16 in cell lines indicate the possibility of these genes functioning as tumour suppressor genes in biliary tract cancer as described previously (hMLH1 gene in one cell line (SNU-478). We screened the microsatellite instability (MSI) status in this line using several MSI markers such as BAT-25, BAT-26 and BAT-40 which were used as a surrogate to indicate MSI status, but failed to find evidence of MSI . In addition, we did not find genetic alterations in TGF-\u03b2RII gene by PCR-SSCP and RT\u2013PCR analysis, although genetic alterations of the TGF-\u03b2RII gene in biliary tract cancers have been reported in some (DPC4 and TGF-\u03b2RII genes in biliary tract cancers in Korean.A significant number of human carcinomas and cancer cell lines lose sensitivity to negative growth regulation by transforming growth factor \u03b2 (TGF-\u03b2) by RT\u2013PCR analysis analysis (E-cadherin gene in biliary tract cancers.analysis . After dll lines . Hypermeanalysis . These rE-cadherin gene in biliary tract cancers.Since the establishment of biliary tract cancer cell lines has been rarely reported in the literature, these well-characterised biliary tract cancer cell lines will be useful tools for investigating the biological characteristics of biliary tract cancer, especially those related to the hypermethylation of"} +{"text": "Alterations in epigenetic marks, including methylation or acetylation, are common in human cancers. For many epigenetic pathways, however, direct measures of activity are unknown, making their role in various cancers difficult to assess. Gene expression signatures facilitate the examination of patterns of epigenetic pathway activation across and within human cancer types allowing better understanding of the relationships between these pathways.We used Bayesian regression to generate gene expression signatures from normal epithelial cells before and after epigenetic pathway activation. Signatures were applied to datasets from TCGA, GEO, CaArray, ArrayExpress, and the cancer cell line encyclopedia. For TCGA data, signature results were correlated with copy number variation and DNA methylation changes. GSEA was used to identify biologic pathways related to the signatures.enhancer of zeste homolog 2(EZH2), histone deacetylase(HDAC) 1, HDAC4, sirtuin 1(SIRT1), and DNA methyltransferase 2(DNMT2). By applying these signatures to data from cancer cell lines and tumors in large public repositories, we identify those cancers that have the highest and lowest activation of each of these pathways. Highest EZH2 activation is seen in neuroblastoma, hepatocellular carcinoma, small cell lung cancer, and melanoma, while highest HDAC activity is seen in pharyngeal cancer, kidney cancer, and pancreatic cancer. Across all datasets studied, activation of both EZH2 and HDAC4 is significantly underrepresented. Using breast cancer and glioblastoma as examples to examine intrinsic subtypes of particular cancers, EZH2 activation was highest in luminal breast cancers and proneural glioblastomas, while HDAC4 activation was highest in basal breast cancer and mesenchymal glioblastoma. 
EZH2 and HDAC4 activation are associated with particular chromosome abnormalities: EZH2 activation with aberrations in genes from the TGF and phosphatidylinositol pathways and HDAC4 activation with aberrations in inflammatory and chemokine related genes.We developed and validated signatures reflecting downstream effects of Gene expression patterns can reveal the activation level of epigenetic pathways. Epigenetic pathways define biologically relevant subsets of human cancers. EZH2 activation and HDAC4 activation correlate with growth factor signaling and inflammation, respectively, and represent two distinct states for cancer cells. This understanding may allow us to identify targetable drivers in these cancer subsets. Epigenetic changes beyond DNA methylation have been recently recognized as important in human cancers . These eHistone acetylation and methylation are altered in multiple cancers, usually with increased histone deacetylation and methylation . Two HDAIn this study, we use gene expression patterns to explore the activation of various epigenetic pathways across human cancers. We capture the acute downstream consequences of gene deregulation by isolating RNA directly after a given pathway has been activated and then performing gene expression analysis. We use mRNA to measure the acute changes in gene transcription, which integrates all of the signaling effects of an enzyme. For epigenetic enzymes, these effects can include modification of both histones and other proteins by acetylation, methylation and phosphorylation. Coupling of the signaling pathways to transcriptional responses is a sensitive and accurate reflection of overall pathway activity . We deveWe used human mammary epithelial cell (HMEC) cultures to develop the epigenetic pathway signatures, as these cells have been used previously to generate robust pathway signatures that are accurate across tissue and cancer types . The HMEBefore statistical modeling, gene expression data were filtered to exclude probe sets with signals present at low levels and for probe sets that did not vary significantly across samples. A Bayesian binary regression algorithm was then used to generate multigene signatures that distinguish activated cells from controls . Detailein silico validation analysis was performed using external and independently generated datasets with known pathway activation status based on biochemical measurements of protein knockdown , inhibitor treatment , or activator treatment . A pathway signature\u2019s ability to correctly predict pathway status in these datasets was used to validate the accuracy of the genomic model.To validate pathway signatures, two types of analyses were performed. First, a leave-one-out cross validation (LOOCV) was used to confirm the robustness of each signature to distinguish between the two phenotypic states,GFP versus pathway activation. Model parameters were chosen to optimize the LOOCV and then fixed. Secondly, an Publically available datasets from Gene Expression Omnibus (GEO) and Arrahttp://io.genetics.utah.edu/files/bildres/Epigenetics/. All pathway analyses were performed in R version 2.7.2 or MATLAB. Survival analyses were performed using Cox proportional hazards regression with pathway activation as a continuous variable (http://www.statpages.org/prophaz.html).The statistical methods used here to develop gene expression signatures of pathway activity have been previously described and are http://www.broadinstitute.org/gsea) . Br. Br4]. 
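A minimal sketch of the leave-one-out cross-validation (LOOCV) step described in the methods above, under stated assumptions: the authors' Bayesian binary regression is replaced here by an ordinary regularized logistic regression from scikit-learn, and the expression matrix is random placeholder data rather than the filtered HMEC probe sets, so the sketch illustrates the validation protocol only.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
# Placeholder data: 12 arrays (6 GFP controls, 6 pathway-activated), 500 filtered probe sets.
X = rng.normal(size=(12, 500))
y = np.array([0] * 6 + [1] * 6)  # 0 = GFP control, 1 = pathway activation

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)  # stand-in for the Bayesian regression
    clf.fit(X[train_idx], y[train_idx])
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

print(f"LOOCV accuracy: {correct / len(y):.2f}")  # with random placeholder data this hovers around chance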
BIn order to investigate epigenetic signaling pathways in cancer, we created a panel of gene expression signatures that model histone methylation (EZH2 signature), histone deacetylation by class 1 (HDAC1 signature), class 2 (HDAC4 signature), and class 3 (SIRT1 signature) histone deacetylases, and RNA methylation (DNMT2 signature). . Over 40 different cancer types are represented, enabling comparisons across cancer type. In all analyses, pathway predictions for replicate samples were averaged. Some cancer types have wide variation in pathway activation, while others have more consistency within cancer type. Strikingly, cancer types with high EZH2 activation consistently also have low HDAC4 activation . This pattern of mutually exclusive and inverse pathway activity was confirmed in a larger dataset of over 900 cell lines from the Cancer Cell Line Encyclopedia [We first examined the pattern of epigenetic pathway activation across two independent panels of cancer cell lines Figures\u00a0A- D. The\u20090.0001) . SpecifiMany of our cell line results are consistent with published research. For example, neuroblastoma has been shown to have high EZH2 activity and to rely on this activity for survival ,14. In aTo investigate pathway activity in actual patient tumors, we then projected the signatures onto a dataset of primary tumor and normal samples and other public tumor datasets. Breast cancer subtypes have been well described ,33. GlioAlthough initially our results may seem to contradict other reports that EZH2 is overexpressed in basal breast cancers compared to luminal cancers, there are areas of agreement ,37. EZH2-5, 7.6\u2009\u00d7\u200910-3, 7.97\u2009\u00d7\u200910-19, and 8.04\u2009\u00d7\u200910-22, respectively). DNMT2 activation was relatively lower in the Mesenchymal and Neural subtypes compared to the others (p\u2009=\u20092.8\u2009\u00d7\u200910-7). Of those GBMs with high EZH2 and high HDAC1 activation, 58% are Proneural, while 73% of GBM with high HDAC4 and SIRT1 activation are Mesenchymal. Although these pathways have not been assessed directly within GBM subtypes before, our results are consistent with the finding that EZH2 expression is highest in secondary GBM, which tend to be Proneural, rather than primary GBM [Similarly, epigenetic pathway activation varied among GBM subtypes Figure\u00a0B. Again,mary GBM .To assess the potential clinical significance of epigenetic pathway activation, we assessed whether EZH2 activation or HDAC4 activation predicted prognosis in our metadataset of breast cancer or TCGA data of GBM. EZH2 activation was prognostic in neither cancer. HDAC4 activation was not prognostic in breast cancer overall, but higher HDAC4 activation predicted better prognosis when looking within the HER2-enriched and luminal B subtypes . . This exclusion is consistent across cancers of all types, locations, and stages. This relationship is not simply a mathematical artifact of the formulas for the two signatures because it is not seen when the signatures are applied to non-biologically meaningful samples, such as microarrays run on degraded RNA . Representative loci are shown in Figure\u00a0http://gather.genome.duke.edu) to assess GO and KEGG pathways. Thus, the GSEA results matched the copy-number results, indicating that HDAC4 activation and EZH2 inactivation are associated with increased activation of cytokine and immune-related pathways. These connections between HDAC4 activation and inflammatory cytokines match the cancer subtype results. 
For example, basal breast cancers, which we found to have high HDAC4 activation, are known to have higher levels of tumor-infiltrating macrophages and higher chemokine receptor expression than luminal cancers [In addition to leveraging copy number data, we applied GSEA to the gene-expression data used to generate the EZH2 and HDAC4 signatures to identify pathways associated with either EZH2 activation or HDAC4 activation in the signature samples. EZH2 activation was associated with TGF-beta signaling, phosphatidylinositol binding, and negative regulation of MAPK Figure\u00a0C. HDAC4 cancers ,44. Mese cancers . Alterna cancers .Lastly, we used DNA methylation data to investigate further the differences between EZH2 high/HDAC4 low and EZH2 low/HDAC4 high tumors. We identified genes that are differentially methylated between the two groups in the TCGA GBM and breast datasets. With a false discovery rate less than 5%, gene ontology analysis showed that genes with decreased methylation in EZH2 low/HDAC4 high GBM were enriched for T-cell activation (Bayes Factor 5.5). In breast cancer, EZH2 high/HDAC4 low had increased methylation of TNFRSF10D, a stimulator of inflammatory pathways including NF-\u03baB. Thus, the methylation data also show that expression of genes in inflammatory signaling pathways is higher in tumors with high HDAC4 activation than in tumors with high EZH2 activation.Using genome-wide gene expression signatures, we have mapped patterns of epigenetic pathway activation in large panels of tumors, enabling discrimination of patterns across and within cancer phenotypes. Looking broadly across all cancers, our results highlight that EZH2 is active in more primitive cancers of childhood, and HDAC4 is active in more mature adenocarcinomas and squamous cell carcinomas. Our analysis indicates two distinct and mutually exclusive types of cancers, one associated with a gene expression pattern of EZH2 activation and tyrosine kinase signaling and the other with HDAC4 activation with increased cytokine signaling and immune cell infiltration. Looking within cancers, epigenetic pathways highlight differences between subtypes of a cancer and similarities between subtypes of different cancers. In particular, EZH2 activation is seen in luminal breast cancers and proneural GBM, while HDAC4 activation is seen in basal breast cancers and mesenchymal GBM. These results raise the possibility for a histology-independent categorization of cancers using epigenetic pathways. Further studies are needed to elucidate the mechanisms for the mutual exclusiveness of EZH2 and HDAC4 and to determine therapeutic targets for the distinct epigenetic-specific cancer phenotypes.HAT: Histone acetyltransferase; HDAC: Histone deacetylase; EZH2: Enhancer of zeste homolog 2; NSCLC: Non-small cell lung cancer; GBM: Glioblastoma; HMEC: Human mammary epithelial cell; GFP: Green fluorescent protein; SVD: Singular value decomposition; LOOCV: Leave-one-out cross-validation; GEO: Gene expression omnibus; GSEA: Gene set enrichment analysis; TCGA: The cancer genome atlas; GO: Gene ontology; GSK: Glaxo-Smith-Kline; CCLE: Cancer cell line encyclopedia; SCLC: Small cell lung cancer.The author\u2019s declare that they have no competing interests.AC performed all bioinformatic analyses, obtained and analyzed datasets, and drafted the manuscript. SP provided software, assisted with analysis, and helped with the manuscript. LC performed western blots. RS performed cell culture, prepared viruses, and confirmed virus transfection. 
BH performed analysis of methylation data from TCGA. WEJ supervised the methylation analysis, performed some statistical analyses, and helped with the manuscript. AHB conceived of the study, prepared the cells and viruses for the signature, assisted with data interpretation, and helped draft the manuscript. All authors read and approved the final manuscript.The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1755-8794/6/35/prepubWestern blot of HMECs infected with viruses expressing epigenetic pathway proteins.Click here for fileSupplementary methods with detailed instructions for running pathway predictions.Click here for fileDescription of all publically available datasets used.Click here for fileAdditional external in silico validation graphs for the EZH2 signature using publicallyavailable data.Click here for fileGene lists for the five epigenetic pathway signatures.Click here for fileGraph showing the lack of correlation between each of the 5 epigenetic pathway signatres and proliferation, as measured by doubling time, in a panel of breast cancer cell lines.Click here for fileEpigenetic pathway predictions for HDAC1, DNMT2, and SIRT1 in (A) GSK and (B) CCLE cell line collections.Click here for fileTable of correlation coefficients for the 5 epigenetic pathway signatures within individual cancer types.Click here for fileGraph of EZH2 and HDAC4 activation in samples obtained from autopsies on the brains of people with Parkinson\u2019s, showing a frequency of coactivation not seen in any dataset of samples from living people.Click here for fileCopy number variations in the TCGA (A) breast cancer and (B) glioblastoma samples. Each point on the x-axis is a gene. The y-axis gives the average of the log2 ratio of tumor to normal copy number.Click here for file"} +{"text": "The coverage formed by reads mapped uniquely, after artifact filtering How accessible is the region, determined by the fold-enrichment of the DHS and (2) How protected is the sequence where a transcription factor is binding (depth of the footprint). Therefore, the utilization of a ChIP-seq peak finder does not completely fit the patterns formed in a DNase-seq assay. However, due to the lack of well-established algorithms to handle DNase-seq data, popular ChIP-seq peak finders are used instead to pinpoint DHSs of the regions of interest, both for footprints or DHSs, the combination with other genomic data sources can unravel a plethora of novel biological insights. 
DHSs have a positive correlation with active histone marks, whereas the correlation is negative for repressive histone marks, and DHS scores are higher for active genes than for silent ones for each TF, which makes DNase-seq, if we consider the current state-of-the-art, a complementary tool to ChIP-seq rather than an independent assay to determine TF-binding sites genome-wide. The correlation between gene expression and active and repressive histone marks has revealed four distinct modes of chromatin structure in humans, further invalidating the simplistic assumption that chromatin can only be in an \u201copen\u201d or \u201cclosed\u201d conformation (Shu et al., New open questions should redirect the efforts to adapt each methodology to fruitfully map chromatin accessibility by DNase-seq, from the former stages of getting significant broad DNase I hypersensitive regions or narrow footprints, to the latter steps that include the differential assessment of chromatin accessibility changes and the correlation with other available genomic data. The question of whether DNase-seq will eventually serve as a substitute for ChIP-seq, and to what extent, will be unraveled in the upcoming years."} +{"text": "To study the utility of Computed Tomography (CT) in patients with Minor Head Injury (MHI) with respect to certain clinical findings. This descriptive, observational study was conducted at JIPMER, Puducherry, India. All cases of Minor Head Injury (MHI) with a Glasgow Coma Scale (GCS) score of \u226513 who attended the Emergency Department (ED) during the period of 9th September to 30th September were included and the results were analysed using SPSS version 16. Of the 132 cases referred for CT brain, 109 had a GCS score of 13 or more on initial evaluation. Among 109 cases with MHI, 90 were males and 64 (58.7%) were in the age group of 14-44 years. 78 cases were Road Traffic Accident victims, 17 were assaulted and 14 had history of fall. Twenty-six (23.9%) had abnormal CT findings. Skull fracture was the commonest finding, followed by contusion and haemorrhage (EDH/SDH/ICH). The logistic regression analysis showed that Loss of Consciousness (LOC) or amnesia (p = 0.045) and female sex (p = 0.048) were associated with abnormal CT findings. Single centre study, lack of assessment of all associated variables and limited sample size were the main limitations of the study. A higher proportion of abnormal CT scans related to trauma after minor head injury in this study highlights the need for promotion of safety measures in such risk groups. Abnormal CT scans related to trauma after MHI can be predicted by the presence of certain risk factors for the same."} +{"text": "According to the fear-then-relief technique of social influence, people who experience anxiety whose source is abruptly withdrawn usually respond positively to various requests and commands addressed to them. This effect is usually explained by the fact that fear invokes a specific program of action, and that when the source of this emotion is suddenly and unexpectedly removed, the program is no longer operative, but the person has not yet invoked a new program. This specific state of disorientation makes compliance more likely. In this paper, an alternative explanation of the fear-then-relief effect is offered. It is assumed that the rapid change of emotions is associated with feelings of uncertainty and confusion. 
The positive response to the request is a form of coping with uncertainty. In line with this reasoning, while individuals with a high need for closure (NFC) should comply with a request after a fear-then-relief situation, low NFC individuals who are less threatened by uncertainty should not. This assumption was confirmed in the experiment. The literature on persuasion and compliance provides descriptions of various procedures increasing the likelihood of compliance see: . In suchIn the present paper, however, we would like to offer another cognitive-motivational framework for the fear-then-relief phenomenon, which is complementary rather than competitive to that presented above. It is based on the assumption that the rapid change of emotions is connected with feelings of uncertainty and confusion. The positive response to the request is a form of coping with uncertainty see , 2009. IAccording to Kruglanski\u2019s Lay epistemology theory , NFC is When applying the explanation of NFC to the fear-then-relief phenomenon, it can be suggested that compliance with the request frees individuals from the need to validate how they want to respond to the request. This in turn extends the uncertainty experienced in the situation. Therefore it is expected that participants\u2019 NFC levels would moderate the effect of fear-then-relief. While individuals with high NFC will comply with a request after a fear-then-relief situation, low-NFC individuals who are less threatened by uncertainty will not.M = 22.95; SD = 6.63) were randomly assigned to one of the two conditions: control and experimental (fear-then-relief).A group of 120 university students and then stated that it was not they who had lost the wallet. The confederate thus continued: \u201cOh, I\u2019ll have to take it to the University Information Desk then\u2026 but as we are already talking\u2026 I am a member of a students\u2019 committee organizing the celebrations of the 20th anniversary of our University. Would you agree to help us organize the events?\u201d If the answer was positive, the confederate continued: \u201chow many hours of activity do you declare, more or less?\u201dTwo of the participants did not respond in accordance with the assumed scenario, and in response to the confederate\u2019s question \u201cHaven\u2019t you lost your wallet?\u201d calmly responded that they had not, without betraying any sign of disquiet. In addition, during the post-experiment debriefing two other individuals (men) declared their suspicions that the episode with the wallet was not an accident, but rather an element in the experiment. The results of all these participants were excluded and four additional individuals were included in the study.In the control condition the confederate entered the room, excused herself for causing the interruption and asked the participant for assistance in organizing events during the anniversary celebrations.The participant\u2019s consent to help in the organization of the anniversary celebrations and the number of hours of activity declared were treated as dependent variables.At the very beginning of the experiment, just after arriving at the laboratory, study participants signed a declaration stating that they expressed their informed consent to participation in a psychological experiment which would measure certain individual character traits, as well as reactions to various events . Immediately after the experiment, each study participant went through a thorough debriefing process. 
None of them expressed any reservations concerning the course of the experiment. The design and the experiment conditions for the study were approved by the local University of Social Science and Humanities Ethics Committee in accordance with the Helsinki Declaration.Table 1 shows that in the first step only the manipulation had a significant effect on compliance. The interaction in the second step was of marginal significance. In addition, we performed a regression analysis with the same independent variables on the second dependent variable (number of hours). Table 2 shows that in the first step only the manipulation (fear-then-relief vs. control condition) achieved significance. In the second step, the interaction term contributed significantly to the explanation of the dependent variable. To further explore the source of the interaction two simple slopes were calculated for high and low NFC individuals (one SD above and below the mean). The results show that while for low-NFC individuals the regression coefficient is not significant , it is significant for high-NFC individuals . Thus, the research hypothesis is confirmed. The effect of fear-then-relief on low-NFC individuals was lower than on high-NFC individuals. In fact, the effect of fear-then-relief exists only in the case of high-NFC individuals.To test the hypothesis on compliance, we performed a logistic regression in two steps. In the first step, we introduced the two independent variables (NFC and manipulation), and in the second step the interaction between the two variables. In our experiment, we demonstrated the existence of individual differences in susceptibility to the \u201cfear-then-relief\u201d technique. In respect of compliance measured on an interval scale , the technique was successful in respect of individuals characterized by a high need for cognitive closure, while ineffective toward those with a low need for cognitive closure. This effect expands knowledge on the efficacy of social influence techniques, and particularly that of fear-then-relief. It also constitutes a complementary theoretical interpretation of the effectiveness of that technique compared to the \u201cclassic\u201d one which assumes that the feeling of sudden relief induces a state of mindlessness . The resFrom the perspective of the practical application of fear-then-relief, we may posit the careful hypothesis that it should be particularly successful in situations involving conditions of uncertainty and confusion . It is also worth observing that practitioners of social influence usually do not apply individual techniques, but a chain of various methods, and they modify their selection of persuasive strategies based on the development of the situation e.g., . It woulA few limitations associated with our empirical determinations should be emphasized. Firstly, the conclusion on the role played by need for cognitive closure in compliance with fear-then-relief is based on one study. Secondly, in our study need for cognition was treated as a personality factor. It is not certain whether an analogical structure of results would be achieved by treating need for cognitive closure as a factor of a situational nature. Thirdly, the role of need for cognitive closure was only observed when compliance with the request was treated as an interval variable (number of declared hours of work for the University). For the dichotomous variable (do you agree to participate in organizing a celebration for the University\u2019s 20th anniversary?) 
only the main fear-then-relief effect was clearly noted, and the interaction of this factor with NFC did not achieve statistical significance.Although our results are congruent with the assumption that uncertainty is particularly aversive for people characterized by high need for cognitive closure, and this is precisely why they are susceptible to the fear-then-relief technique, we did not directly examine the mechanism itself. Future experiments should thus aim at exploring the mediational role of experiencing uncertainty. In other studies devoted to the role played by cognitive structuring processes in the fear-then-relief technique, we plan to examine whether an analogical pattern of dependencies is obtained when we manipulate the NFC (rather than measure it with a survey as in the present study). We also intend to examine the role played by another element of cognitive structuring, namely, ability to achieve cognitive structure. From the theoretical perspective, we may expect that differentiated ability should also lead to differentiated reactions to the request that appears when the individual first experiences an unexpected fear, and then unexpected relief (and thus in an unclear and unexpected situation).At the very beginning of the experiment, just after arriving at the laboratory, study participants signed a declaration stating that they expressed their informed consent to participation in a psychological experiment which would measure certain individual character traits, as well as reactions to various events . Immediately after the experiment, each study participant went through a thorough debriefing process. None of them expressed any reservations concerning the course of the experiment. The design and the experiment conditions for the study were approved by the local University of Social Science and Humanities Ethics Committee in accordance with the Helsinki Declaration.DD \u2013 general idea of the research, design of the experiment, and manuscript preparation. BD \u2013 general idea of the research and manuscript preparation. YB-T \u2013 general idea of the research, data computing, and manuscript preparation.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Not only explicit but also implicit memory has considerable influence on our daily life. However, it is still unclear whether explicit and implicit memories are sensitive to individual differences. Here, we investigated how individual perception style correlates with implicit and explicit memory. As a result, we found that not explicit but implicit memory was affected by the perception style: local perception style people more greatly used implicit memory than global perception style people. These results help us to make the new effective application adapting to individual perception style and understand some clinical symptoms such as autistic spectrum disorder. Furthermore, this finding might give us new insight of memory involving consciousness and unconsciousness as well as relationship between implicit/explicit memory and individual perception style. Our physical and mental activities are crucially influenced by the past experiences recorded in memory. Generally, there are two types of memories, explicit and implicit memories. 
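For the fear-then-relief study described above, a minimal sketch of the two-step (hierarchical) logistic regression with an NFC x condition interaction and the +/- 1 SD simple-slope probing; the data frame below is simulated placeholder data, not the original 120-participant dataset, and the variable names are assumptions for illustration only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120  # same group size as the experiment; the values themselves are simulated placeholders
df = pd.DataFrame({
    "nfc": rng.normal(3.5, 0.8, n),        # hypothetical need-for-closure scores
    "condition": rng.integers(0, 2, n),    # 0 = control, 1 = fear-then-relief
})
df["nfc_c"] = df["nfc"] - df["nfc"].mean()  # mean-centre NFC before forming the interaction
latent = -0.5 + 1.0 * df["condition"] + 0.8 * df["condition"] * df["nfc_c"]
df["comply"] = rng.binomial(1, 1 / (1 + np.exp(-latent)))

step1 = smf.logit("comply ~ nfc_c + condition", data=df).fit(disp=False)  # step 1: main effects only
step2 = smf.logit("comply ~ nfc_c * condition", data=df).fit(disp=False)  # step 2: adds NFC x condition
print("pseudo-R2 step 1:", round(step1.prsquared, 3), "| step 2:", round(step2.prsquared, 3))

# Simple slopes of the manipulation at +/- 1 SD of NFC, probed by re-centring NFC.
sd = df["nfc_c"].std()
for label, shift in [("high NFC (+1 SD)", -sd), ("low NFC (-1 SD)", +sd)]:
    fit = smf.logit("comply ~ nfc_c * condition", data=df.assign(nfc_c=df["nfc_c"] + shift)).fit(disp=False)
    print(label, "condition effect b =", round(fit.params["condition"], 2), ", p =", round(fit.pvalues["condition"], 3))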
Explicit memory refers to the memory that involves conscious recollection of information, on the other hand, implicit memory does not depend on conscious recollection \u20133. Not oIt is well known that our perception styles are different with each person. For instance, Bouvet et al. used a Navon figure , a largePsychologists suggest that memory involves three essential aspects of information processing, encoding, storage, and retrieval . EncodinHere, we examined how individual perception style correlates with implicit and explicit memory. To investigate that, we conducted a series of psychological experiments, with two types of memory tests and a Navon task .We first did the memory test for the estimation of implicit or explicit memory, in which participants viewed the video consisting of 17 short videos and answered the questions. In the implicit memory test, we used a free association task for estimating the status of implicit memory: participants were asked to answer an associable word to given words . In the Participants viewed 17 videos of scenery from a car window: We shot those videos in the city streets of Kanagawa prefecture Japan. The average of video length was 31 \u00b1 4.3 secs. Participants were not told that they would take a memory test after watching those videos. The 17 videos were randomly and continuously run for each participant . After viewing those videos, they took the implicit memory test in which participants were asked to tell about an associated word to the given words (e.g. They answered \u201cPeach\u201d to ) with their writings in Japanese . We creaIn order to check the possibility that participant\u2019s answers were from their explicit memory, all participants were asked whether or not they were aware of some sorts of association between the given words and the contents of the videos at the end of the all experiment . Also, wParticipants viewed 17 videos used in the implicit memory experiment, and the experimental procedures were the same as the implicit memory experiment, except that they took the explicit memory test in which participants were asked whether they recognized the words as words/objects in the videos (e.g. They answered \u201cI do/do not remember this\u201d to ). Thirty given words were presented words/objects in the videos, and those words were chosen based on the Target answers in the implicit test, because we would like to assess the same words/objects of implicit and explicit memory as much as possible ,22 also Tables. Each trial began with a presentation of \u201cBig\u201d or \u201cSmall\u201d in Japanese for 1000 msecs. Then, the fixation cross was presented at the center of the screen for 1000 msecs. After the blank for 1000 msecs, a Navon figure was presented for 250 msecs. When \u201cBig\u201d was presented at the beginning of the trial , participants were required to answer the large letter in the Navon figure from the three options , becauseWe calculated the sample size using G*Powers based onEighteen participants were assigned to the implicit memory experiment group. Another eighteen participants took the explicit memory experiment. Participants were randomly assigned to the explicit or implicit memory tests. All participants signed the letter of consent approved by the Ethics Committee of the Tokyo Denki University in compliance with Declaration of Helsinki.First of all, from results of the Navon task, we found 4 participants who had 0 for their perception style scores (3 in implicit group and 1 in explicit group.). 
Therefore, we checked the detailed performance of the Navon task about those four participants. The results show that the mean of the reaction time for each participant was within one standard deviation of the average for each group. In other words, those participants were not significantly slow in responding to ensure accuracy and there was not a speed-accuracy tradeoff. Thus, we concluded that 0 index represents just lack of a strong perceptual bias . On top As a result, significant correlation was found between the performance of the implicit memory test and the index of perception style . The perr = 0.10, p = 0.42). Therefore, containing the responses for Foil items into the performance of the explicit memory test did not crucially affect the present main findings. Additionally, one might think that we should use A-prime values for evaluating results of the explicit memory test because of dealing with the response bias [r = 0.21, p = 0.39, also see In case that including the responses for Foil items (thirty given words that were not presented in the videos) for analysis made the current results of explicit memory test, we also analyzed the responses for only presented words/objects in the videos and calculated the correlation between the performance and the index of perception style. As a result, we did not find any correlation between them .The mean A-prime was 0.79\u00b10.04 ((TIF)Click here for additional data file.S2 Fig(TIF)Click here for additional data file.S1 Table(TIF)Click here for additional data file.S2 Table(TIF)Click here for additional data file.S3 TableWrong level error means that participants selected the large letter when in the local condition or the small letter when in the global condition. Non-present level error means that participants chose the letter that was not presented in the figure (e.g. \u201cS\u201d in the case of (TIF)Click here for additional data file."} +{"text": "Health care providers are often ill prepared to interact about or make acceptable conclusions on complementary and alternative medicine (CAM) despite its widespread use. We explored the knowledge, attitudes, and practices of health care providers regarding CAM.This cross-sectional study was conducted between March 1 and July 31, 2015 among health care providers working mainly in the public sector in Trinidad and Tobago. A 34-item questionnaire was distributed and used for data collection. Questionnaire data were analysed using inferential and binary logistic regression models.Response rate was 60.3% (362/600). Responders were 172 nurses, 77 doctors, 30 pharmacists, and 83 other health care providers of unnamed categories (mainly nursing assistants). Responders were predominantly female (69.1%), Indo-Trinidadian (55.8%), Christian (47.5%), self-claimed \u201cvery religious\u201d (48.3%), and had <5\u00a0years of working experience (40.6%). The prevalence of CAM use was 92.4% for nurses, 64.9% for doctors, 83.3% for pharmacists, and 77.1% for other health care providers. The majority (50\u201375%) reported fair knowledge of herbal, spiritual, alternative, and physical types of CAM, but had no knowledge of energy therapy and therapeutic methods. Sex, ethnicity, and type of health care provider were associated with both personal use and recommendation for the use of CAM. Predictors of CAM use were sex, religion, and type of health care provider; predictors of recommendation for the use of CAM were sex and type of health care provider. 
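For the implicit/explicit memory study above, which reports A-prime values for the explicit recognition test and a Pearson correlation between test performance and the perception-style index, a minimal sketch of both computations; the hit/false-alarm rates and the two vectors below are hypothetical examples, not the study data.

import numpy as np
from scipy.stats import pearsonr

def a_prime(hit_rate: float, fa_rate: float) -> float:
    """Grier's A' non-parametric sensitivity index for an old/new recognition test."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

print(round(a_prime(0.70, 0.15), 2))  # hypothetical hit/false-alarm pair -> about 0.86

# Pearson correlation between a perception-style index and memory-test performance (placeholder vectors).
style_index = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0, -3.0, 1.0, 2.0, -1.0])
performance = np.array([4.0, 5.0, 6.0, 6.0, 8.0, 9.0, 3.0, 7.0, 8.0, 5.0])
r, p = pearsonr(style_index, performance)
print(f"r = {r:.2f}, p = {p:.3f}")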
About half of health care providers (51.4%) and doctors (52%) were likely to ask their patients about CAM and <15% were likely to refer patients to a CAM practitioner. However, health care providers expressed interest in being educated on the subject. Doctors (51.9%) and pharmacists (63.3%) said that combination therapy is superior to conventional medicine alone. Less than 10% said conventional medicine should be used alone. Knowledge about CAM is low among health care providers. The majority engages in using CAM but is reluctant to recommend it. Predictors of CAM use were sex, religion, and profession; predictors of recommendation for the use of CAM were sex and profession. Health care providers feel the future lies in integrative medicine. Advances in conventional health care have led to improvements in morbidity, mortality, and quality of life. However, complementary and alternative medicine (CAM) is still widely used across the globe. CAM is defined as \u201ca group of diverse medical and health care systems, practices, and products that are not generally considered part of conventional medicine\u201d. The gloCAM is of medical interest because of its perceived benefits . AccordiThis cross-sectional study was conducted between March 1, 2015 and July 31, 2015 among all doctors, nurses, pharmacists and other clinical staff of any ethnicity and gender working mainly in the public health sector of one of the five Regional Health Authorities in Trinidad and Tobago and general practitioners working in Trinidad. The inclusion criterion was consent to participate in the study. The data collection instrument was a 34-item self-administered questionnaire that included items related to demographics, personal and recommended use of various CAM therapies; knowledge, referral and recommendation, reasons and influences for prescribing CAM; attitudes and practices towards CAM usage in the present and future. The key demographic variables were sex, marital status, ethnicity, educational level, employment status, religion, religiosity, and type of HCP. The sample size of 600 was determined by methods described by Lwanga et al. and represents the minimum sample size required to estimate the percentage of target population who uses CAM with a 4% margin of error. Descriptive and inferential statistical analysis was performed using SPSS version 20 , Indo-Trinidadian , Christian religious affiliation , self-claimed \u201cvery religious\u201d and had been in practice for less than five years. A total of 600 questionnaires were distributed, and 362 were returned. The overall response rate was 60.3% Table\u00a0. RespondThe prevalence of CAM use was 92.4% (158/172) for nurses, 64.9% (50/77) for doctors, 83.3% (25/30) for pharmacists, and 77.1% (64/83) for other HCPs and recommend CAM the least (26%). On the other hand, a greater proportion of pharmacists initiated discussion on CAM (46.7%) and recommended CAM (50.0%). Doctors (51.9%), nurses (43.0%), pharmacists (63.3%), and other HCPs (43.4%) said that combination therapy is superior to Conventional Medicine alone. HCPs believed that combination therapy increases patient satisfaction and assists in fighting illness Table\u00a0. The sup%, nurses In this study, the overall prevalence of CAM use among HCPs was high (82.3%): nurses (92.4%), pharmacists (83.3%), other HCPs (82.3%), and doctors (64.9%). These are quite high rates considering HCPs\u2019 training in evidence-based medicine. 
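A worked sketch of the sample-size formula of Lwanga et al. referenced in the CAM survey methods above, assuming the most conservative prevalence of p = 0.5 (the authors' exact prevalence assumption is not stated) and the quoted 4% margin of error at 95% confidence.

import math

def sample_size_for_proportion(p: float, margin: float, z: float = 1.96) -> int:
    """Minimum n to estimate a proportion p within +/- margin at 95% confidence (Lwanga & Lemeshow)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Most conservative prevalence assumption p = 0.5 with the 4% margin of error quoted in the methods.
print(sample_size_for_proportion(0.5, 0.04))  # -> 601, i.e. roughly the 600 questionnaires distributed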
However, such prevalence rates are similar to those of other countries with folk medicine tradition: 100% of pharmacy students in Sierra Leone and 51% Despite the high HCP usage Fig.\u00a0, only 26Doctors particularly felt that recommendations should be based on evidence-based guidelines, as in other studies , 45. TheThe majority of doctors (84.4%), nurses (83.1%), and pharmacists (83.3%) felt that medical practitioners should be more educated on CAM. The desire to learn about CAM goes beyond curiosity and information but to acquire the knowledge to treat complications and drug interactions and to cOur study reveals that when confronted by CAM users on medical issues, the majority of doctors 50.6%) and nurses (52.6%) remain neutral or non-committal. This is despite the benefits and importance of communication between patients and doctors . This ma.6% and nAt least 81.8% of doctors, 82.6% of nurses, and 83.3% of pharmacists felt that CAM should be tried and scientifically tested before usage. A substantial percentage of doctors (44.2%) also felt it should be placed in a drug formulary. Most HCPs (doctors (61%), nurses (42.4%), pharmacists (53.3%), and other HCPs (67.1%)) believe that conventional medicine and evidenced based CAM should be integrated. Less than 35% of HCPs felt that the combination of treatment should be at the doctors or patients\u2019 discretion while a small but significant percentage of doctors (10.4%), nurses (3.5%), pharmacists (6.7%) and other HCPs (3.7%) felt CM should be used alone.The patients\u2019 perceived benefits of CAM in holistic care \u201353, longThis study has some limitations. First, the sample was not randomized; however, the questionnaires were distributed widely to nurses, doctors, and pharmacists across different departments and locations. Second, the sample was too small for subgroup analysis. Third, answers depended on memory recall, which may introduce bias. The sample, though comprising a group of experts, is influenced both from their heritage of centuries of traditional medicine from fore parents from Africa and India as well as modern day exposure to Chinese, Ayurvedic, and western (USA) and South American medicine. While the findings may be unique to Trinidad, patients\u2019 characteristics and CAM practices are similar to other countries and these findings can be generalised to other HCPs in other countries.The prevalence of CAM use among HCPs in Trinidad and Tobago was high (82.3%). CAM use was more prevalent among nurses, followed by pharmacists, doctors, and other HCPs. However, knowledge about CAM was low, particularly among doctors, and the majority was reluctant to recommend CAM and to refer patients to a CAM practitioner. Sex, ethnicity, and type of HCP were associated with both personal use and recommendation for the use of CAM. Predictors of CAM use were sex, religion, and profession; predictors of recommendation for the use of CAM were sex and profession. Pharmacists, followed by doctors, other HCPs, and nurses, feel combination therapy is superior to CM alone. HCPs, particularly doctors and nurses, feel the future lies in integrative medicine combining Conventional Medicine and evidence-based CAM. Only a small percentage of HCPs feel CM should be used alone. There was inadequate communication with HCPs, leaving patients largely unsupervised and unmonitored by medical personnel."} +{"text": "Ideally, bone adaptation after THA manifests minimally and local bone density reduction is widely avoided. 
Different design features may help to approximate initial, post-THA bone strain to levels pre-THA. Strain-shielding effects of different SP-CL stem design features are systematically analyzed and compared to CLS Spotorno and CORAIL using finite element models and physiological musculoskeletal loading conditions. All designs show substantial proximal strain-shielding: 50% reduced medial surface strain, 40\u201350% reduction at lateral surface, >120\u2009\u00b5m/m root mean square error (RMSE) compared to intact bone in Gruen zone 1 and >60\u2009\u00b5m/m RMSE in Gruen zones 2, 6, and 7. Geometrical changes have a considerable effect on strain-shielding; up to 20%. Combinations of reduced stem stiffness with larger proximal contact area lead to less strain-shielding compared to clinically established implant designs. We found that only the combination of a structurally flexible stem with anatomical curvature and grooves improves strain-shielding compared to other designs. The clinical implications One of the last remaining issues of THA, with more than half of the late implant revisions, is aseptic loosening: the detachment of implant from bone in the absence of infection. Survivorship concerning femoral revision for aseptic loosening as the end point amounts to 93% at 22 years2, but there is an especially reduced total survival rate of some uncemented implants below 70% after 15 years in patients younger than 50 years1. A possible explanation for loosening might be the normal physiological bone remodeling process, and this is well founded because THA causes proximal bone unloading which often leads to a bone adaptation response, leading to reduced bone density in the proximal femur. This issue impacts especially young patients3.Certain uncemented femoral stems (e.g. CLS Spotorno) in primary total hip arthroplasty (THA) show highly reliable outcomes with total implant survival around 94% and 86% after 10 and 22 years, respectively4. Short stem implants show adequate survival rates at medium-term follow-up5. However, such short stem implants may be prone to subsidence, sometimes not even reducing bone resorption or leading to periprosthetic fractures9. Stemless implants with more reduced strain-shielding did not become widely accepted due to the increased probability of problematic implantation which is much more delicate without a guiding stem within the intramedullary canal10. Thus, geometrical features that enable physiological load transfer in standard implants remain the most promising approach to reduce strain-shielding.Short stem or stemless implants have been proposed to reduce the stress-shielding16. The present study concentrates on the distinct influence of stem curvature for improved fit and fill, the existence of grooves/ribs for increased contact surface and their inter-variable connection (i.e. changing two variables together) on bone strain compared to the intact bone within well-controlled physiological bone models. We hypothesize that a more anatomical stem curvature in combination with stem grooves would lead to a reduction of strain-shielding effects.However, it is unclear how different stem features in detail account for the unloading or if they would even allow regaining physiological bone strains. Numerous possibilities for a more physiological load transfer have been proposed such as anatomical stem curvature, reduction of material stiffness, and reduction of the stem cross-sectional area and length18. 
This translates to element numbers of the stems ranging from 23,994 elements for a simple geometry to 142,192 for a complex surface geometry.Finite element (FE) models of an intact right femur of one representative female patient from a larger cohort who underwent THA in our clinic were created Fig.\u00a0. No pati20 as described before21. Pre-processing was performed using Abaqus CAE v6.12 to virtually remove the femoral head using a resection plane within the femoral neck , size 13, 135\u00b0 CCD, L\u2009=\u2009155\u2009mm CLS Spotorno size 9, 135\u00b0 CCD, L\u2009=\u2009150\u2009mm SP-CL size 10, 126\u00b0 CCD, L\u2009=\u2009160\u2009mm The CORAIL and the CLS Spotorno implant stems represent two clinically well-tried examples of the most successful implants, which have proven excellent long-term results in cementless hip arthroplastynts Fig.\u00a0 are:CORASP-CL without grooves (smooth surface)SP-CL straight (with mild grooves)SP-CL straight without grooves.Variations of the SP-CL that are not manufactured by the company and only exist for the purposes of this study:The SP-CL feature variations enable direct comparison of the effects of stem curvature (indirectly gap fit and fill), grooves , and the interplay of different features. The relative cross-sectional areas of the implants are compared in Table\u00a026; similar loads were used in finite element models before and are detailed there21. The main loading vector value and orientation at the femoral/implant head are shown in Fig.\u00a0Material modulus of elasticity for metal titanium components was set at 110\u2009GPa, Poisson ratio 0.3. The models were loaded with concentrated forces derived from a validated musculoskeletal model27 that constrain a node at the knee centre in three translational degrees of freedom (DOFs). Another node, where the hip contact force was applied, was constrained in two DOFs such that this node could only deflect along an axis towards the knee centre. The sixth DOF was constrained at a node on top of the distal lateral epicondyle. The subject-specific loading conditions and physiological boundary constraints are essential because they lead to inter-patient variations in bone adaptation patterns28. At the bone-implant interface, initial tangential sliding and friction with a coefficient of 0.5 was considered until contact occurred. Then a normal, uniform penalty contact was modeled without separation (bony on-growth) where contact to bone was expected (i.e. where compact bone neighbors the implant or up to the end of grooves). While the contact interaction properties were the same for all implants as we expected bony on-growth, the contact surface varied based on the extent of grooves or coated areas. With the given model geometries, the stem tips simply float in the canal, so that at the distal implant tip contact was neglected.Empirically realistic musculoskeletal boundary conditions were implementedAfter quasi-static analyses, maximum principal strains at the lateral surface and minimum principal strains at the medial surface and along internal paths were evaluated using Abaqus CAE v6.12 for post-processing. The strain values are sensitive to the exact measurement position femoral head deflection and deflection at the mid-shaft as measured in vivo in a radiological study directly from 2D-X-rays during one legged stance30.Model validation was performed by: (1) comparison of predicted strains Fig.\u00a0 to in vi29. 
Deflection of the femoral head was 0.93\u20131.09\u2009mm and 1.72\u20131.91\u2009mm at the femoral shaft for the intact femur, which is consistent with published results30.Predicted strains in the proximal lateral region of the intact femur were similar to those measured experimentally All stem designs generally showed similar qualitative proximal strain-shielding at the bone surface (reduced strain compared to intact), between 40\u201350% laterally and in parts higher than 50% medially. More distally, strain-shielding is minimal and even an overstraining can be observed at the distal third of the implants Fig.\u00a0. The difThe load transfer from the stem to the bone is not homogeneous Figs\u00a0 and 8. EUsing the SP-CL implant as a template, the effects of individual changes to stem design are shown in Fig.\u00a0This study uses a novel THA stem (SP-CL) as an example to illustrate the effect of various design features on the amount of strain-shielding in the femur. The analyses set out to investigate the distinct influence of the stem design features and their inter-variable connection on bone strain compared to the intact bone within a physiological model. Our results show that a novel implant incorporating anatomical shape and grooves (SP-CL) leads to less strain-shielding than well-established total hip replacements (CORAIL and CLS Spotorno).et al.31 reported peri-prosthetic bone loss 6 months postoperatively in the proximal and middle zones for different femoral stem designs . Bone strain reductions with a CLS Spotorno stem have been found in experimental studies in the proximal femur with a mean difference on the medial side at the calcar of \u221265%, laterally \u221272%, and more distally up to \u221224% when compared to the intact femur, while the very distal strain showed only minor changes32. Such experimental in vitro studies, where a 50\u201364% drop in proximal surface strain, and slight distal changes of 4\u201314% of strain increase were recorded33, agree qualitatively with clinically observed volumetric bone density changes after THA. Szwedowski, et al.28 report for 12 months post-THA clinically measured BMD changes of \u22129.2 to \u221217.2% in Gruen zone 1, \u221215.9 to \u221233.6% in Gruen zone 7, and \u221212.9 to +4.9% in the more distal Gruen zones 3\u20135 for 3 patients with an uncemented Zimmer Alloclassic stem. Those values of BMD changes generally agree with our results, but the exact values for experimentally measured strain-shielding vary in different studies due to inconsistent loading and measurement sensor positioning34. With our approach, a consistent measurement under controlled physiologic-like boundary conditions is possible at exactly corresponding positions to the up to 10% of additional implant survival after 2 decades, with 86% survival for CLS Spotorno after 22 years2 versus 96% for CORAIL after 23 years24.The CLS Spotorno showed proximally Figs\u00a0 and 8 abSurprisingly, individual features such as straight against curved design, ribs, and small and larger grooves only showed a mild influence on the strain deviation to intact Figs\u00a0\u20136 and aret al.36 report that a rectangular straight stem led to a reduction of strains below the calcar \u221273%, and below the greater trochanter \u221261% while a (mostly smooth) curved stem led to a reduction of major principal strains \u221243% below the calcar and \u221269% below the greater trochanter. 
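The strain-shielding figures quoted in this section (percentage strain reduction relative to the intact femur and RMSE in µm/m per Gruen zone) can be reproduced from paired nodal strain values with a few lines of NumPy. The sketch below is a generic illustration using hypothetical strain arrays and an assumed node-to-zone assignment, not the evaluation code used in the study.

```python
import numpy as np

def shielding_metrics(intact, implanted):
    """Mean strain reduction (%) and RMSE versus the intact femur (input units)."""
    intact, implanted = np.asarray(intact, float), np.asarray(implanted, float)
    reduction_pct = 100.0 * (1.0 - implanted.mean() / intact.mean())
    rmse = np.sqrt(np.mean((implanted - intact) ** 2))
    return reduction_pct, rmse

rng = np.random.default_rng(0)
# Hypothetical minimum principal strains (µm/m) for nodes assigned to two zones.
zones = {
    "Gruen zone 1": -rng.uniform(300, 700, 50),
    "Gruen zone 7": -rng.uniform(500, 1100, 50),
}
for name, intact in zones.items():
    implanted = 0.5 * intact + rng.normal(0.0, 40.0, intact.size)  # ~50 % shielding
    red, rmse = shielding_metrics(intact, implanted)
    print(f"{name}: strain reduction ≈ {red:4.0f} %, RMSE ≈ {rmse:4.0f} µm/m")
```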
In our models, the combination of anatomically curvature and grooves (larger surface area) showed an equivalent strain-shielding or even slight improvement of load transfer proximally lateral compared to the already successful straight stem designs and a gap-filling stem based on cross-sectional CT scans. They found that principal compressive strain at the calcar was reduced by 90% for an anatomical stem and 67% for a gap-filling stem, while medially, at the level of the lesser trochanter, the corresponding figures were 59% and 21%. This underlines the importance of geometric match or sufficient contact area between femoral canal and stem even when anatomic stems are used. The exact long-term consequences of this mildly, but distinctly enhanced stimulation with a stem that fits and features large-area-contact to the bone cannot be precisely assessed with the current methods. A remodeling algorithm may extrapolate the initial strain differences to future density changes. However, differences of BMD changes of different stems suggest that a magnitude of about 10% BMD change after 3 years could be realistic31. The local differences in strain-shielding between well-established stem designs represent the same magnitude as the locally improved deviation to intact for SP-CL . The conformity of the bone at the implant-bone interface and an extensive contact area have been neglected so far. However, those aspects of fit and fill are gaining more attention41. Inaba, et al.42 report lower BMD one year post-THA in Gruen zones 6 and 7 for a Zweym\u00fcller-type stem compared to a fit-and-fill-type stem. In Gruen zone 1, the fit-and-fill-type stem group showed a continuous decrease in BMD and the Zweym\u00fcller-type stem group showed a decrease in BMD up to 6 months after surgery and then showed an increase 12 months after surgery, which highlights the influence of later remodeling. Especially curved stems show improved fill which has to be considered alongside the well-regarded implant stiffness to achieve a more mechano-biologically adapted load transfer. The geometrical mismatch between the femoral canal and cementless implants should be met using more physiological stem designs that recreate the internal femoral shape, which has been shown to be more important than minimal cross-sectional area for physiological strain pattern and reduced stress shielding37. Especially the adaptation of stem choice to bone canal size and shape is important as a large tapered wedge-type stem and stovepipe femur may be associated with significant proximal BMD loss43. Undersized stems and stems in hips with cup revision were at higher risk for aseptic loosening with a hazard ratio of 4.2 and 4.3 respectively2. Inappropriate load transfer from implant to bone or inadequate internal load caused by excessive mal-positioning or inapt implant design may cause this. The surgical access and according iatrogenic muscle trauma seem to play a lower-ranking role3.Using a shape optimization scheme based on a straight stem and varying the cross-sections, the proximally resorbed volume could not be reduced further than \u221223%44. This is confirmed by computational results suggesting that the strain distribution in the femur may be similar at different stages of healing after THA, regardless of small alterations in implant positioning. However, the healing immediately after surgery will be affected differently because the sensitivity of micro-motion is characteristic of the implant geometry45 and implant placement46. 
The sensitivity of stem design, especially to initial micro-motion (vulnerability of bony on-growth), will have to be considered in future studies. In this study we evaluated only implant geometry; however, femur variability, especially the Dorr femoral bone classification [48], i.e. a stovepipe-shaped versus a champagne-flute-shaped canal, should be included in future models of THA. Considering the range of anatomical parameters would make it possible to generalize or stratify the results to the entire population [45]. However, (intramedullary) femur shape, rather than pure size, seems to play the dominant role in strain-shielding [49]. The scope of this study was to test the influence of stem design features on bone strain. For consistency and control, we considered only one patient-specific geometry and its associated material distribution. In future studies, markedly different geometries and material properties are needed to test whether these results on implant performance hold true across varying patient types. In our modeling approach, we did not represent bone compaction, which may locally condense bone [36] and thus lead to slightly different initial local strains; this might be especially relevant for the CORAIL with its stepped surface and compaction broaching approach. We did not specifically validate the implant-bone interface behavior, quantify possible wear, or consider particle-induced inflammation, as we assumed uneventful bony on-growth. We also did not model any initial press-fit pressure between bone and implant, or viscoelastic behavior. Further experimental measurements are required to validate the results of the FE model. Small changes in stem placement would likely have little influence on the internal loading of the femur after bony on-growth has been achieved, so small positioning errors result in generally small strain differences compared with the overall change from the intact femur strain. This study indicates that small changes in the geometry of uncemented stems can change strain-shielding considerably, by up to 20% locally. Combinations of moderately low stem stiffness (a slender titanium stem with deep grooves) and a large proximal contact area lead to reduced strain-shielding, as estimated through finite element analyses. Insights into the long-term effects of the improved (reduced) strain-shielding on bone mass can be gained from clinical studies and may eventually be explained by mathematical remodeling analyses. Both are parts of our ongoing research activities."} +{"text": "The mini-slump test is a fast, inexpensive and widely adopted method for evaluating the workability of fresh cementitious pastes. However, this method lacks a standardised procedure for its experimental implementation, which is crucial to guarantee the reproducibility and reliability of the test results. This study investigates and proposes a guideline procedure for mini-slump testing, focusing on the influence of key experimental (mixing and testing) parameters on the statistical performance of the results. The importance of always testing at the same time after mixing, and of testing each batch only once rather than conducting multiple tests on a single batch of material, is highlighted. A set of alkali-activated fly ash-slag pastes, spanning yield stresses from 1 to 75 Pa, was used to validate the test method by comparing the calculated yield stresses with results obtained using a conventional vane viscometer.
The proposed experimental procedure for mini-slump testing produces highly reproducible results, and the yield stress calculated from mini-slump values correlate very well with those measured by viscometer, in the case of fresh paste of pure shear flow. Mini-slump testing is a reliable method that can be utilised for the assessment of workability of cements. The yield stress of a cementitious material denotes the critical stress value at which the material will begin to, or cease to, flow, which is an important property when placing the material. Concrete with high yield stress is difficult to pump, and the associated poor workability results in quality control issues in the hardened material [For a given cementitious material in the fresh state, the yield stress is applied to the material starting from rest, and the (static) yield stress is identified as the maximum in the stress-time profile [Rheometers are increasingly adopted to determine the yield stress of cement paste or concrete, and there are two methods commonly applied using such instruments to measure yield stress . The fir profile , 5. Othe profile .However, measuring yield stress using a rheometer is sometimes difficult due to challenges in the inherent nature of the yield stress material and the proper selection of a rheological model. Problems such as slip flow , fracturIn a typical slump test, a mould of a given conical shape is filled with the material to be tested ; variousHowever, it is often far from convenient to produce and manipulate the large quantities of material required for a full-scale concrete slump test in a laboratory context, as much research work is conducted with paste or mortar specimens. In order to resolve this issue, the so-called mini-slump test, which is essentially a down-scaled slump test , is hereThe mini-slump method is a simple, inexpensive and fast test to study the rheology of cement paste, if it can be shown to be applicable and reproducible. Mini-slump test results have previously been quantitatively linked with yield stress values obtained from theory and numerical modelling \u201319, showGround granulated blast-furnace slag and fly ash were used as precursors to prepare the alkali activated cement pastes. Table\u00a02O, 29.4\u00a0wt% SiO2 and 55.9\u00a0wt% H2O; the modulus of the activating solution (Ms\u00a0=\u00a0SiO2/Na2O molar ratio) was adjusted by mixing the commercial solution with analytical grade solid NaOH to obtain a sodium metasilicate (Ms\u00a0=\u00a01) solution. The adjusted metasilicate solution was stirred for 2\u00a0h, and allowed to cool to ambient temperature, after addition of the NaOH pellets. The solution was used on the same day as prepared, to avoid precipitation of solid sodium metasilicate hydrates. The complete 27 formulations are summarised in Table\u00a0A total of 27 paste formulations were studied. The total water to binder (where \u2018binder\u2019 is defined as precursor\u00a0+\u00a0solid component of the activator) ratios were 0.40, 0.44 and 0.48, with activator doses of 4, 8 and 12% relative to the mass of precursor (dry solids basis). As solid precursors, slag and fly ash were blended at different levels to yield pastes with diverse rheological properties. The alkaline activator was based on a commercial sodium silicate solution , composed of 14.7\u00a0wt% NaFor blended pastes containing both slag and fly ash, 30\u00a0min of pre-mixing of the dry powders was performed prior to mixing with the activating solution. 
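The adjustment of the activating solution from its as-received modulus to Ms = 1 can be checked with a simple mole balance on the stated composition (14.7 wt% Na2O, 29.4 wt% SiO2). The sketch below is an illustrative estimate only; the resulting NaOH dose is not reported in the excerpt above, and the calculation neglects impurities and the water introduced with the NaOH pellets.

```python
# Mole-balance estimate of the NaOH needed to lower the silicate modulus
# Ms = SiO2/Na2O (molar) from the as-received value to 1.0. Illustrative only.
M_SIO2, M_NA2O, M_NAOH = 60.08, 61.98, 40.00   # molar masses, g/mol

basis_g = 100.0                                 # per 100 g of commercial solution
n_sio2 = 29.4 / 100 * basis_g / M_SIO2          # mol SiO2
n_na2o = 14.7 / 100 * basis_g / M_NA2O          # mol Na2O
ms_initial = n_sio2 / n_na2o

target_ms = 1.0
n_na2o_extra = n_sio2 / target_ms - n_na2o      # additional Na2O required (mol)
m_naoh = 2.0 * n_na2o_extra * M_NAOH            # 2 NaOH -> Na2O + H2O

print(f"initial modulus Ms ≈ {ms_initial:.2f}")
print(f"NaOH to add ≈ {m_naoh:.1f} g per {basis_g:.0f} g of commercial solution")
```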
For each mini-slump test, 60\u00a0g of precursor powder was combined with the specified amount of distilled water and activator in a plastic cylindrical container (diameter 5\u00a0cm and height 10\u00a0cm). The paste was then mixed using a high shear mixer as shown in Fig.\u00a02. Freshly mixed paste was immediately poured into the cone, and then the cone was lifted as slowly as possible to mini\u22121 over 60\u00a0s, then held for 5\u00a0s, then linearly ramped down from 100\u00a0s\u22121 to 0 over another 60\u00a0s, as illustrated in Fig.\u00a0\u03c40.To compare the yield stress values calculated from mini-slump testing with that determined using a viscometer, a HAAKE Viscotester 550 instrument was employed to directly measure the yield stress of fresh pastes, using a six-blade vane-in-cup geometry. During the test, the shear rate was linearly ramped up from 0 to 100\u00a0sk and n are fitting constants determined by regression of the shear rate-shear stress curve; the Bingham model is a special case of the Herschel\u2013Bulkley model with n\u00a0=\u00a01.In Eq.\u00a0This method was found to be more reliable and reproducible than the direct measurement of yield stress using a constant low shear rate, particularly for pastes of low yield stress, considering the sensitivity and capabilities of the viscometer used here.Due to the somewhat larger volume of paste required for the viscometer to achieve pseudo-infinite medium conditions, the pastes to be tested by this method needed to be prepared in a slightly different manner from those used in the mini-slump testing. For each batch to be tested using the viscometer, 270\u00a0g of precursor powder was combined with the required amount of distilled water and activator in a 500\u00a0mL plastic beaker. The paste was then mixed using the high shear mixer [\u03c40) of each paste was calculated according to Eq.\u00a0\u03c1, volume of the mini-slump cone \u2126, and the mini-slump spread diameter R.With the exception of the paste made with 100% slag, at spread) . So, theg is the acceleration due to gravity (9.81\u00a0m/s2).The density of each fresh paste was calculated based on the mix formulation and density of each component, i.e. slag, fly ash, activator solution and water; To determine the accuracy of the yield stress values calculated from mini-slump testing of these alkali-activated pastes, the yield stress was also measured using a viscometer, as described in Sect.\u00a0However, it is worth noting that yield stress values measured via the mini-slump test are systematically slightly higher than those obtained from the viscometer. This could be attributed to the fact that the effect of the surface tension of the paste was not included in Eq.\u00a0w/b 0.40 and 4% activator dose is 74.6\u00a0Pa, which is much higher than that determined by viscometry for the same paste, 36.4\u00a0Pa. The latter value is expected to be more accurate and reliable as the viscometer performs very well in this range, and the shear rate-shear stress curve measured by the viscometer , the determination of yield stress by mini-slump testing via Eq.\u00a0Figure\u00a0The reproducible measurement of paste spread size in the mini-slump test can also lead to relatively low scatter in the yield stress calculated by Eq.\u00a0Although many aspects of the experimental protocol may affect the statistical performance of mini-slump test results, Pashias et al. 
reportedHowever, the time taken to measure the outcome of the test does have an influence for the study of cementitious materials. The paste studied in the work of Pashias et al. was red w/b\u00a0=\u00a00.40 and 4% activator dose) measured immediately after mixing, and with 5 and 10\u00a0min delays before measurement. The dramatic change in the pat shape during this short timeframe demonstrates the importance of the structural evolution of the paste even in the first minutes after mixing, which may also include some loss of water from the surface due to drying effects. The images in Fig.\u00a0To illustrate the importance of the use of separate paste batches in obtaining reproducible measurements in the current study, Fig.\u00a0It is also worthwhile to note that the rapid removal of the mould from the paste may introduce additional inertial effects to the final spread. The stress which can be caused by inertial effects is in the range of several Pa, which becomes significant in the case of pastes with very low yield stresses , e.g.\u00a0<\u00a0In this study, the aspect ratio of the mini-slump test cone is 1.5, which is in accordance with the Abrams cone geometry widely used in the field of construction materials, but is higher than the aspect ratio recommended by Pashias et al. , which ww/b\u00a0=\u00a00.40, and activator dose 12%, mixed by hand (2\u00a0min) and using a high shear mixer (400\u00a0rpm for 2\u00a0min). The spread diameter after hand mixing is 126.1\u00a0\u00b1\u00a04.2\u00a0mm, while high shear mixing gave 120.9\u00a0\u00b1\u00a01.6\u00a0mm, showing that both the spread value and reproducibility of the tests were influenced by the choice of mixing method.To investigate the influence of the mixing protocol on the results of mini-slump testing, different possible mixing schemes were studied. The first comparison is the difference between hand mixing and high shear mixing applied to the paste. Figure\u00a0w/b 0.40 and activator dose 12%) after mini-slump testing, where the pastes were mixed by hand and by the high shear mixer, respectively. The shape obtained following hand mixing appeared obviously less circular than the one mixed at high shear, demonstrating the particles in the paste mixed by hand was not evenly dispersed due to the insufficient mixing intensity. This loss of pat circularity for hand-mixed pastes was observed consistently across multiple tests.These experimental results demonstrate that mixing by hand was not sufficient to disperse the precursor particles in the aqueous environment (water\u00a0+\u00a0activator solution) used here. The speed of (vigorous) hand mixing in this study was approximately 150\u00a0rpm, which is much lower than the 400\u00a0rpm generated by the high shear mixer. Apart from the lower speed, the paddle used in hand mixing was also much less efficient than the dedicated shear blade of the high shear mixer. The larger spread diameter resulting from hand mixing could be attributed to the insufficient mixing that is not strong enough to evenly disperse particles in the paste, allowing the liquid activator and water to flow unevenly (i.e. reach a larger spread diameter in the longest dimension) when performing mini-slump testing. 
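Two calculations recur in this section: regression of the ramp-down viscometer curve against the Herschel–Bulkley model, and conversion of the mini-slump spread into a yield stress from the paste density, cone volume and spread radius. The sketch below shows both with NumPy/SciPy. The spread-to-yield-stress relation is written here in the commonly used Roussel–Coussot thin-spread form, τ0 = 225ρgΩ²/(128π²R⁵); whether this matches the exact equation referenced (but not reproduced) in the text is an assumption, and all numerical inputs are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# --- 1. Herschel-Bulkley fit to the ramp-down branch of a viscometer test ----
def herschel_bulkley(gamma_dot, tau0, k, n):
    """tau = tau0 + k * gamma_dot**n (the Bingham model is the case n = 1)."""
    return tau0 + k * gamma_dot ** n

# Hypothetical ramp-down data: shear rate (1/s) and shear stress (Pa).
gamma_dot = np.linspace(5, 100, 20)
tau = 6.0 + 1.8 * gamma_dot ** 0.55 + np.random.default_rng(1).normal(0, 0.3, 20)

(tau0_fit, k_fit, n_fit), _ = curve_fit(
    herschel_bulkley, gamma_dot, tau, p0=(1.0, 1.0, 1.0),
    bounds=([0, 0, 0.1], [np.inf, np.inf, 2.0]))
print(f"HB fit: tau0 = {tau0_fit:.1f} Pa, k = {k_fit:.2f}, n = {n_fit:.2f}")

# --- 2. Yield stress from the mini-slump spread (thin-spread approximation) --
def yield_stress_from_spread(spread_diameter_m, density_kg_m3, cone_volume_m3,
                             g=9.81):
    """tau0 = 225*rho*g*V^2 / (128*pi^2*R^5), with R the spread radius (assumed form)."""
    R = spread_diameter_m / 2.0
    return 225 * density_kg_m3 * g * cone_volume_m3 ** 2 / (128 * np.pi ** 2 * R ** 5)

tau0_slump = yield_stress_from_spread(
    spread_diameter_m=0.121,     # 121 mm spread (illustrative)
    density_kg_m3=1950.0,        # fresh paste density (illustrative)
    cone_volume_m3=38.6e-6)      # mini-slump cone volume (illustrative)
print(f"mini-slump estimate: tau0 ≈ {tau0_slump:.1f} Pa")
```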
The pictures in Fig.\u00a0Experimental work by Han and Ferron demonstrThe coefficient of variation of spread size of this paste mixed by hand (3.3%) is also much higher than that achieved with the high shear mixer (1.3%), indicating that high shear mixing produces more reproducible mini-slump test results than hand mixing, as the energy and mode of mixing energy input are better controlled using a mechanical mixer.w/b\u00a0=\u00a00.40 and activator dose 8%, mixed by a high shear mixer at different speeds, for different durations and with different batch sizes. The combination of these factors generated different mixing energy densities.Table\u00a0The first 109.4\u00a0mm) value in Table\u00a0\u00a0mm valueWhen the paste was mixed in a 270\u00a0g batch at 400\u00a0rpm for 2\u00a0min, a spread diameter of 109.0\u00a0mm was obtained. Since the mixing speed and time duration were as same as the baseline mixing scheme, each unit of paste in this larger batch received approximately 22% of the mixing energy of the 60\u00a0g batch size. However, the spread sizes of the two pastes are comparable, which means that a mixing speed of 400\u00a0rpm is also adequate to disperse particles in the paste with this formulation in the larger batch size. Doubling both the speed and mixing duration (final entry in Table\u00a0The setup and procedure proposed in this study for mini-slump test, i.e. mixing protocol, volume of paste, and the use of a fresh batch of sample for every replicate test, give highly reproducible spread diameters for pastes, which can then be used to calculate yield stress values.The yield stresses calculated based on the spread size from mini-slump test correlate well with the results from conventional viscometry when the paste shows pure shear flow.A controllable high shear mixer performs better than hand mixing, reducing the scatter of mini-slump test results, while the results are relatively robust to variations in batch size, mixing speed and duration.The study investigated the measurement of yield stress of cementitious pastes based on a reproducible mini-slump testing method using a well-defined mixing procedure. The relationship between the spread diameter in mini-slump testing and yield stress was verified with results obtained using a rotational viscometer; the correlation between the two methods is good for low yield stress values, but shows deviations at higher yield stresses when the flow patterns and resultant pat shape in the mini-slump test no longer correspond to the assumptions inherent in the derivation of the governing equation. The work in this paper thus contributes particularly to enabling inexpensive and rapid measurements of the rheology of cementitious pastes with low yield stresses which are difficult to directly evaluate using conventional viscometers. The main conclusions drawn are:The results obtained here were based on the analysis of a set of alkali-activated pastes which were designed to span a wide range of yield stress values, from 1.0\u00a0Pa to more than 50\u00a0Pa; the best results were obtained below 10\u00a0Pa which is a regime of significant interest in the study of both traditional and non-traditional cementitious paste systems."} +{"text": "Subsequently, the dialdehyde cellulose nanocrystal-silver nanoparticles (DCNC-AgNPs) were added to chitosan (CS) to form the wound dressings by solution casting method. 
The aim was to enhance the antibacterial effect of CS by the incorporation of AgNPs, and to improve the mechanical strength and hydrophobicity of CS by the incorporation of DCNC, which cross-links with CS via hydrogen bonds. The antibacterial activities were evaluated against five gram-negative bacteria, one gram-positive bacterium, and three fungi. The in vitro cytotoxicity assay was performed on the NIH3T3 cell line using the Sulforhodamine B assay. The results showed that CS-DCNC-AgNPs possessed good mechanical strength and hydrophobicity, high antibacterial activity and low cytotoxicity. Our results suggest that CS-DCNC-AgNPs can be a promising, safe antibacterial material to be incorporated in wound dressings. The present work envisages a simple approach to synthesize a new wound dressing based on chitosan-dialdehyde cellulose nanocrystal-silver nanoparticles (CS-DCNC-AgNPs). Silver nanoparticles (AgNPs) were generated in-situ by periodate oxidation of cellulose nanocrystals to generate aldehyde functions, which were used to reduce Ag+ to Ag0 under mild alkaline conditions. Bacterial infections accompanying traumatic and surgical wounds and burns have been a major threat to human health, despite decades of advances in antibiotics. Bacterial resistance to antibiotics is a major challenge due to their irrational and excessive use. Thus, r In the present work, we focused on the incorporation of AgNPs in DCNC by reducing the [Ag(NH3)2]+ complex to AgNPs (Ag0), which were loaded directly on the surface of the DCNC. The hydrogen bonding between DCNC and CS undeniably improved the mechanical strength by crosslinking. On the other hand, the in-situ generated AgNPs significantly improved the antibacterial activity against gram-positive and gram-negative bacteria and fungi. Moreover, the cytotoxicity studies of CS-DCNC-AgNPs on NIH3T3 cells indicated that the conjugated complex was safe. Considering these merits, CS-DCNC-AgNPs seem to be a promising strategy for better antibacterial wound dressings, offering reduced toxicity and high mechanical strength.
Fine-tuning the algorithms and parameter sets may generate sufficient accuracy to be informative as a standalone estimate of disease activity. SLE is a complex, multisystem autoimmune disease that continues to be a major diagnostic as well as therapeutic challenge. There are no definitive, specific diagnostic tools available to determine whether a patient has SLE, and diagnostic approaches in SLE have not changed in decades. Physicians still rely on clinical evaluation and a few laboratory tests, including measurement of autoantibodies and complement levels. Despite the wealth of genetic, epigenetic, and gene expression data that has emerged in the past few years at both the patient and cellular levels, none has been integrated to produce a predictive tool that can be used to evaluate an individual SLE patient.2. Genome wide association studies (GWAS) have identified numerous polymorphisms in regions encoding genes or regulatory regions that could influence B cell function3, suggesting that a general state of B cell hyper-responsiveness could contribute to SLE pathogenesis. Autoantibody-containing immune complexes stimulate production of type 1 interferon, a hallmark of infection that is also observed in SLE patients, regardless of disease activity5. In addition to B cells and PCs6, various T cell populations also exert differential effects on SLE pathogenesis. T follicular helper cell subsets contribute to B cell activation and differentiation, and abnormal T cell receptor signaling is also thought to lead to hyper-responsive autoreactive T cell activity9. Furthermore, defects in regulatory T cells, partially secondary to deficient IL-2 production, result in faulty modulation of immune activity and inflammation9.In SLE, defects in central and peripheral tolerance allow for activation of self-reactive B cell clones and differentiation into plasmablasts/plasma cells (PCs) that secrete autoantibodies, which in turn mediate tissue damage10. Factors present in the local microenvironment can cause macrophages (M\u03d5) to undergo extreme changes in transcriptional regulation in a process called M\u03d5 polarization13. Overabundance of proinflammatory M1 M\u03d5 and decreased expression of markers for anti-inflammatory M2 M\u03d5 are detected in both lupus-prone mice and SLE patients15, and therapeutic stimulation of M2 polarization significantly decreases disease severity in murine SLE16. Experimental intervention in M2 polarization as well as microRNA array profiling suggest that abnormalities in M2 M\u03d5 may contribute to SLE severity17. Low-density granulocytes (LDGs) are abnormal neutrophil-like cells that appear in the blood of lupus patients as well as in many other disease states23. Although their involvement in SLE has not been studied as extensively as that of other cell types, LDGs have already been linked to kidney disease, vascular disease, and other manifestations in lupus patients29.Myeloid cells (MC) also play a role in SLE pathogenesiset al. reported a discrete group of differentially expressed genes that might be found in subjects with SLE renal disease28, and Banchereau et al. extensively analyzed pediatric lupus samples and attempted to associate modules of expressed genes with disease manifestations in children30. Despite these advances, gene expression data has yet to provide an approach with sufficient predictive value to utilize in decision making about individual subjects with SLE. 
Furthermore, no cellular phenotype has been independently verified to be able to distinguish a patient with active SLE from one with inactive disease. This distinction is critical both for patient evaluation and for clinical trials, as most SLE trials are aimed at controlling disease activity.To date, however, it has been difficult to relate gene expression profiles to SLE disease activity successfully. Numerous groups have attempted to characterize SLE patients by gene expression. For example, Jourde-Chiche 32. When applied to high-throughput transcriptomic data, machine learning algorithms could potentially be used to identify the gene expression features with the most utility to identify subjects with higher degrees of disease activity and may also provide insights into disease pathogenesis.Therefore, in order to advance personalized treatment of SLE patients, the use of big data analytical techniques, including machine learning, can be useful to understand the relationships between cell subsets, gene expression, and disease activity. Machine learning describes a wide range of computational methods which allow researchers to harness complex data and develop self-trained strategies to predict the characteristics of new samples, such as whether a given SLE patient has active or inactive disease. Machine learning techniques have already been leveraged in lupus to characterize disease risk and identify new biomarkers based on genotypic data or urine testsTo address this possibility, we used conventional bioinformatics methods in conjunction with unsupervised and supervised machine learning techniques to: (1) test the potential of raw gene expression data and modules of genes to classify subjects with active and inactive SLE, (2) determine the optimum classifier or classifiers, and (3) understand the combinations of variables that best facilitate classification.Before employing machine learning techniques, it was necessary to first assess whether conventional bioinformatics approaches could accurately separate active SLE patient samples from those obtained from inactive patients. First, three whole blood (WB) data sets Table\u00a0 were fil33. All genes that were tested for differential expression were sorted by FDR from most significantly overexpressed to most significantly underexpressed and broken into 36 groups of 218 genes each. Among the three studies, the ranked gene lists failed to demonstrate significant overlap of the most overexpressed and underexpressed genes . Rank-rank Hypergeometric Overlap (RRHO) was next applied as a threshold-free comparison of the studiesPatients from each study were then joined to evaluate whether unsupervised techniques would separate active patients from inactive patients. Expression profiles from each study were first normalized to have zero mean and unit variance. Figure\u00a02.We hypothesized that patterns of enrichment of Weighted Gene Co-expression Network Analysis (WGCNA) modules derived from isolated cell populations that are correlated to the SLEDAI SLE disease activity index might be more useful than gene expression across studies to identify active versus inactive lupus patients. To characterize the relationships between SLE gene signatures from various peripheral cellular subsets and disease activity, WGCNA was used to generate co-expression gene modules from purified populations of cells from subjects with active SLE, which could subsequently be tested for enrichment in whole blood of other SLE subjects. 
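As a compact illustration of the module summary used in the following results, the sketch below computes a module eigengene (the first principal component of a module's expression matrix) and its Pearson correlation with SLEDAI. The data are synthetic placeholders and the code is a simplified stand-in for the WGCNA R package, not the pipeline used in the study.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(11)
n_samples = 30
sledai = rng.integers(0, 20, n_samples).astype(float)    # disease activity scores

# Placeholder module: 40 co-expressed genes loosely tracking disease activity.
latent = (sledai - sledai.mean()) / sledai.std()
module_expr = (latent[:, None] * rng.uniform(0.5, 1.5, 40)
               + rng.normal(0.0, 1.0, (n_samples, 40)))

# Module eigengene = first principal component of the module expression matrix.
centered = module_expr - module_expr.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
eigengene = u[:, 0] * s[0]          # note: the sign of a PC is arbitrary

r, p = pearsonr(eigengene, sledai)
print(f"module eigengene vs SLEDAI: r = {r:+.2f}, p = {p:.3g}")
```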
WGCNA analysis of leukocyte subsets resulted in several gene modules with significant Pearson correlations to SLEDAI . CD4, CD14, CD19, and CD33 cells yielded 3, 6, 8, and 4 modules significantly correlated to disease activity, respectively (Table\u00a0Gene Ontology (GO) analysis of the genes within each module showed that some processes, such as those related to interferon signaling, RNA transcription, and protein translation, were shared among cell types, whereas other processes were unique to certain cell types Table\u00a0 and might-test, p\u2009<\u20090.05) enrichment was carried out using the 25 cell-specific gene modules Fig.\u00a0. Of the 5) Table\u00a0. NotablyAnalysis of individual disease activity-associated peripheral cellular subset gene modules was not sufficient to predict disease activity in unrelated WB data sets, since no single module from any cell type was able to separate active from inactive SLE patients Fig.\u00a0. The resk-nearest neighbors (KNN), and random forest (RF) classifiers. Classifiers were validated using two different methodologies: (1) 10-fold cross-validation or (2) study-based cross-validation, in which classifiers were trained on each data set independently and tested in the other two data sets. When evaluating the performance of classifiers on the data set on which they were trained, GLM accuracy was defined as one minus the cross-validated classification error from the cv.glmnet function, and RF accuracy was determined based on out-of-bag predictions. The accuracy of each classifier trained with either gene expression or module enrichment is shown in Fig.\u00a0To assess the effectiveness of either raw gene expression or module-based enrichment techniques, SLE patients were classified as active or inactive using generalized linear models (GLM), When performing 10-fold cross-validation, the use of gene expression values resulted in better performance from all three classifiers compared to module enrichment scores. The random forest classifier was the strongest performer with 83 percent accuracy, and its corresponding ROC curve demonstrated an excellent tradeoff between recall and fall-out (AUC 0.89). This high accuracy can likely be attributed to the presence of data from all three studies in both the training and test sets. In this case, the classifiers have the opportunity to learn patterns inherent to each data set, which proves useful during testing. To ensure that the classifiers were not disproportionately learning patterns from certain data sets at the expense of others, the classification results from the 10-fold cross-validation approach were subdivided by data set. All classifiers exhibited good performance with small differences between their highest and lowest accuracies in individual data sets, with the exception of the WGCNA-based KNN classifier . This suggests that CD14+ monocytes express unique genes that may play important roles in the initiation of SLE activity.Several important findings related to SLE gene expression heterogeneity within and across data sets have been elucidated by this study. 
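Before turning to the individual findings, the classification set-up just described can be sketched in scikit-learn terms. The original analysis was run in R (glmnet, caret, randomForest); the Python analogue below shows the three classifier types under both validation schemes, random 10-fold and leave-one-study-out, with a placeholder feature matrix standing in for either gene expression or module enrichment scores. On these random placeholders all scores sit near chance.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import (StratifiedKFold, LeaveOneGroupOut,
                                     cross_val_score)

# Placeholder matrix: rows = patients, columns = genes or module enrichment scores.
rng = np.random.default_rng(42)
X = rng.normal(size=(156, 500))
y = rng.integers(0, 2, size=156)       # 1 = active SLE (SLEDAI >= 6), 0 = inactive
study = rng.integers(0, 3, size=156)   # which of the three data sets each patient is from

classifiers = {
    "elastic-net GLM": LogisticRegression(penalty="elasticnet", l1_ratio=0.9,
                                          solver="saga", max_iter=5000),
    "KNN (k ~ 5% of training set)": KNeighborsClassifier(n_neighbors=7),
    "random forest (500 trees)": RandomForestClassifier(n_estimators=500,
                                                        random_state=0),
}

cv10 = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    acc_10fold = cross_val_score(clf, X, y, cv=cv10).mean()
    acc_study = cross_val_score(clf, X, y, groups=study,
                                cv=LeaveOneGroupOut()).mean()
    print(f"{name:30s} 10-fold acc {acc_10fold:.2f} | "
          f"leave-one-study-out acc {acc_study:.2f}")

auc_rf = cross_val_score(classifiers["random forest (500 trees)"], X, y,
                         cv=cv10, scoring="roc_auc").mean()
print(f"random forest 10-fold ROC AUC: {auc_rf:.2f}")
```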
First, we demonstrated that DE analysis of active versus inactive patients is insufficient for proper classification of SLE disease activity, as systematic differences between data sets render conventional bioinformatics techniques largely non-generalizable.Next, we hypothesized that WGCNA modules created from the cellular components of WB and correlated to SLEDAI disease activity might improve classification of disease activity in SLE patients. The use of cell-specific gene modules based on a priori knowledge about their relevance to disease fared slightly better than raw gene expression, as it generated informative enrichment patterns, and many of the modules maintained significant correlations with SLEDAI in WB. However, these enrichment scores failed to separate active patients from inactive patients completely by hierarchical clustering.k-nearest neighbors, and random forest classifiers. The trends in performance when cross-validating by study or cross-validating 10-fold speak to the potential advantages and disadvantages of diagnostic tests incorporating gene expression data or module enrichment. Cross-validating by study serves as a kind of \u201cworst-case\u201d scenario, whereas 10-fold cross-validation serves as a \u201cbest-case.\u201d Attempting to classify active and inactive SLE patients from different data sets and different microarray platforms during cross-validation by study proved difficult, but module enrichment was able to smooth out much of the technical variation between data sets. 10-fold cross-validation simulated a more standardized diagnostic test. Although the data was sourced from three different microarray platforms, each cohort in the test set had many similar patients in the training set to facilitate classification by gene expression. If such a test could be reliably free from technical noise, it is likely that raw gene expression would perform very well. RNA-Seq platforms, which produce transcript counts rather than probe intensity values, may display less technical variation across data sets because they are not dependent on the binding characteristics of pre-defined probes that differ among arrays34. On the other hand, comparison of RNA-Seq and microarray samples has shown that the two methods can deliver highly consistent results37, so a microarray-based test could be feasible if it was only conducted on one platform. Further study to construct an optimal panel of genes similar to that identified by the random forest classifier could result in a simple, focused test to determine disease activity by gene expression data alone. Interestingly, module enrichment scores, which show little variation across platforms, could be used to develop diagnostic tests that leverage existing data sets, even if they are sourced from different platforms.We then compared raw expression data alongside the WGCNA generated modules of genes in machine learning applications. We used a supervised classification approach using elastic generalized linear modeling, The strong performance of the random forest classifier indicates that nonlinear, decision tree-based methods of classification may be best suited to SLE diagnostics. This may be because decision trees ask questions about new samples sequentially and adaptively in contrast to other methods that approach variables from new samples all at once. 
Random forest is able to \u201cunderstand\u201d to an extent that different types of patients exist and that a one-size-fits-all approach will tend to misclassify those patients whose expression patterns make them a minority within their phenotype. To put it more simply, active patients that do not resemble the majority of active patients still have a strong chance of being properly classified by random forest.We used the random forest classifier to assess the importance of each gene and module in patient classification. The most important genes were involved in a number of functions other than interferon signaling, such RNA processing, ubiquitylation, and mitochondrial processes. These pathways may play important roles in directing, or at least be indicative of, SLE disease activity. CD4 T cells originally contributed the most important modules, but when the modules were deduplicated, CD14 monocyte-derived modules gained importance. This suggests that unique genes expressed by CD14 monocytes in tandem with interferon genes may prove to be informative in the study of cell-specific methods of SLE pathogenesis. Futhermore, it is important to note that modules that were negatively associated with disease activity were just as important in classification as positively associated modules. Further study of underrepresented categories of transcripts should enhance our understanding of SLE activity.One limitation of this study was the relatively small amount of data used to train and test the classifiers. Creating dedicated training and test sets is preferable to cross-validation, but it requires many samples. Although there are large numbers of publicly available gene expression profiles of SLE patients, many of these profiles are not annotated with SLEDAI data. Furthermore, some data sets which include SLEDAI data show heavy class imbalance, which impedes classification. Further work to integrate cross-platform expression data will be crucial to expanding our ability to classify active and inactive SLE patients.The machine learning models tested here provide the basis of personalized medicine for SLE patients. Integration of our approaches with emerging high-throughput patient sampling technologies could unlock the potential to develop a simple blood test to predict SLE disease activity. Our approaches could also be generalized to predict other SLE manifestations, such as organ involvement. A better understanding of the cellular processes that drive SLE pathogenesis may eventually lead to customized therapeutic strategies based on patients\u2019 unique patterns of cellular activation.Publicly available gene expression data and corresponding phenotypic data were mined from the Gene Expression Omnibus. Raw data sources for purified cell populations are as follows: GSE10325 ; GSE26975 ; GSE38351 . Raw data sources for SLE whole blood gene expression are as follows: GSE39088 ; GSE45291 ; GSE49454 . 35 randomly sampled inactive patients were taken from GSE45291 to avoid a major imbalance between active and inactive SLE patients. Active SLE was defined as having an SLE Disease Activity Index (SLEDAI) of 6 or greater.Statistical analysis was conducted using R and relevant Bioconductor packages. Non-normalized arrays were inspected for visual artifacts or poor hybridization using Affy QC plots. PCA plots were used to inspect the raw data files for outliers. Data sets culled of outliers were cleaned of background noise and normalized using RMA, GCRMA, or NEQC where appropriate. 
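The PCA-based outlier screen mentioned above can be illustrated with a minimal sketch; the expression matrix, the injected aberrant array and the 3-standard-deviation cut-off below are all placeholders rather than the criteria actually applied to these data sets.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder expression matrix: arrays (samples) x probes, log2 scale.
rng = np.random.default_rng(3)
X = rng.normal(loc=8.0, scale=1.0, size=(40, 1000))
X[0] += 4.0                                  # one artificially aberrant array

pcs = PCA(n_components=2).fit_transform(X)   # scores on the first two PCs
dist = np.sqrt((pcs ** 2).sum(axis=1))       # distance from the cohort centre
outliers = np.where(dist > dist.mean() + 3 * dist.std())[0]
print("candidate outlier arrays:", outliers)
```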
Data sets were then filtered to remove probes with low intensity values and probes without gene annotation data. WB gene expression data sets were filtered to only include genes that passed quality control in all data sets. At this juncture, differential expression (DE) analysis and Weighted Gene Co-expression Network Analysis (WGCNA) were carried out on data sets. WB gene expression data sets were then further processed before machine learning analysis. WB gene expression values were centered and scaled to have zero-mean and unit-variance within each data set, and the standardized expression values from each data set were joined for classification.38. Resulting p-values were adjusted for multiple hypothesis testing using the Benjamini-Hochberg correction39, which resulted in a false discovery rate (FDR). Significant genes within each study were filtered to retain DE genes with an FDR\u2009<\u20090.2, which were considered statistically significant. The FDR was selected a priori to diminish the number of genes that might be excluded as false negatives. Rank-rank hypergeometric overlap between data sets was assessed using the RRHO R package40. Additional analyses examined differentially expressed genes with an FDR\u2009<\u20090.05.Normalized expression values were variance corrected using local empirical Bayesian shrinkage, and DE was assessed using the LIMMA R package41. For each experiment, an approximately scale-free topology matrix (TOM) was first calculated to encode the network strength between probes. Probes were clustered into WGCNA modules based on TOM distances. Resultant dendrograms of correlation networks were trimmed to isolate individual modular groups of probes by partitioning around medoids and labeled using color assignments based on module size. Expression profiles of genes within modules were summarized by a module eigengene (ME), which is analogous to the module\u2019s first principal component. MEs act as characteristic expression values for their respective modules and can be correlated with sample traits such as SLEDAI or cell type. This was done by Pearson correlation for continuous or semi-continuous traits and by point-biserial correlation for dichotomous traits.Log2-normalized microarray expression values from purified CD4, CD14, CD19, CD33, and low density granulocyte (LDG) populations were used as input to WGCNA to conduct an unsupervised clustering analysis, resulting in co-expression \u201cmodules,\u201d or groups of densely interconnected genes which could correspond to comparably regulated biologic pathwaysWGCNA modules from CD4, CD14, CD19, and CD33 cells were tested for correlation to SLEDAI. SLEDAI information was not available for the LDG modules, so the two modules provided are descriptive of LDGs compared to SLE neutrophils and HC neutrophils.2.Plasma cell modules were generated by differential expression analysis and not WGCNA, but were included because of the established importance of plasma cells in SLE pathogenesis and their increase in active disease42 was used as a non-parametric method for estimating the variation of pre-defined gene sets in SLE WB gene expression data sets. Standardized expression values from WB data sets were used to test for enrichment of cell-specific WGCNA gene modules using the Single-sample Gene Set Enrichment Analysis (ssGSEA) method, which scores single samples in isolation and is thus shielded from technical variation within and among data sets. 
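A rough sketch of the two preprocessing ideas used here, standardising expression within each data set before joining and scoring each sample for a module in isolation, is given below. The rank-based score is a deliberately simplified stand-in for GSVA/ssGSEA, not the algorithm implemented in the GSVA R package, and all inputs are placeholders.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
genes = [f"g{i}" for i in range(200)]
expr = pd.DataFrame(rng.normal(size=(60, 200)), columns=genes)   # samples x genes
study = pd.Series(np.repeat(["setA", "setB", "setC"], 20))       # data set labels

# 1. Centre and scale to zero mean / unit variance within each data set.
grp = expr.groupby(study)
expr_z = (expr - grp.transform("mean")) / grp.transform("std")

# 2. Simplified single-sample module score: mean within-sample rank of the module
#    genes, shifted by the mid-rank and rescaled to roughly [-1, 1].
module = genes[:25]                    # placeholder for e.g. an interferon module
ranks = expr_z.rank(axis=1)            # rank genes within each sample
n_genes = expr_z.shape[1]
score = (ranks[module].mean(axis=1) - (n_genes + 1) / 2) / ((n_genes - 1) / 2)

print(score.groupby(study.values).agg(["mean", "std"]))
```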
Statistical analysis of GSVA enrichment scores was done bv Spearman correlation or Welch\u2019s unequal variances t-test, where appropriate. Effect sizes were assessed by Cohen\u2019s d43.The GSVA R package46.We employed three distinct machine learning algorithms to test biased and unbiased approaches to microarray data analysis. The biased approach involved GSVA enrichment of disease-associated, cell-specific modules, and the unbiased approach employed all available gene expression data in the WB. An elastic generalized linear model (GLM), k-nearest neighbors classifier (KNN), and random forest (RF) classifier were deployed to classify active and inactive SLE patients and determine whether gene expression could serve as a general predictor of disease activity. GLM, KNN, and RF were deployed using the glmnet, caret, and randomForest R packages, respectivelyk of known samples. K was set to 5% of the size of the training set. If the initial value of k was even, 1 was added in order to avoid ties. RF generates 500 decision trees which vote on the class of each sample. The Gini impurity index, a measure of misclassification error, was used to evaluate the importance of variables47.GLM carries out logistic regression with a tunable elastic penalty term to find a balance between the L1 (lasso) and L2 (ridge) penalties and thereby facilitate variable selection. For our predictions, the elastic penalty was set to 0.9, specifying a penalty that is 90% lasso and 10% ridge in order to generate sparse solutions. KNN classifies unknown samples based on their proximity to a set number 48.The performance of each machine learning algorithm was evaluated by 2 different forms of cross-validation. First, a random 10-fold cross-validation was carried out by randomly assigning each patient to one of 10 groups. For each pass of cross-validation, one group was held out as a test set, and the classifiers were trained on the remaining data. Next, as the data came from three separate studies, study-based cross-validation was also done to determine the effects of systematic technical differences among data sets on classification performance. In this circumstance, the classifiers were trained on one data set and tested in the other two data sets. Accuracy was assessed as the proportion of patients correctly classified across all testing folds. Performance metrics such as sensitivity and specificity were assessed after cross-validation by agglomerating class probabilities and assignments from each fold or study. Receiver Operating Characteristic (ROC) curves were generated using the pROC R packageSupplementary InformationDataset 1Dataset 2Dataset 3"} +{"text": "Diffuse large B-cell lymphoma (DLBCL) is the most common form of non-Hodgkin lymphoma (NHL) . ApproxiA 60-year-old male presented with a 5-month history of a rapidly growing mass in his left buttock accompanied by intense pain and impaired mobilization. He denied weight loss, fever, or night sweats. Physical examination revealed a firm, tender left buttock mass, measuring 19x13 cm . No palpExtranodal lymphomas (ENLs) are defined as those with no/minimal nodal involvement associated with a dominant extranodal component . HoweverInvolvement of the skeletal muscles in NHL is unusual and has been reported to occur in 1.1% of patients. The most common route of muscle involvement is hematogenous, lymphatic, or by contiguous spread, or, very rarely, as a primary extranodal disease . 
The mosThe main symptoms include the presence of a mass with progressive enlargement, pain, and swelling . ImagingDifferential diagnosis includes soft tissue sarcoma, metastatic carcinoma, and neurogenic tumors such as malignant peripheral nerve sheath tumors . No spec"} +{"text": "S-adenosylmethionine (rSAM) enzymes, glycyl radical enzymes (GREs), and diiron enzymes. These enzymes catalyze various reactions that yield products of industrial relevance , making their incorporation into engineered metabolic pathways enticing. Elucidating the mechanisms of radical enzymes that cleave and construct C\u2014C bonds will enable further enzyme discovery and engineering efforts.Radical enzymes catalyze some of the most chemically challenging C\u2014C bond-forming and bond-breaking reactions. Advances in DNA sequencing have accelerated the discovery of radical enzymes from microbes, including radical Current Opinion in Biotechnology 2020, 65:94\u2013101Chemical BiotechnologyThis review comes from a themed issue on Christoph Wittmann and Sang Yup LeeEdited by Issue and the EditorialFor a complete overview see the Available online 1st April 2020https://doi.org/10.1016/j.copbio.2020.02.003http://creativecommons.org/licenses/by/4.0/).0958-1669/\u00a9 2020 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license enzymes, glycyl radical enzymes (GREs), and diiron enzymes are a diverse class of natural products that have a variety of bioactivities including disrupting protein-protein interactions . During S-adenosylmethionine (rSAM) enzymes are among the most prolific RiPP tailoring enzymes and are responsible for many characteristic structural modifications including thioether bridge and carbon\u2013carbon bond formation, methylation, epimerization, and complex rearrangements clusters cluster, as summarized in Follow-up studies focusing on AgaB and SuiB explored whether a radical addition or electrophilic aromatic substitution mechanism is employed \u2022. Substr\u03b4-carbon of an Arg side chain to the ortho-position of a tyrosine-phenol in the assembly of ryptides benzindole moiety \u2022. More rryptides . Unlike 12-dependent rSAM enzymes install methyl groups onto unactivated carbon centers of Trp and Val in other RiPP pathways [In addition to these RiPP-associated rSAMs, cobalamin Bpathways , and havpathways ) and C\u2014Spathways , CteB Cypathways ). A deep2-sensitive enzymes uses a conserved glycine-centered radical to catalyze difficult chemical transformations [Aromatic hydrocarbons, such as toluene, are present in groundwater as a consequence of natural processes and human pollution. These molecules are resistant to oxidation and de-aromatization, making bioremediation difficult. However, various microbes have evolved anaerobic pathways for hydrocarbon degradation that employ glycyl radical enzymes (GREs). This family of Ormations . The stain vitro kinetic parameters are difficult to assess, but BSS is estimated to have a KM of <100\u00a0\u03bcM and Vmax of 7.4\u00a0nmol\u00a0min\u22121\u00a0mg\u22121 [2 makes them unsuitable for use in soils that have anaerobic zones [A subset of GREs, collectively known as the x-succinate synthases, catalyze C\u2014C bond formation between a variety of unactivated hydrocarbons and fumarate, enabling further metabolism a. Many on\u22121\u00a0mg\u22121 . Recentln\u22121\u00a0mg\u22121 ,19. In an\u22121\u00a0mg\u22121 ,21. 
Contic zones ,23.Figurp-hydroxyphenylacetate, phenylacetate, and indoleacetate into p-cresol, toluene, and skatole, respectively , phenylaectively c. Each oytoluene . 60% of coal tar . Toluenecoal tar .Clostridioides difficile, a known p-cresol producer. PAD was discovered from two anoxic, toluene-producing microbial communities: municipal sewage sludge and lake sediments from Berkeley, CA. Considering functional similarities to HPAD, PAD was postulated to be a GRE. Through activity-guided protein fractionation and metagenomic and metaproteomic data analysis of active fractions, a putative GRE decarboxylase was identified. In vitro experiments showed that this enzyme was indeed PAD [These C\u2014C bond-cleaving GREs were each discovered using different methods . HPAD wadeed PAD . IAD wasdeed PAD .p-hydroxyphenylacetate in the active site of a crystal structure [para-hydroxyl group. Instead of oxidizing the carboxylate, the thiyl radical is postulated to abstract an H-atom from the \u03b1-methylene. Heterolytic decarboxylation of the resulting benzylic radical, protonation by a nearby His (identified only through homology modelling), and H-atom abstraction from the Cys yields toluene [Although each GRE catalyzes a conceptually similar decarboxylation reaction, it is presently unclear whether they employ related mechanisms d. The HPtructure as well tructure . PAD is toluene \u2022\u2022 , pote5\u20132.8\u00a0mM ,33; IAD\u2019\u00a00.37\u00a0mM ). A bett\u00a00.37\u00a0mM . 3HPA ca\u00a00.37\u00a0mM and 1,3-\u00a00.37\u00a0mM . Importacofactor , 36, 37.n fatty aldehydes, generating the corresponding Cn\u22121 hydrocarbon that can be used as petrodiesel [x fatty acids to Cx\u22121 terminal olefins that can be directly employed as liquid fuel or industrial feedstocks [II ions ligated by protein residues [Diiron enzymes catalyze C\u2014C cleavage reactions through oxidative and oxygenative mechanisms. For example, aldehyde-deformylating oxygenase (ADO) uses molecular oxygen to catalyze C1\u2014C2 bond cleavage of free saturated or monounsaturated Crodiesel . In the edstocks . This rePseudomonas strains known to produce 1-undecene [In vitro biochemical characterization showed it converts medium-chain fatty acids into the corresponding terminal olefins. Under conditions of multiple turnover using ascorbate as an external reductant, the rate for UndA in vitro (0.0011\u00a0s\u22121) was much lower than the initial rates under single turnover conditions (0.06\u00a0s\u22121). Nevertheless, simple overexpression of UndA homologs in Escherichia coli led to titers 25-fold higher than in Pseudomonas [Pseudomonas fluorescens Pf-5 UndA had only a single Fe atom bound in close proximity to the \u03b2-H-atom of a substrate analog, leading to the proposal that it was a mononuclear non-heme Fe enzyme [P. syringae pv. tomato was crystalized and found to have conserved ligands consistent with binding two Fe atoms, although no Fe was bound in the crystal structure [UndA was first identified by screening a fosmid library from various undecene . In vitrudomonas . The inie enzyme . More retructure . HoweverP. fluorescens UndA corroborated these findings [2 activation was detected and determined to be a \u03bc-peroxo-Fe2(III/III) complex using a combination of stopped-flow UV\u2013vis and freeze-quench M\u00f6ssbauer spectroscopy. An Fe2(III/IV) complex also accumulated in this experiment although it is unclear if this species is on pathway. 
One current mechanistic proposal is reported in Additional experiments with the findings \u2022. In thiThere is great utility in continuing to discover and study radical enzymes, as they catalyze an abundance of difficult chemical transformations not readily achieved by traditional synthetic chemistry. Many radical enzymes already produce compounds of great value (e.g. hydrocarbons produced by diiron enzymes), and future engineering efforts could further expand the scope of products accessed as well as improve enzyme activity. Realizing the potential of radical enzymes in engineered pathways will require continued progress in elucidating the mechanisms of characterized enzymes and expanded efforts to discover new family members.Nothing declared.\u2022 of special interest\u2022\u2022 of outstanding interestPapers of particular interest, published within the period of review, have been highlighted as:"} +{"text": "Two-dimensional materials including TMDCs, hBN, graphene, non-layered compounds, black phosphorous, Xenes and other emerging materials with large lateral dimensions exceeding a hundred micrometres are summarised detailing their synthetic strategies.Crystal quality optimisations and defect engineering are discussed for large-area two-dimensional materials synthesis.Electronics and optoelectronics applications enabled by large-area two-dimensional materials are explored.. Large-area and high-quality two-dimensional crystals are the basis for the development of the next-generation electronic and optical devices. The synthesis of two-dimensional materials in wafer scales is the first critical step for future technology uptake by the industries; however, currently presented as a significant challenge. Substantial efforts have been devoted to producing atomically thin two-dimensional materials with large lateral dimensions, controllable and uniform thicknesses, large crystal domains and minimum defects. In this review, recent advances in synthetic routes to obtain high-quality two-dimensional crystals with lateral sizes exceeding a hundred micrometres are outlined. Applications of the achieved large-area two-dimensional crystals in electronics and optoelectronics are summarised, and advantages and disadvantages of each approach considering ease of the synthesis, defects, grain sizes and uniformity are discussed. Synthesis of high-quality and atomically thin materials in large areas is a subject of an intensive and ongoing investigation. Controllable growth of ultrathin two-dimensional 2D) materials in large areas enables design and integration of electronics devices with complex components, providing enhanced interfaces for optical and heterostructure devices . DetrimeD materiaAdvantages and disadvantages of synthetic approaches considering challenges in thickness control and the resultant crystal quality are discussed by characterising the defects, disorders and grain sizes. Finally, the overview of applications in electronics and optoelectronics exploited by printing large-area materials in 2D are provided.2 and NbSe2 in 2005 isolated in 2D below 100\u00a0\u00b5m in lateral dimensions ..2 with pion step . Keller ion step exploredion step isolatedand MoS2 and repoand MoS2 . Using tand MoS2 exfoliatand MoS2 . Mechanier TMDCs .Fig.\u00a03LaThis section presents achievement of the large-area high-quality TMDCs crystals readily available to be incorporated into practical industrial applications. 
Many of these methods investigate the growth or isolation of single TMDCs; however, further, development is needed to produce heterojunctions and Janus structures in large-scale as both of these two types of structures are of great interest for high-performance electronic and optical applications \u201390. EnlahBN has been widely investigated in fundamental science and used for device applications as an insulator, gate-dielectric, passivation layer, tunnelling layers, contact resistance, charge fluctuation reduction and Coulomb drag . There a2 single-crystal domains . G. G39]. Gene Fig.\u00a0b 101]. . 0.65Ge0\u221210 torr) with a maximum achieved single crystal with areas of up to 100\u00a0\u00b5m2. However, compared with other 2D Xenes, borophene has yet to achieve lateral dimensions exceeding tens of micrometres . . 2 [2 morisation . PLD metity Fig.\u00a0f 33].Fi.Fi2 [2 m2S3 in a single unit cell exceeding 200\u00a0\u00b5m. This material feature air-stable p-type semiconductor ferromagnet with intriguing properties. Yu et al. synthesised 2D VSe2 using exfoliation electrochemically to produce atomically thin layers with strong ferromagnetic properties at high curie temperatures for potential memory device applications [Emerging 2D magnetic materials for potential application in spintronics, valleytronics and twistronics with large lateral dimensions have rarely been realised. Chu et al. synthesiications . Develop2, TaS2 and TaSe2 nanosheets by damaging the crystal and oxidisation [The quest for the synthesis of large-area atomically thin 2D materials with uniform thicknesses and minimum structural defects has effectively led to many successful reports and emerging strategies. This topic is the subject of extensive and ongoing research presenting several performance and scalability challenges to be adopted by industry. One major drawback in the development of large-area high-quality 2D materials is the lack of spectroscopic solutions for analysing the quality of the obtained large-area 2D materials in atomic resolution in a single measurement. Current methods to capture HRTEM at atomic resolution for centimetre-scale 2D materials are performed through stitching images and locally verifying the grain boundary sizes. In addition, electron irradiation during TEM has found to introduce defects in 2D materials even at relatively low acceleration voltages of 80 and 60\u00a0kV , 152. Bedisation , 180. Th2 have been achieved by CVD on a molten glass as a substrate with lateral dimensions of more than half a millimetre featuring high performances [3 [3. LM seems to be a frontier in 2D oxide synthesis with uniform thicknesses [Among synthesis methods, top\u2013down approaches, such as ME, are low cost and produce high-quality exfoliated 2D sheets exceeding half a millimetre in lateral dimensions, however, lacking scalability and yield . Successormances . CVD metormances . Compariormances . MBE metormances , 41. Fewormances , 32. MOCormances , 34. Recormances , 182. Inances [3 and consances [3 have beecknesses . HoweverLarge-area synthesis of 2D materials has substantial implications for industrial uptake which has evolved to a fast-developing field of science. The recent development in the field of quantum computing will push the materials science explorations to optimise high-quality and large-scale synthesis of 2D materials systems featuring topological states, superconductivity and spin polarizability sites. 
There is nonetheless a vast scope for enhancing current technologies and developing emerging synthetic techniques."} +{"text": "Rational design of parent zeolites with concentrated and non\u2010protective coordination of Al species can facilitate post\u2010synthetic treatment to produce mesoporous ZSM\u20105 nanoboxes. In this work, a simple and effective method was developed to convert parent MFI zeolites with tetrahedral extra\u2010framework Al into Al\u2010enriched mesoporous ZSM\u20105 nanoboxes with low silicon\u2010to\u2010aluminium ratios of \u224816. The parent MFI zeolite was prepared by rapid ageing of the zeolite sol gel synthesis mixture. The accessibility to the meso\u2010micro\u2010porous intra\u2010crystalline network was probed systematically by comparative pulsed field gradient nuclear magnetic resonance diffusion measurements, which, together with the strong acidity of nanoboxes, provided superb catalytic activity and longevity in hydrocarbon cracking for propylene production.ZSM\u20105 zeolite nanoboxes with accessible ZSM\u20105 nanoboxes with low SARs: Rapid ageing of a sol gel synthesis mixture produces a parent zeolite with tetrahedral extra framework Al and a low silicon\u2010to\u2010aluminium ratio (SAR) value of \u224812, which is transformed to mesoporous ZSM\u20105 nanoboxes with low SARs of \u224816 during post\u2010synthetic TPAOH treatment. The mesoporous ZSM\u20105 nanoboxes show comparatively enhanced mass transfer ability, catalytic cracking activity and longevity. For parent ZSM\u20105 zeolites with low silicon\u2010to\u2010aluminium ratios (SAR) of <20 , post\u2010synthetic alkaline treatments are not effective for the formation of mesoporous features, and sequential fluorination\u2010desilication and steaming\u2010desilication for creating mesoporous structures in the ZSM\u20105 zeolites are necessary.meso\u2010micro\u2010pore architecture and concentrated Br\u00f8nsted acidity.The use of structural directing agents (SDAs), especially tetrapropylammonium hydroxide (TPAOH), in the modification of MFI zeolites physisorption, solid\u2010state nuclear magnetic resonance (NMR) spectroscopy and ammonia temperature programmed desorption (NH3\u2010TPD) analysis at different stages of the synthesis. Notably, the obtained ZSM\u20105 nanoboxes possess significantly high Al concentration (SAR of \u224816) in its shell (\u224820\u2005nm) and mesoporous features . The percolation diffusion of probing molecules within the materials was assessed by pulsed\u2010field gradient NMR (PFG\u2010NMR) measurements . Previous research has shown that PFG\u2010NMR is a powerful tool of investigating the mass transport in zeolites with mesoporous featuresn\u2010octane and cumene) is demonstrated, showing excellent activity and selectivity to propylene due to the unique combination of pore structural features and chemical properties . The simple and effective strategy solves the challenge of preparing mesoporous ZSM\u20105 zeolites with low SAR values.Herein, we report a simple yet effective method to synthesize mesoporous ZSM\u20105 nanoboxes with the low SAR value of \u224816. The method involves (i) the synthesis of a parent zeolite with tetrahedral extra\u2010framework Al (EFAL) and (ii) post\u2010synthetic treatment of the parent zeolite with TPAOH solution show the single peak at about 24.4\u00b0 as well >98\u2009%, Figure\u2005S2b) and mesoporous features . 
By extending the treatment time beyond 12\u2005h, excessive dissolution occurred, damaging the intactness of the hollow structure to certain extents (as evidenced by transmission electron microscopy (TEM) and scanning electron microscopy (SEM) analysis, Figures\u2005S4 and S5) and variation in mesoporosity of ZSM\u20105\u2010P zeolites (Table\u2005S1). The excessive dissolution due to the prolonged treatment time (>12\u2005h) was also reflected by the reduced RC values of the relevant ZSM\u20105\u2010P zeolites .For the first time, the rapid ageing of the sol gel synthesis mixture was explored to produce the as\u2010synthesized parent MFI zeolite (AS\u2010MFI) with tetrahedral EFAL (see Supporting Information for details). Comparison of X\u2010ray powder diffraction (XRD) patterns of AS\u2010MFI and conventional ZSM\u20105 (C\u2010ZSM\u20105) is shown in Figure\u200527Al magic angle spinning (MAS) NMR analysis on the external surface during the post\u2010treatment (6\u2005h to 96\u2005h) produced zeolite nanoboxes with Al\u2010rich walls . X\u2010ray photoelectron spectroscopy (XPS) also shows that the surface SARs of ZSM\u20105\u2010P zeolites are lower than the respective bulk ones detected by EDX and ICP (Table\u2005S1), suggesting the occurrence of Al redistribution during the TPAOH treatment.Post\u2010treatments using SDAs, especially TPAOH, are known to be effective to recover the dissolved species to a certain extent and to form hollow MFI zeolites with controlled properties such as wall thickness.SBET=375\u2005m2\u2009g\u22121) with uniform sizes of about 300\u2013500\u2005nm (as shown by the High resolution TEM (HRTEM) and scanning transmission electron microscopy (STEM) analysis, Figures\u2005m TPAOH, at 160\u2009\u00b0C for <12\u2005h) were suitable to produce the regular ZSM\u20105 nanoboxes with uniform cavities, which can be attributed to the preferential desilication of the siliceous part of AS\u2010MFI and recrystallization of dissolved Si and EFAL. The shell thickness of ZSM\u20105\u2010P zeolites is about 20\u2005nm adsorption\u2010desorption analysis =113\u2013149\u2005m2\u2009g\u22121 and mesopores volumes (Vmeso.)=0.16\u20130.26\u2005cm3\u2009g\u22121. Using concentrated SDA in the post\u2010treatment (24\u2005h) was not beneficial to the formation of mesoporous hollow structures , as well as reducing the crystallinity of the resulting zeolites , which is evidenced by various characterization data of the materials . This is again due to the fast and excessive dissolution, which suppresses the recrystallization rate, making the formation of mesoporous hollow structures challenging. The RC value of ZSM\u20105\u2010P\u20100.5\u201024 from the treatment using 0.5\u2009m TPAOH aqueous solution was only \u224851\u2009% (Table\u2005S2), suggesting significant loss of crystallinity due to the fast dissolution. N2 physisorption analysis also shows that the hysteresis loops of ZSM\u20105\u2010P\u20100.3\u201024 and ZSM\u20105\u2010P\u20100.5\u201024 zeolites are less significant in comparison to that of ZSM\u20105\u2010P zeolites . In summary, post\u2010treatment with TPAOH solution is effective to revive EFAL in the parent AS\u2010MFI, converting it into framework Al in ZSM\u20105\u2010P zeolites. However, the balance of dissolution and recrystallization needs to be regulated (by varying the treatment time and the concentration of aqueous TPAOH solution) in order to obtain well\u2010defined crystalline nanoboxes. 
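The relative crystallinity (RC) values cited above are tabulated in the Supporting Information and the underlying formula is not reproduced here; a common convention is to ratio the integrated areas of the characteristic MFI reflections of the treated sample against those of a fully crystalline reference. The sketch below assumes that convention and uses hypothetical peak areas, so it should be read as an illustration of the bookkeeping rather than the authors' exact procedure.

```python
import numpy as np

# Hypothetical integrated peak areas (arbitrary units) for the characteristic
# MFI reflections (e.g., the 2-theta ~7-9 deg and ~23-25 deg groups).
reference_peak_areas = np.array([1520.0, 980.0, 2410.0, 1130.0])  # parent/reference (assumed)
sample_peak_areas    = np.array([1260.0, 840.0, 2050.0,  960.0])  # post-treated sample (assumed)

def relative_crystallinity(sample, reference):
    """RC (%) as the ratio of summed characteristic peak areas, sample vs reference."""
    return 100.0 * sample.sum() / reference.sum()

print(f"RC = {relative_crystallinity(sample_peak_areas, reference_peak_areas):.1f} %")
```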
C\u2010ZSM\u20105 shows typical features of conventional ZSM\u20105 zeolites which were characterized as presented in Table\u2005S2. As discussed above, the post\u2010synthetic treatment of C\u2010ZSM\u20105 did not result in the development of mesoporous structures due to the abundant presence of framework Al as shown by solid state NMR , inhibiting the effective dissolution of Si, and resulting in the post\u2010treated ZSM\u20105 zeolite ) with limited and irregular mesopores as shown by SEM and TEM .AS\u2010MFI is primarily microporous , which is beneficial to catalysis. Due to the absence of Al\u2212O\u2212Si sites in AS\u2010MFI, its strong acidity is insignificant at 24.1\u2005mmol\u2009g\u22121, as shown in Figure\u2005\u22121), reflecting the reconstruction of EFAL to tetra\u2010coordinated framework Al during the post\u2010treatment. Regarding the acidity of C\u2010ZSM\u20105, it is comparable to that of ZSM\u20105\u2010P nanoboxes . However, the post\u2010treatment of C\u2010ZSM\u20105 (to P\u2010C\u2010ZSM\u20105\u20100.1 (6)) reduced the strong acidity by ca. 28\u2009% .Al\u2010rich ZSM\u20105\u2010P nanoboxes show improved acidity, especially strong acidity corresponding to Br\u00f8nsted acidity favor relevant zeolite catalyzed reactions such as propylene\u2010selective catalytic cracking. ZSM\u20105 is a widely used additive in cracking catalysis for improving propylene selectivity,n\u2010octane (kinetic diameter (KD)=0.43\u2005nm)n\u2010octane over different zeolites at 540\u2009\u00b0C as a function of time\u2010on\u2010stream (ToS). AS\u2010MFI shows insignificant activity compared to other catalysts due to the lack of framework Al, and thus Br\u00f8nsted acidity . Although the microporous C\u2010ZSM\u20105 presented the highest initial activity , it deactivated gradually and significantly over time . The deactivation of C\u2010ZSM\u20105 was due to coke formation on the external surface of the crystals, which was the result of the diffusion resistance caused by the pure microporous framework of C\u2010ZSM\u20105, leading to the loss of accessibility and acidity . Conversely, the ZSM\u20105\u2010P\u20100.1\u20106 nanoboxes promoted the diffusion of n\u2010octane through their newly formed percolation pore network, being highly stable regarding both n\u2010octane conversion (at ca. 73\u2009%) and selectivity to propylene . C\u2010HO\u2010ZSM\u20105 with the mesoporous hollow structure showed a stable catalytic performance in cracking n\u2010octane as well. However, due to the low concentration of strong acidity in C\u2010HO\u2010ZSM\u20105 (at 214.4\u2005mmol\u2009g\u22121), it was outperformed by ZSM\u20105\u2010P\u20100.1\u20106 nanoboxes by \u2248130\u2009% regarding n\u2010octane conversion. The used ZSM\u20105\u2010P\u20100.1\u20106 (denoted as ZSM\u20105\u2010P\u20100.1\u20106\u2010U) can be regenerated by calcination at 550\u2009\u00b0C under 10\u2005vol.\u2009% O2 in N2. The regenerated ZSM\u20105\u2010P\u20100.1\u20106 showed comparable chemical, physical and catalytic properties, as shown in Figures\u2005S19\u2013S21 and Tables\u2005S7,S8).Comparative catalytic evaluation of ZSM\u20105\u2010P\u20100.1\u20106 along with the control catalysts AS\u2010MFI, C\u2010ZSM\u20105, and a conventional hollow ZSM\u20105 nanoboxes with a SAR value of \u224845 was performed using cracking reactions with 2 physisorption and NH3\u2010TPD analysis of the used zeolite catalysts, Figure\u2005S22, Tables\u2005S9,S10). 
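For context on how figures such as the ~73% n-octane conversion and the propylene selectivity are typically computed from reactor outlet analysis, the sketch below uses one common carbon-basis definition. The exact definitions and flow values used in the study are not reproduced here, so both the formulas and the numbers are illustrative assumptions only.

```python
# Illustrative conversion/selectivity calculation for n-octane cracking (sketch).
# Molar flows are hypothetical; n-octane carries 8 carbons, propylene 3.

def conversion(F_in, F_out):
    """Fractional conversion of the feed (n-octane)."""
    return (F_in - F_out) / F_in

def carbon_selectivity(n_product, c_product, n_feed_converted, c_feed=8):
    """Carbon-basis selectivity: carbon in product / carbon in converted feed."""
    return (n_product * c_product) / (n_feed_converted * c_feed)

F_octane_in, F_octane_out = 1.00, 0.27   # mmol min^-1 (hypothetical)
F_propylene_out = 1.30                   # mmol min^-1 (hypothetical)

X = conversion(F_octane_in, F_octane_out)
S = carbon_selectivity(F_propylene_out, 3, F_octane_in - F_octane_out)
print(f"n-octane conversion           : {100 * X:.1f} %")
print(f"propylene selectivity (C-basis): {100 * S:.1f} %")
```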
By comparing the two ZSM\u20105 nanoboxes under study, ZSM\u20105\u2010P\u20100.1\u20106 showed remarkably better activity than C\u2010HO\u2010ZSM\u20105 on stream of 70\u2005h . More importantly, although with a low SAR value of \u224816, ZSM\u20105 nanoboxes remained stable as well, as evidenced by the comparable XRD and N2 physisorption analysis of the used ZSM\u20105\u2010P\u20100.1\u20106 before and after steam ageing . The specific micropore surface area (Smicro) of the used ZSM\u20105\u2010P\u20100.1\u20106 after steam ageing dropped by ca. 6\u2009% (from 415\u2005m2\u2009g\u22121 to 391\u2005m2\u2009g\u22121), while the RC values remained comparable at \u224885\u2009%.The accessibility issue was substantial when relatively bulky cumene (KD=0.68\u2005nm)meso\u2010micro\u2010porous structure of ZSM\u20105\u2010P nanoboxes contributes to the measured catalytic activity; this was experimentally confirmed by PFG\u2010NMR measurements carried out at a 1H frequency of 43\u2005MHz, with a diffusion probe capable of producing magnetic field gradient pulses up to 163\u2005mT\u2009m\u22121, at atmospheric pressure and 25\u2009\u00b0C. The mass transport properties of the zeolites under study measured by PFG\u2010NMR are shown in Figure\u2005g, Figure\u2005S1). According to Eq.\u2005(S1), the NMR signal attenuation of PFG\u2010NMR experiments as a function of the gradient strength, E(g), is related to the experimental variables and the diffusion coefficient . PFG\u2010NMR plots, that is, log\u2010attenuation plots as shown in Figures\u2005n\u2010octane/cumene and 500\u2005ms for TIPB). Therefore, the calculated diffusion coefficients (D) represent the averaged molecular diffusivity across the whole zeolite particle.D of the guest molecules being studied), as shown in Table\u2005S13. Interestingly, PFG\u2010NMR measurements showed that D values of the probing molecules in ZSM\u20105\u2010P\u20100.1\u20106 were the smallest in comparison with the microporous AS\u2010MFI and C\u2010ZSM\u20105. As a consequence, the pore network tortuosity, defined as the ratio of the bulk diffusivity of the guest molecule and that of the same molecule within the pore space, [\u03c4, Figure\u2005n\u2010octane (0.43\u2005nm) and cumene (0.68\u2005nm), for ZSM\u20105\u2010P\u20100.1\u20106, the comparatively small value of D and large value of \u03c4, suggest that the developed method created a new percolating network within ZSM\u20105\u2010P\u20100.1\u20106 zeolite crystals. As a result, the probing molecules gain access to the newly formed percolating network in the intra\u2010crystalline pores, which is more tortuous than the inter\u2010crystalline space, hence leading to lower values of the averaged measured diffusion coefficient due to increased collisions with the intra\u2010crystalline pore walls. Conversely, for the microporous AS\u2010MFI and C\u2010ZSM\u20105, the probing molecules diffuse primarily within the inter\u2010crystalline pore space . To prove this further, PFG\u2010NMR experiments were carried out using a bulky molecule, TIPB (kinetic diameter=0.94\u2005nm),n\u2010octane and cumene, PFG\u2010NMR plots of the materials are comparable for all the zeolite samples . 
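The attenuation expression referred to above as Eq. (S1) is not reproduced here; for a standard pulsed-gradient spin-echo experiment it is usually written in the Stejskal–Tanner form, E(g)/E(0) = exp[−γ²g²δ²(Δ − δ/3)D], so that the slope of ln E against the b-factor yields the diffusion coefficient D, and the tortuosity then follows as the ratio of bulk to intra-sample diffusivity. The sketch below assumes that form; the pulse parameters, attenuation data and bulk diffusivity are hypothetical placeholders rather than the measured values.

```python
import numpy as np

GAMMA_1H = 2.675e8          # 1H gyromagnetic ratio, rad s^-1 T^-1
delta = 2.0e-3              # gradient pulse length, s (assumed)
Delta = 50.0e-3             # observation time, s (as quoted for n-octane/cumene)

g = np.linspace(0.01, 0.163, 10)                       # gradient strengths, T m^-1
b = (GAMMA_1H * g * delta) ** 2 * (Delta - delta / 3)  # Stejskal-Tanner b-factor, s m^-2

# Synthetic attenuation data generated from an assumed intra-crystalline
# diffusivity, standing in for a measured E(g)/E(0) curve.
D_true = 5.0e-10                                       # m^2 s^-1 (assumed)
rng = np.random.default_rng(0)
E = np.exp(-b * D_true) * (1 + 0.005 * rng.standard_normal(g.size))

# Fit: the slope of ln E versus b equals -D
slope, _ = np.polyfit(b, np.log(E), 1)
D_fit = -slope

D_bulk = 2.0e-9                                        # bulk diffusivity of the probe (assumed)
print(f"D (fit)    = {D_fit:.2e} m^2 s^-1")
print(f"tortuosity = {D_bulk / D_fit:.1f}")
```

A smaller fitted D and correspondingly larger tortuosity is exactly the signature the text attributes to probe molecules sampling the newly formed intra-crystalline percolation network.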
Such finding can be clearly explained by considering the larger size of TIPB (in comparison with n\u2010octane and cumene), which hinders the access to the newly formed percolating network inside the crystalline space of ZSM\u20105\u2010P\u20100.1\u20106 zeolites, hence the probe molecules experience a faster diffusion, lower tortuosity, within the inter\u2010crystalline space only. This is also confirmed by the comparable D and \u03c4 values of the zeolites under investigation when TIPB was used as the probing molecule in PFG\u2010NMR measurements (Table\u2005S13). In comparison with the state\u2010of\u2010the\u2010art post\u2010synthetic alkaline treatments , the method produces ZSM\u20105 nanoboxes with (i) a well\u2010developed percolating meso\u2010micro\u2010porous network and (ii) high concentration of strong acid sites in zeolite crystals.As previously mentioned, it is likely that the creation of an intra\u2010crystalline accessible meso\u2010micro\u2010porosity and low SAR of such mesoporous ZSM\u20105 led to (i) the significantly improved accessibility of guest molecules to the active sites (evidenced by PFG\u2010NMR measurements) and (ii) comparably high yet stable catalytic performance in cracking reactions, which is important for the development of specific propylene\u2010selective catalysts aiming to improve the current on\u2010purpose propylene production technologies.ZSM\u20105 zeolites are important catalysts for many catalytic conversions such as catalytic cracking (to increase propylene selectivity by cracking gasoline range molecules selectively), MTO, alkylation, and ethanol dehydration. Accessibility issues and mass transfer limitations in ZSM\u20105\u2019s microporous framework commonly affect the outcome of the catalytic reaction to a great extent. Post\u2010synthetic desilication treatment of ZSM\u20105 is the easiest way to introduce mesoporosity to ZSM\u20105 zeolites but is limited by SAR of the parent zeolites. This work presents a simple and novel strategy by rapid ageing of the sol gel mixture to prepare the parent zeolite with tetrahedral EFAL and low SAR of about 12, which can be subsequently reconstructed to give mesoporous hollow ZSM\u20105 nanoboxes with low SAR of \u224816. The developed protocol removed the limitation of SAR of the parent zeolite on properties of the post\u2010treated ZSM\u20105. The unique combination of the hierarchical The authors declare no conflict of interest.As a service to our authors and readers, this journal provides supporting information supplied by the authors. Such materials are peer reviewed and may be re\u2010organized for online delivery, but are not copy\u2010edited or typeset. Technical support issues arising from supporting information (other than missing files) should be addressed to the authors.SupplementaryClick here for additional data file."} +{"text": "The relationship among the social support, professional identity, and academic self-efficacy (ASE) of Chinese preservice special education teachers are explored by measuring the perceived social support, professional identity, and ASE of 302 undergraduate students. Results of the multiple regression are as follows. (1) A significant positive correlation exists among ASE, social support, and professional identity. When preservice special education teachers perceive high social support, they have a high sense of professional identity and high ASE. (2) Professional identity exerts a full mediation effect on the relationship between social support and ASE. 
In particular, social support positively influences ASE via professional identity. The results are discussed at the end of this paper and recommendations for improving the ASE of preservice special education teachers are presented. Teacher education has long played an intuitive and important role in special education. At present, the preservice education of special education teachers in China is carried out mainly by normal colleges specializing in special education teacher education and normal universities offering special education courses . CurrentAcademic self-efficacy (ASE) refers to a person\u2019s belief that he or she has the ability to complete the academic tasks prescribed by the school. It plays an important role among preservice special education teachers because it generally determines the learning motivation and academic achievement of students . At presAlthough the relationship among ASE, social support, and professional identity has been identified by several authors , to dateIn recent years, Chinese researchers have investigated the learning situation of preservice special education teachers and found that their ASE is only at a medium level , which may negatively affect the learning initiative of students . HoweverThe development of the ASE of preservice special education teachers may require social support . Social In addition to social support, professional identity may also affect the ASE of preservice teachers. Professional identity refers to an individual\u2019s identity and sense of belonging to a profession . ProfessStudies have shown that identity is related to and can predict ASE . For exaAlthough professional identity is important, its level is generally low in China . As mentIn summary, previous studies focused on the relationship between social support and ASE and the relationship between professional identity and ASE, but integrated research on the relationship between the three has been limited or even non-existent. Although social support is associated with ASE, far less is known about the mechanisms underlying this relationship. In particular, professional identity may play an important mediating role between the two; that is, social support may affect the professional identity of preservice teachers in special education and the promotion of professional identity will enhance academic self-efficacy. In addition, previous studies have focused mainly on Western European countries and the United States, but research in China, especially on preservice special education teachers, has been lacking. The training of preservice special education teachers in China puts more focus on theory than practice , whereasThis study aims to contribute to the understanding of ASE and related sociodemographic factors, such as social support and professional identity, among Chinese preservice special education teachers. Specifically, its objective is to investigate the following hypotheses:(1)H1a: Social support has a positive effect on professional identity.H1b: Social support positively affects ASE.H1c: Professional identity positively affects ASE.(2)H2: Professional identity mediates the relationship between social support and ASE.Chinese preservice special education teachers were selected from several universities, including South China Normal University, Lingnan Normal University, Guangdong Second Normal University, East China Normal University, and Southwest University, to answer questionnaires. 
This study used a convenient sampling method to distribute electronic questionnaires to these universities and then asked college students majoring in special education to fill in the questionnaires. A total of 322 questionnaires were collected and the recovery rate was 100%, of which 302 were valid and the effective rate was 93.7%. The participants included 58 males (19.2%) and 244 females (80.8%). The demographic characteristics of the participants are presented in Cross-sectional surveys were conducted in Mainland China, and the participants were selected from various universities. The participants were provided with a detailed description of the study and the intended use of the results. The participants were invited to complete a set of questionnaires, including a sociodemographic information questionnaire, the Social Support Questionnaire for Preservice Special Education Teachers , the ColThe Social Support Questionnaire for Preservice Special Education Teachers was developed by The College Students\u2019 Professional Identity Questionnaire was developed by Academic Self-Efficacy Scale was developed by p-value < 0.05 was considered statistically significant.Statistical analyses were conducted using SPSS 22.0. First, we generated means, standard deviations (SDs), and a correlation matrix to explore the associations among the variables. Second, we performed hierarchical multiple regressions to determine the respective contributions of sociodemographic variables, social support, and professional identity to ASE. The demographic variables, including the residence, gender, type of school , and year level of student teachers were entered in Block 1 of the regression analyses. These demographic variables were set as dummy variables. Social support and professional identity were standardized and entered in Block 2. Lastly, we used SPSS 22 with process analysis to detecThe means, SDs, and correlations of the study variables are presented in p < 0.01); that is, the higher the social support, the higher the professional identity, and vice versa. Furthermore, a significant positive correlation was observed between social support and the total score of ASE (p < 0.01); that is, the higher the social support, the higher the ASE, and vice versa. Lastly, a significant positive correlation was also found between professional identity and the total score of ASE (p < 0.01); that is, the higher the professional identity, the higher the ASE. Therefore, a relationship exists among the social support, professional identity, and ASE of preservice special education teachers.A significant positive correlation existed between social support and the total score of professional identity of the variance in ASE, whereas the gender and grade level of preservice special education teachers were statistically significant. The introduction of the social support and professional identity subscales in Step 2 accounted for 43.7% of the variance in ASE. Together, Steps 1 and 2 accounted for 47.4% of the variance in ASE, whereas grade and professional identity were statistically significant.Hierarchical multiple regression analyses were performed to examine the predictors of ASE. The results of the analyses are presented in b = 0.09, SE = 0.06, t = 1.64, p > 0.05) when the effect of professional identity was considered. This finding indicates that professional identity completely mediates the relationship between social support and ASE. The result of the z = 8.70, p < 0.001). 
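The mediation test itself was run in SPSS with the PROCESS macro; purely as a point of reference, the percentile-bootstrap logic behind such a test can be sketched in a few lines of Python. The data below are simulated, and the simple three-variable model (social support → professional identity → ASE) is an illustration of the procedure rather than a re-analysis of the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 302  # sample size matching the study; the data themselves are simulated

# Simulated standardized scores: X = social support, M = professional identity, Y = ASE
X = rng.standard_normal(n)
M = 0.5 * X + 0.8 * rng.standard_normal(n)
Y = 0.6 * M + 0.05 * X + 0.7 * rng.standard_normal(n)

def ols_coefs(y, design):
    """OLS coefficients for y ~ design (design already contains an intercept column)."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

def indirect_effect(x, m, y):
    ones = np.ones_like(x)
    a = ols_coefs(m, np.column_stack([ones, x]))[1]       # X -> M
    b = ols_coefs(y, np.column_stack([ones, x, m]))[2]    # M -> Y, controlling for X
    return a * b

# Percentile bootstrap of the indirect effect (1000 resamples, as in the study)
boot = np.empty(1000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(X[idx], M[idx], Y[idx])

ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect_effect(X, M, Y):.3f}, "
      f"95% bootstrap CI = [{ci_low:.3f}, {ci_high:.3f}]")
```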
As shown in b = 0.49, SE = 0.05, t = 10.25, p < 0.001); that is, if the participants perceived more support, then their ASE would improve. Therefore, H1b was supported. Furthermore, social support made a significant contribution to professional identity . When student teachers perceived more social support from others, they were more likely to report higher levels of professional identity in college. Then, the hypothesis H1a was confirmed. In turn, professional identity exerted a positive influence on ASE . Accordingly, the hypothesis H1c was also confirmed. We corroborated that professional identity was a full mediator that explains the path from social support to ASE. In addition, we use G\u2217Power 3.1 software to perform power analysis on the whole model, and the results show that Cohen\u2019s f2 = 0.89. According to We tested for mediation by regressing the predictor variable, social support on ASE, while including the proposed mediator . We first conducted these analyses by including the demographic variables identified earlier as control variables. Mediation analysis was conducted using 1000 bootstrap samples, which confirmed the significant direct effect of social support on ASE. As predicted, however, the result was rendered insignificant . Therefore, we determined that professional identity mediated the relationship between social support and ASE. This model, i.e., Model 4 is illusOur study intended to broaden the current literature on ASE and related sociodemographic factors, such as social support and professional identity, among Chinese preservice special education teachers. In this study, ASE among preservice special education teachers is at a moderate level, which is consistent with the findings of Many people feel special education teachers have lower social status and lower income than regular teachers, which reduces their learning enthusiasm , and as Similar to other studies, our analysis also showed a strong relationship between social support and professional identity see and betwOur regression analyses support the hypothesis that professional identity completely mediates the relationship between social support and ASE. This phenomenon shows social support cannot predict ASE when the three variables are put together. These student teachers often have high ASE only after they identify with special education majors. The reason why professional identity is very important is that many people in China have a negative view of special education teachers, believing that they are poorer in ability and lower in social status than regular education teachers . If studThe important mediating role of professional identity indicates that strengthening social support is insufficient and the effective educational strategies adopted by universities to improve the students\u2019 sense of identity and belonging to their chosen majors should also be considered. By considering both social support and professional identity, students and teachers can feel more encouraged and confident, become more involved, and consequently enhance their ASE. Special education is an integral part of the education system and because of the special needs of students with disabilities, special education teachers require higher expertise and skills than regular teachers . TherefoThree limitations of this study should be mentioned. The first limitation is the cross-sectional design of this study; the findings reflect associations, but not causal relationships among the variables . 
LongituThe ASE level of preservice special education teachers is not ideal. A significant positive correlation exists among the ASE, social support, and professional identity of undergraduates majoring in special education. Professional identity plays a mediating role between social support and ASE. Therefore, professional identity and perceived social support are crucial for improving an individual\u2019s sense of ASE. In particular, social support positively influences ASE via professional identity.zhongjingxun@sina.com).The datasets generated for this study will not be made publicly available as permission was not granted in the consent. Permission to access the data can be made by contacting the corresponding author (XC and JZ provided the idea, designed this study, and wrote the manuscript. MiL and MaL contributed to data analysis and data collection. MiL revised this manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Neuropsychiatric disturbances show high rates of comorbidity with cerebrocardiovascular disorders. In particular, depression and anxiety independently predict both cerebrovascular and cardiac events, which in turn represent a major cause of mortality in individuals suffering from psychiatric illnesses. On the other hand, depressive symptoms are extremely common in poststroke and vascular dementia.From a molecular point of view, oxidative stress (OS) could not only represent the major contributor for the pathogenesis and progression of psychiatric and vascular disorders but also explain the high rates of comorbidity between these disorders.This special issue is aimed at improving the current knowledge about the role of OS in the occurrence and progression of neuropsychiatric and cerebrocardiovascular disorders.Oryza sativa and Anethum graveolens in male Wistar rats with metabolic syndrome.Three papers focus on the role of OS in diabetes-related complications. In particular, P. Yang et al. reviewed current evidence on the role of the advanced glycosylation end products in the occurrence of OS-related cardiovascular complications in diabetes. N. Palachai et al. demonstrated the beneficial role of mulberry and ginger in reducing the metabolic alterations as well as the OS status and proinflammatory cytokines in male Wistar rats with metabolic syndrome. Finally, J. Wattanathorn et al. demonstrated the antioxidant and anti-inflammatory effect of \u03b3-lyase.Two articles explore the detrimental effect of stress on health. More specifically, O. Hahad et al. reviewed the damaging effect of environmental noise on mental and cerebrocardiovascular health in relation to OS, with special focus on the autonomic nervous system, endocrine signaling, and vascular dysfunction. Instead, D. C. Wigger et al. studied the link between early life stress and the risk of cardiovascular disorders evaluating the expression of the myocardial oxytocin receptor and the enzyme cystathionine Two reviews summarize current knowledge on the relationship between cerebrocardiovascular disorders and neuropsychiatric disturbances. In particular, D. Lin et al. described the common OS pathways and risk factors shared by ischemic cardiocerebrovascular disorders and depression. M. Luca and A. 
Luca focused on the role of OS-related endothelial damage in the pathogenesis of both vascular depression and cognitive impairment, also commenting on the beneficial effect of aerobic physical exercise on these disorders.Ganoderma lucidum triterpenoids, an inhibitor of the ROCK signaling pathway, in improving cognitive performance and reducing hippocampal cell apoptosis in Alzheimer's disease model mice.Several authors studied the role of OS from a multidisciplinary point of view. In detail, L. Venturini et al. performed a pilot study supporting the anti-inflammatory and antioxidant effects of probiotic administration on chronic fatigue syndrome/myalgic encephalomyelitis. M. Castaldo et al. demonstrated that SH-SY5Y cells, whose proliferation, migration, and neurite outgrowth are improved by formyl peptide receptor-1, can stimulate NOX-dependent superoxide generation. N. Yu et al. reported the therapeutic effect of We believe that these contributions provide an updated and comprehensive view on the role of OS in neuropsychiatric and cerebrocardiovascular disorders."} +{"text": "Expansion of the Sahara Desert (SD) and greening of the Arctic tundra-glacier region (ArcTG) have been hot subjects under extensive investigations. However, quantitative and comprehensive assessments of the landform changes in these regions are lacking. Here we use both observations and climate-ecosystem models to quantify/project changes in the extents and boundaries of the SD and ArcTG based on climate and vegetation indices. It is found that, based on observed climate indices, the SD expands 8% and the ArcTG shrinks 16% during 1950\u20132015, respectively. SD southern boundaries advance 100\u2009km southward, and ArcTG boundaries are displaced about 50\u2009km poleward in 1950\u20132015. The simulated trends based on climate and vegetation indices show consistent results with some differences probably due to missing anthropogenic forcing and two-way vegetation-climate feedback effect in simulations. The projected climate and vegetation indices show these trends will continue in 2015\u20132050. Expansion of the Sahara Desert (SD) and greening of the Arctic tundra-glacier region (ArcTG) have profound societal and economic consequences and affected the regional and global climate5. They have been hot subjects under extensive investigations10.Global climate change has extensively modified landforms and terrestrial ecosystems in many parts of the world during past decades12. The SD expansion has been used by the United Nations and countries/organizations as an indication for action and is a hot topic under debate10. The vegetation indicator, such as the normalized difference vegetation index (NDVI), has been used to identify the location of SD southern boundary7. It is reported that the interannual fluctuations of SD southern boundary based on NDVI similar to that based on isohyet definition in 1980\u2013199713. Thomas & Nigam2 used precipitation as an indicator to define SD boundary and reported that the SD expands 10% during the 20th century. NDVI is calculated as the ratio between reflectance of a red band (RED) and a near-infrared band (NIR), NDVI\u2009=\u2009(NIR-RED)/(NIR\u2009+\u2009RED). It is a measure of chlorophyll abundance and energy absorption. Therefore, NDVI is just a qualitative measurement of vegetation conditions. While, leaf area index (LAI) provides a plant property measurement for plant density and growth, LAI is more accurate in quantifying surface vegetation condition and landform change. 
Therefore, LAI is used to identify the SD boundary, which can more realistically distinguish bare-ground and vegetated area and better represents SD landform change. Furthermore, although precipitation dominates the dryland ecosystem, the warming-induced high potential evaporation has additional impacts on regional drying14. Heat stress, particularly after the 1980s, is found to harm the recovery of the Sahelian ecosystem15. Temperature is considered as another important indicator to assess dryland conditions16. The K\u00f6ppen-Trewartha climate (KTC) index, which is associated with both precipitation and temperature and their seasonality, provides a globally coherent metric to quantify the landform change17. This index also relates climate variables to surface land cover types when it was designed18. The distribution of the world\u2019s major ecosystems and the KTC zones has shown a high degree of correspondence18. In this study, we use both LAI and KTC index to define the SD boundary to investigate current and future SD areal extent and boundary changes.The severe West African drought and land-use changes there in the 1970s-1980s caused land degradation and desert expansion, and deteriorated the food and water security in Sahelian countries19, resulting in changes in tundra ecosystem23. Evidence from several circumarctic treeline sites shows a clear invasion of tree and shrub into previous tundra area9, suggesting a decrease in the area of ArcTG. The northward shift of treeline would decrease high-latitude albedo and provide positive feedback, further enhancing global warming24. National Academies of Science, Engineering, & Medicine5 have reported recent substantial vegetation condition changes (greening and browning) in the Arctic region, and the implication of such vegetation changes. The northern treeline and summer temperature are used to define the boundary of ArcTG25. In fact, the treeline is nearly coincident with isotherm definitions over most Arctic land areas26. The KTC index is able to track Arctic tundra area shrinking. In this study, treeline and KTC index are both used as vegetation and climate indicators, respectively, to define the boundary of ArcTG and investigate their changes.Another region, the Arctic, that is investigated in this study is warming faster than the global average (\u201cArctic amplification\u201d)27. The 1980s climate regime shift represented a major change in the Earth systems from the atmosphere, land to the ocean, which is identified by abrupt mean status shift and trend change in temperature, precipitation, sea surface pressure, terrestrial ecosystem conditions, and many other variables28. Therefore, we also assess the decadal variability in SD, in addition to identify one trend for the entire period as did in many other studies2.Global climate change has led to remarkable vegetation condition and landform change at the global scale. Simultaneous changes are taking place in many regions across the globe, especially Sahelian regions and the Arctic have received more attention. Thus far, published literature normally discussed the land condition changes in these two regions in separate articles, and most study use only precipitation for SD and temperature for ArcTG. For vegetation conditions, most studies focused on changes in NDVI and other vegetation indicesIn this study, we use satellite LAI and treeline products to derive observed vegetation index and gridded precipitation and temperature data to construct observed climate index. 
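To make the two kinds of boundary indices discussed above concrete, the sketch below computes NDVI from red and near-infrared reflectance as defined earlier, applies an LAI threshold of the kind used for the vegetation-based desert boundary, and classifies a grid cell with simple Köppen–Trewartha-style thresholds (Patton's dryness formula for the arid boundary and the 10 °C/0 °C warmest-month limits for tundra and ice). These thresholds follow common usage of the KTC scheme and are assumptions here, not necessarily the study's exact criteria; all input values are hypothetical.

```python
def ndvi(nir, red):
    """NDVI = (NIR - RED) / (NIR + RED), as defined in the text."""
    return (nir - red) / (nir + red)

def is_desert_lai(annual_mean_lai, threshold=0.10):
    """Vegetation index: cell counts as bare ground if annual-mean LAI < threshold
    (mid-point of the 0.08-0.12 m2/m2 range used for the desert boundary)."""
    return annual_mean_lai < threshold

def ktc_class(monthly_t_c, monthly_p_mm):
    """Reduced Koppen-Trewartha-style classification (assumed thresholds):
    'ice'    - warmest month below 0 C
    'tundra' - warmest month between 0 and 10 C
    'desert'/'steppe' - Patton dryness threshold R = 2.3*T - 0.64*Pw + 41 (cm)
    'other'  - everything else."""
    t_warm = max(monthly_t_c)
    if t_warm < 0:
        return "ice"
    if t_warm < 10:
        return "tundra"
    p_cm = sum(monthly_p_mm) / 10.0
    t_ann = sum(monthly_t_c) / 12.0
    # Pw: percentage of annual precipitation in the six low-sun months
    # (Oct-Mar assumed here for the Northern Hemisphere).
    pw = 100.0 * sum(monthly_p_mm[m] for m in (0, 1, 2, 9, 10, 11)) / max(sum(monthly_p_mm), 1e-9)
    r = 2.3 * t_ann - 0.64 * pw + 41.0
    if p_cm < 0.5 * r:
        return "desert"
    if p_cm < r:
        return "steppe"
    return "other"

# Hypothetical dry Sahel-margin cell: hot, with ~300 mm of mostly summer rain
t = [22, 25, 29, 32, 34, 33, 31, 30, 31, 30, 26, 23]
p = [0, 0, 1, 2, 10, 40, 90, 100, 45, 10, 1, 0]
print(ktc_class(t, p), is_desert_lai(0.07), round(ndvi(0.30, 0.25), 3))
```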
The National Centers for Environmental Prediction (NCEP) Climate Forecast System version-2 (CFSv2) coupled with the Simplified Simple Biosphere model version 2 (CFS/SSiB2), and coupled with a dynamic vegetation model (CFS/SSiB4), are used in this study. The dynamic vegetation model allows vegetation coverage, LAI, and relevant surface biophysical properties such as roughness length to interact with climate, while in CFS/SSiB2, these vegetation parameters are specified based on a vegetation table (see \u201cModels and outputs\u201d in Method for detail). The comparison between results from CFS/SSiB2 and CFS/SSiB4 allows to investigate the two-way vegetation-climate feedback in landform change.29, and has longer records. Therefore, the climate index is used to investigate long-term trend and decadal variability of the areal extent and boundary changes over SD and ArcTG in this study. The results from both climate index and vegetation index are cross-validated, and the possible causes for their difference are discussed. The areal extents derived from climate index will be denoted with a subscript of \u201cOBS-Clim\u201d for observation and \u201cCFS/SSiB2-Clim\u201d and \u201cCFS/SSiB4-Clim\u201d for CFS/SSiB2 and CFS/SSiB4 simulations, respectively. For the vegetation index, \u201cVeg\u201d will replace \u201cClim\u201d accordingly. The statistics for their areal extents and changes are summarized in Table\u00a0Vegetation index directly reveals geographic boundaries of SD and ArcTG and their changes. The satellite based the vegetation index only covers the period after the 1980s when satellite data becomes available. Climate index has shown consistent results with that of vegetation indexOBS-Clim covers about 9.5 \u00d7 106 km2 across North Africa during 1950\u20131984, and a shrinking of 12,000 km2/year (p\u2009<\u20090.01) in 1984\u20132015 .During 1950\u20132015, observed climate index shows that SD015 Fig.\u00a0. However015 Fig.\u00a0. The larCFS/SSiB2-Clim and SDCFS/SSiB4-Clim are well correlated with SDOBS-Clim and 8000 km2/year expansion from 1950 through 2015, accompanied by the expansion of southern boundaries by 70\u2009km contribution to the drought during the 1980s, which should cause land degradation12. This anthropogenic effect is missing in this CFS simulation, which may lead to underestimation of the SD expansion rate during 1950\u20131984. Moreover, consistently fewer changes in the CFS/SSiB2 simulation compared with that in CFS/SSiB4 in SD and following ArcTG demonstrate the importance of two-way vegetation-climate feedback in landform change. The CFS models reproduce up to 70% of the observed expansion trend during 1950\u20131984 without consideration of LULCC in models. Meanwhile, during the SD shrinking period, while no remarkable LULCC occurred, CFS models are able to reproduce the observed shrinking trend. Therefore, the climate factors dominate SD changes compared to other effects, such as LULCC.The simulated climate indices properly reproduce SD extent and its changes during 1950\u20132015 . An asymmetrical boundary shift is projected, with about 40\u2009km northward displacement in the western Sahel and 60\u2009km southward displacement in the eastern Sahel 4.5 scenario of the Intergovernmental Panel on Climate Change 5th Assessment Report (AR5), which only CFS is capable to conduct, the simulated climate indices show that with no LULCC the SD will further expand by about 6000 kmhel Fig.\u00a0. In the 30. 
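The expansion and shrinking rates quoted throughout (for example, 11,000 km² per year with p < 0.01) are linear trends fitted to annual areal-extent series. A minimal sketch of that fit on a synthetic series is shown below; the series itself is illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

# Synthetic annual desert-area series (10^6 km^2), standing in for the
# climate-index areal extents; the numbers are illustrative only.
years = np.arange(1950, 1985)
rng = np.random.default_rng(1)
area = 9.3 + 0.011 * (years - years[0]) + 0.05 * rng.standard_normal(years.size)

res = stats.linregress(years, area)
print(f"trend = {res.slope * 1e6:,.0f} km^2/yr, p = {res.pvalue:.3g}")
```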
We employ a range of 0.08\u20130.12 m2/m2 as the non-vegetation criterion to calculate the SD extent and its deviation with the assigned LAI range. The observed and simulated mean geographic SD extents (SDOBS-Veg and SDSSiB4-Veg) based on this range are 9.5 \u00d7 106 km2 and 9.6 \u00d7 106 km2, respectively, with boundaries nearly coincident with those based on their corresponding climate indices , close to the change based on SDOBS-Clim expansion during 1950\u20132015. During 2015\u20132050, the SDCFS/SSiB4-Veg has projected a 6900 \u00b1 600 km2/year (p\u2009=\u20090.14) expansion, close to that derived from climate index. In addition, the time series of SDCFS/SSiB4-Veg is also consistent with SDCFS/SSiB4-Clim with a correlation coefficient of 0.73 (p\u2009<\u20090.01) Fig.\u00a0 for the CFS/SSiB4-Veg expands 90\u2009km southward during 1950\u20132015 and will advance 40\u2009km further southward in the eastern Sahel during 2015\u20132050. In the western Sahel, no significant change is projected during 2015\u20132050, different from the projection based on climate index. The CFS/SSiB2 uses specified LAI. As such, no assessment can be made based on the vegetation index. With two definitions, we cross-evaluate the uncertainty in assessing/project SD expansion due to two different definitions and show they are generally consistent. Some discrepancies are likely due to errors in satellite-derived LAI and simulated climate and vegetation variables over the sparse vegetation area31.The southern boundary of SDet al., 2003; Swann et al., 2010; Schaefer et al., 2011; Pearson et al., 2013; Frost and Epstein, 2014), but reports on landform change at continental scale are lacking. The observed climate index shows that the average ArcTGOBS-Clim covers 5.7 \u00d7 106 km2 in 1950\u20132015 monotonically from 1950 through 2015 in response to global warming during 1950\u20132015, with boundary retreats by 50\u2009km\u2009in North America and 30\u2009km\u2009in Eurasia decrease in ArcTG extent, with 60\u2009km retreat in North America and 40\u2009km retreat in Eurasia by 2050 and damages the root system that would prohibit tree establishment. These factors are not considered in the ArcTGOBS-Clim and ArcTGCFS/SSiB4-Clim and produce lower area extent estimation with these two indices compared to vegetation indices. We cannot assess either the long-term average of ArcTGOBS-Veg extent or the advance rate using the CAVM treeline product since it is only for 2003. The treeline advance for the 20th century with various starting dates has been reported in a number of site measurements across the circumarctic forest-tundra ecotone36, indicating an Arctic shrinking in the past decades. The simulated ArcTGCFS/SSiB4-Veg covers 6.8 \u00d7 106 km2 for the period of 1950\u20132015, and covers 6.5 \u00d7 106 km2 for the year 2003. The simulated ArcTGCFS/SSiB4-Veg shrinking has consistency with the above-mentioned field measurements and shows a shrinking ArcTG during 1950\u20132015. The ArcTGCFS/SSiB4-Veg boundary retreat, however, shows a different asymmetry in the North American and Eurasian continents compared to that indicated by the climate index. Although the Eurasian treeline shifts 50\u2009km poleward, consistent with that of ArcTGCFS/SSiB4-Clim, but no significant change in the North American tree line is found for ArcTGCFS/SSiB4-Veg , respectively, and is projected to expand about 6600\u20136900 km2/year in 2015\u20132050, with southern boundary displace southward and vegetation index. 
In previous studies, only precipitation or NDVI was used to make an assessment in separate studies. We found that the area of SD expands 11,000 km2/year during 1950\u20132015 based on observed climate/simulated climate/simulated vegetation indices . The CFS simulation without dynamic vegetation substantially underestimates the shrinking rate, suggesting the two-way vegetation-climate interaction produces positive feedback and enhances the ArcTG shrinking. The discrepancies between the climate and vegetation indices reveal that the geographic changes are not only determined by the climate, but also affected by species-specific traits and local environmental conditions.The area of ArcTG reduces 14,000/10,000/13,000 kmThe land condition in these two regions have shown to have a substantial impact on climate, weather and ecosystems at continental and even, probably, global scales. We believe this article should stimulate more following scientific researches/debating on these subjects, which should provide useful information for economic and societal decisions with broad public interests.2/m2) in North Africa is defined as the geographic location of the Sahara Desert in this study. A range from 0.08\u20130.12\u00a0m2/m2 is used to assess the uncertainty of the threshold.The area with annual mean leaf area index (LAI) less than a threshold and one precipitation base group. The threshold . Global Historical Climatology Network/Climate Anomaly Monitoring System (GHCN_CAMS) gridded 2-m temperature over land at 0.5\u00b0\u2009\u00d7\u20090.5\u00b0 resolution with monthly interval was also obtained38. These datasets are applied to calculate the climate index for the period 1950\u20132015 for SD (SDOBS-Clim) and ArcTG (ArcTGOBS-Clim). In 1950, there are about 40 and 35 stations located around the southern and northern boundaries of SD, respectively, while about 186 stations reported observed 2-m temperature in the north of 60\u2009\u00b0N.Climatic Research Unit (CRU) time series (TS) provides gauge-based precipitation at 0.5\u00b0\u2009\u00d7\u20090.5\u00b0 horizontal-grid and monthly temporal resolution39. The GLASS LAI provides observations at 8-day temporal resolution and 1-km spatial resolution for the period from 1982 to 2017. It is used to calculate the observed vegetation index for SD (SDOBS-Veg).The Global Land Surface Satellite (GLASS) LAI was obtained to locate the non-vegetated area in North Africa. GLASS LAI was generated from AVHRR reflectance (1982\u20131999) and MODIS reflectance (2000\u20132012)25 to identify the geographic ArcTG (ArcTGOBS-Veg). This data set is only available for the year 2003.We also use the Circumpolar Arctic Vegetation Map (CAVM) treeline product40 coupled with the Simplified Simple Biosphere model version-2 (CFS/SSiB2)44, and CFSv2 coupled with a dynamic vegetation model (CFS/SSiB4)47, are used in this study. The dynamic vegetation model allows vegetation coverage, LAI, and relevant surface biophysical properties such as roughness length to interact with climate, while in CFS/SSiB2, these vegetation parameters are specified based on a land cover map48 and a vegetation table49. The CFS has an interactive ocean component, the Modular Ocean Model version-4 (MOM450), developed from the Geophysical Fluid Dynamics Laboratory (GFDL).The National Centers for Environmental Prediction (NCEP) Climate Forecast System version-2 (CFSv2)et al.43 and the land initial conditions for CFS/SSiB4 are obtained from Liu et al.15. 
We first integrate the offline SSiB4 hundreds years to reach an equilibrium conditions, then using observed meteorological forcing to drive SSiB4 to obtain the vegetation conditions from 1949 to 2007. The 1949 conditions in Liu et al.15 is used as the CFS/SSiB4 initial conditions for this study. The simulations use atmospheric CO2 concentrations from the World Meteorological Organization (WMO) Global Atmospheric Watch (http://ds.data.jma.go.jp/gmd/wdcgg/) for the past and from a medium RCP scenario (RCP4.5) for the future and are updated once a year. The simulated temperature and precipitation from CFS/SSiB2 and CFS/SSiB4 are used to construct climate index, and the LAI and vegetation fraction from CFS/SSiB4 are used to calculate vegetation index. No vegetation index can be constructed from CFS/SSiB2 run. The difference between those two simulations implies the role of two-way vegetation-climate feedback on landform change. Model outputs are corrected with bias correction.Two simulations are conducted using CFS/SSiB2 (without climate and ecosystem interaction) and CFS/SSiB4 (a dynamic vegetation process is included), respectively, integrated from 1949 through 2050, with T126 L64 spectral discretization . The ocean and atmospheric initial conditions are obtained from Lee et al.51 to minimize model systematic biases. The model-simulated variable (Mod\u2032) is decomposed into a climatological mean component (Mod\u2032):In addition to observational data, model-simulated temperature, precipitation, and LAI are also used to determine the extents of the study areas. We conducted bias correction at each grid point as did in Bruyere Obs\u2032)The observational data (Mod*) is written as:The bias-corrected simulated variable and the Arctic tundra-glacier (ArcTG) in this study. For convenience, we use the SD as an example in the following presentation. To do so, the total SD area for each year is obtained by taking an area sum after weighting each grid-cell area classified as SD multiplied by the cosine of its latitude. The SD time series is then used to investigate the temporal variability and calculate the linear trend of the areal extents. Since this method does not identify the location of the SD boundary, we use a modified endpoint method following Thomas & Nigamy\u2032, is corrected to preserve the original mean of variable y\u2032, respectively. Equations .6\\docum53.Since the time series show strong multi-decadal variations in some areas, such as the SD , a piecewise model is applied to detect the linear trend of a variable"} +{"text": "Ocean ecosystem models predict that warming and increased surface ocean stratification will trigger a series of ecosystem events, reducing the biological export of particulate carbon to the ocean interior. We present a nearly three-decade time series from the open ocean that documents a biological response to ocean warming and nutrient reductions wherein particulate carbon export is maintained, counter to expectations. Carbon export is maintained through a combination of phytoplankton community change to favor cyanobacteria with high cellular carbon-to-phosphorus ratios and enhanced shallow phosphorus recycling leading to increased nutrient use efficiency. These results suggest that surface ocean ecosystems may be more responsive and adapt more rapidly to changes in the hydrographic system than is currently envisioned in earth ecosystem models, with positive consequences for ocean carbon uptake. 
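Returning to the Methods for the areal extents described above: the bias-correction equations are not reproduced here, but the description (splitting model output and observations into a climatological mean plus a perturbation while preserving the observed mean) is consistent with the widely used mean-state correction Mod* = Obs̄ + (Mod − Mod̄). The sketch below implements that reading together with the cosine-latitude area weighting used to sum grid cells classified as desert or tundra; both should be taken as an interpretation of the text, not the authors' code, and the grid and values are hypothetical.

```python
import numpy as np

R_EARTH_KM = 6371.0

def grid_cell_area_km2(lat_deg, dlat_deg, dlon_deg):
    """Approximate area of a lat-lon grid cell, proportional to cos(latitude)."""
    dlat = np.deg2rad(dlat_deg)
    dlon = np.deg2rad(dlon_deg)
    return R_EARTH_KM**2 * dlat * dlon * np.cos(np.deg2rad(lat_deg))

def masked_area_km2(mask, lat_deg, dlat_deg=0.5, dlon_deg=0.5):
    """Total area of grid cells flagged True in `mask` (2-D lat x lon array)."""
    cell = grid_cell_area_km2(lat_deg, dlat_deg, dlon_deg)[:, None]  # broadcast over lon
    return float((mask * cell).sum())

def mean_state_bias_correction(model, obs_clim, model_clim):
    """Mod* = Obs_clim + (Mod - Mod_clim): keep the model perturbation but replace
    its climatological mean with the observed one (assumed reading of the text)."""
    return obs_clim + (model - model_clim)

# Tiny illustration on a 0.5-degree band over North Africa (hypothetical mask)
lats = np.arange(15.25, 30.25, 0.5)
lons = np.arange(-15.25, 35.25, 0.5)
mask = np.ones((lats.size, lons.size), dtype=bool)   # pretend every cell is desert
print(f"masked area = {masked_area_km2(mask, lats):,.0f} km^2")

t_model, t_model_clim, t_obs_clim = 31.0, 29.5, 28.8  # deg C, hypothetical
print(f"bias-corrected T = {mean_state_bias_correction(t_model, t_obs_clim, t_model_clim):.1f} C")
```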
The ability of the ocean\u2019s biota to sequester carbon is thought to be negatively affected by climate change. Here the authors use time-series data in the Sargasso Sea to show that biotic processes can buffer against these negative impacts. Concerns about the impact of upper-ocean warming and increased stratification on phytoplankton production and carbon export have grown during the last decade7, with observations of synchronous increases in surface ocean temperature and an apparent global decrease of phytoplankton biomass, inferred from changes in chlorophyll a (Chl a) concentration4. Upper-ocean warming is thought to positively influence phytoplankton primary production by impacting their photosynthetic metabolism8, and has a negative effect by increasing stratification leading to reductions in vertical nutrient inputs and subsequent nutrient limitation9. Warming of the upper ocean is also predicted to reduce the magnitude of biological carbon export to the ocean interior by reducing upwelling of nutrients and/or by shifts in phytoplankton community composition toward smaller, less dense picophytoplankton11.Phytoplankton play a central role in regulating global ocean biogeochemical cycles and production in marine food webs13. Fixed stoichiometric models suggest that global net primary production (NPP) may decline by up to 20% (ref. 5), whereas flexible stoichiometric models indicate a\u2009<\u200910% reduction in NPP7. The Arctic Ocean and subtropical gyres appear the most sensitive global ocean regions in both models, with decreases in NPP exceeding the global mean from the models. These NPP reductions, in the models, appear to be driven by a greater decline in phytoplankton growth rate than in phytoplankton biomass stocks, supporting the general view of bottom-up controls on NPP. As with NPP, flexible stoichiometric models predict smaller decreases in the magnitude of the biological carbon pump due to modeled increases in nutrient use efficiency associated with the flexible stoichiometry of phytoplankton14. In terms of reducing the impact of nutrient limitation on carbon export, the benefits of flexible stoichiometry are partially negated by the reduction in mean phytoplankton cell size, which, in the models, also limits the export of carbon associated with smaller phytoplankton. Resolution of these competing processes impacting phytoplankton-mediated processes based upon in situ data is lacking. For example, in the Sargasso Sea, the resident phytoplankton community has a highly flexible macronutrient stoichiometry16. The cyanobacteria genera Synechococcus and Prochlorococcus contribute substantially to particulate carbon export17, resulting in an overall efficient carbon export system18.The magnitude of this predicted series of changes in marine biogeochemistry is in part dependent upon the stoichiometric relationship between the major macronutrients carbon (C), nitrogen (N), and phosphorus (P), with the canonical Redfield ratio set at 106\u2009C:16\u2009N:1\u2009P19. Here, we continue to analyze long-term patterns in the coupling of NPP, carbon export, phytoplankton composition, and stoichiometry. We ask the following questions: (1) Does surface ocean warming covary inversely with nutrient inventories? (2) Do planktonic communities display a stable or variable taxonomic and elemental composition? 
and, (3) How do these interactions impact observed carbon export?In the Sargasso Sea at the Bermuda Atlantic Time-series Study (BATS) site, we have quantified relationships between phytoplankton production, nutrients, ocean physics, and carbon export for three decades. Recent studies have confirmed an inverse relationship between temperature and NPP20. The 1990s were characterized by a weak but significant increase in temperature at 10\u2009m , whereas the following decade (2000s) did not exhibit a significant temperature trend . The increase in near-surface temperature during the 2010s was fourfold greater than the 1990s and accounted for the majority of the total temperature increase observed from 1990 to 2020 with a period of weak to no warming (1990\u20132000s).Surface ocean temperatures significantly increased by ~0.9\u2009\u00b0C over the entire duration of the BATS record analyzed here (1990\u20132020); however, the increase has not been uniform over time020 Fig.\u00a0. This pa21, which, ultimately, negatively impacts subsequent vertical nutrient inputs. During the 2010s, but not prior decades, the difference between winter maximum and summer minimum mixed layer depths significantly decreased due primarily to shallowing of the winter maximum. This pattern is consistent with the temporal pattern in the magnitude of the wintertime North Atlantic Oscillation (NAO) index and phosphate . It is noteworthy that the anomalous 2010 wintertime NAO index25 resulted in a significant increase in nutrient inventories from which this decade-scale decline began. Throughout the 2010s, the dissolved organic phosphorus (DOP) inventory also decreased significantly .The increase in temperature and decrease in the amplitude of seasonal mixing through the 2010s was correlated with a significant net decline in inventories (0\u2013140\u2009m) of nitrate Fig.\u00a0. By the 27. However, at BATS, counter to predictions, carbon export fluxes at 150\u2009m did not significantly decrease throughout the 2010s increased from 0.04 in 2010 to 0.16 in 2019, indicating that the ecosystem became more efficient at exporting carbon from the euphotic zone, despite the decade-long decrease in nutrient inventories and NPP.While global biogeochemical models predict this coordinated series of processes\u2014ocean warming, increasing stratification, reduction in nutrient inputs, and subsequent decrease in NPP\u2013they also predict a decrease in carbon export, thus reducing the magnitude of the biological carbon pump85) Fig.\u00a0. While iProchlorococcus and Synechococcus biomass . We were unable to detect changes in the biomass of larger, less abundant microphytoplankton, as the summed prokaryote and small eukaryote carbon biomass estimates by FCM were only marginally greater than the independently estimated total phytoplankton carbon from each other. Thus, the ~40% decline in NPP mentioned previously appears due primarily to the reduction in phytoplankton biomass during the 2010s, not a change in physiological condition. This change in the relative abundance of phytoplankton populations, and subsequent reduction in effective mean cell size of the phytoplankton population with the decrease of larger eukaryotes, only further confounds the decoupling of nutrients, NPP, and carbon export fluxes. 
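The decade-wise temperature trends and the export ratio (carbon export divided by NPP) reported above reduce to simple calculations; the sketch below, assuming annual-mean series as plain arrays, is illustrative only and the names are hypothetical.

import numpy as np
from scipy import stats

def decadal_trend(years, values):
    """Least-squares linear trend: slope, its standard error, R^2, and p-value."""
    res = stats.linregress(years, values)
    return res.slope, res.stderr, res.rvalue ** 2, res.pvalue

def e_ratio(carbon_export, npp):
    """Export ratio: particulate carbon export at 150 m divided by NPP (same units)."""
    return carbon_export / npp

# e.g. slope, se, r2, p = decadal_trend(years_2010s, sst_2010s)
#      eff = e_ratio(export_150m, npp_integrated)   # rose from ~0.04 to ~0.16 in the text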
Thus, explanations for the decoupling of nutrients, NPP, and carbon export likely involve changes in both phytoplankton processes and ecosystem processes .The decline in NPP throughout the 2010s was concurrent with a reorganization of the phytoplankton community Fig.\u00a0. Throughass Fig.\u00a0. The dec04) Fig.\u00a0. In addins) Fig.\u00a0. Further29, and is generally viewed as a phosphorus-limited system32. As such, changes in C:P stoichiometry are commonly studied in this region. Flexible macronutrient stoichiometry, associated with changes in phytoplankton community composition, has been hypothesized to buffer reductions in carbon export under conditions of increased stratification and nutrient depletion14. Based upon Tanioka and Matsumoto\u2019s sensitivity ratio14\u00a0and the observed decrease in phosphate inventory of ~25\u201330%, we would expect a reduction in carbon export of ~18\u201322%. Throughout the 2010s, when nutrient inventories and NPP were decreasing, there were significant and concurrent changes in the macronutrient stoichiometric ratios of exported material and N:P ratios . This trend in increasing C:P and N:P ratios led to a nearly fourfold increase in the C:P (205\u2009\u00b1\u2009108 versus 722\u2009\u00b1\u2009226) and N:P (29\u2009\u00b1\u200914 versus 101\u2009\u00b1\u200935) stoichiometric ratios of exported material by the end of the 2010s when compared to the pre-2010 period . A nutrient inversion model yielded a similar C:P ratio of exported material (355\u2009\u00b1\u200965) in the subtropical North Atlantic Ocean33 but did not resolve temporal changes in the ratio. As both carbon and nitrogen export flux rates did not decrease, the C:N stoichiometric ratio in exported material at 150\u2009m depth did not significantly change between the period before and after 2010 and averaged 7.23\u2009\u00b1\u20091.61 , and/or due to enhanced shallow remineralization of P between the base of the euphotic zone (100\u2009m) and the shallow sediment traps (150\u2009m), resulting in nutrient trapping35. This nutrient trapping effect is not 100% efficient, as the integrated nutrient inventories do continue to decline. However, we hypothesize that this effect slows down the rate of nutrient inventory decline and thus serves as another \u201cbiological buffer\u201d to the negative impacts of upper-ocean stratification associated with climate change.The Sargasso Sea is one of several western ocean gyre regions characterized by extremely low phosphate concentrationsial Fig.\u00a0, without.61 Fig.\u00a0. This teR2\u2009=\u20090.17, P\u2009<\u20090.001, slope\u2009=\u200914.5\u2009\u00b1\u20092.9 units year\u22121) throughout the 2010s and reached values higher than in the 2000s. The N:P ratio, constrained by both elements potentially being limited, also increased significantly . Stoichiometric ratios in the lower euphotic zone displayed temporal patterns qualitatively similar to the same ratios in exported flux ratios , which due to high variance, is statistically similar to the increase in the seston C:P ratio . The change in modeled phytoplankton C:P is based upon using Synechococcus to represent the phytoplankton community, due to lack of validation datasets for other phytoplankton taxa, but this should also qualitatively reflect the change in the C:P Prochlorococcus39, which combined, constitute the largest portion of phytoplankton in the Sargasso Sea40. 
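For concreteness, the molar stoichiometric ratios of exported material and the first-order expectation derived from a stoichiometric sensitivity ratio can be written as below; this is a sketch, and the numerical sensitivity used here is an illustrative placeholder rather than the value from the cited work.

def molar_ratios(c_flux, n_flux, p_flux):
    """C:P, N:P, and C:N of exported material, with fluxes already in molar units."""
    return {"C:P": c_flux / p_flux,
            "N:P": n_flux / p_flux,
            "C:N": c_flux / n_flux}

def expected_export_decline(p_decline_frac, sensitivity=0.7):
    """Expected fractional drop in carbon export for a given fractional drop in
    phosphate inventory; sensitivity=0.7 is a placeholder, not the published ratio."""
    return sensitivity * p_decline_frac

# expected_export_decline(0.25), expected_export_decline(0.30)  ->  ~0.18, ~0.21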
This increase in modeled phytoplankton C:P, for a taxonomically \u201cconstant\u201d phytoplankton assemblage, is hypothesized to be due to slight decreases in growth rates that have a disproportionate impact on C:P ratios38. Neither the observational estimates of phytoplankton growth rate based upon carbon nor the modeled growth rate than ratios at the 150-m sediment trap, suggesting enhanced shallow remineralization of both P and N between 100 and 150\u2009m42. Most clearly, for the coupling of C:P ratios, it appears that both changes in the ratio of source material and enhanced remineralization of P are important. Based on the change over time in seston C:P ratios relative to changes over time in C:P ratios of exported material, we estimate ~20% of the change in the ratio of exported material is due to this change in the source material. By difference, ~80% of the change is due to enhanced shallow demineralization, which has been hypothesized as an important mechanism to sustain carbon export in oligotrophic regions43. Data for bacterial productivity between 100 and 150\u2009m is scant during the decade of the 2010s, thus we cannot assess if the apparent increase in shallow remineralization is associated with changes in bacterial productivity, zooplankton \u201crepackaging\u201d44, and/or other unidentified processes.While the temporal patterns of stoichiometric ratios in the euphotic zone mirrored those of exported particulate matter, C:P ratios in the euphotic zone were consistently and significantly lower . While the time series post-2011 has not been analyzed specifically for salps, we examined a metric that could reflect changes in salp abundance, zooplankton dry to wet weight ratio , however, no significant trends were observed. We also examined size-fractioned zooplankton biomass data to determine more broadly if there have been changes in community structure. Both day- and nighttime absolute zooplankton biomass of the largest (>5\u2009mm) size class increased significantly in the period before 2010, and significantly decreased in the 2010s decade BATS cruise, a quasi-lagrangian sampling scheme is employed, and for this manuscript, all CTD/hydro casts falling within a 0.25\u00b0 latitude by 0.25\u00b0 longitude box centered around the BATS site are presented. Further details of the sampling scheme, analytical methods, data quality control (QC) and quality assurance (QA), and history of sampling procedures are available in the BATS methods manuals60 and in published papers64. Detailed methods are provided below.The history of the Sargasso Sea ocean time-series research and basic understanding of the physical and biological characteristics of this region are described in detail in prior reviews24). CTD profile data were used to estimate mixed layer depths using a 0.2\u2009\u00b0C variable sigma-\u03b8 criterion65. Near-surface (10\u2009m) temperature anomalies were generated by seasonally detrending the entire data record.Continuous CTD data were collected from the downcast and calibrated against discrete samples soluble reactive phosphorus (SRP) method68, as modified for BATS31. 
During each sample run, commercially available certified standards, OSIL and Wako Chemical, are analyzed to maintain consistent data quality, as well as \u201cstandard water\u201d from 3000\u2009m that serves as an internal standard.Samples for nitrate and phosphate were filtered (0.8-\u00b5m polycarbonate filter) into acid-washed HDPE bottles, and frozen (\u221220\u2009\u00b0C) until analysis using standard air-segmented autoanalyzer methods59. Particulate phosphorus samples (PP) were analyzed using an ash-hydrolysis method with oxidation efficiency and standard recovery checks31.Particulate organic carbon (POC), nitrogen (PON), and phosphorus (PP) samples were filtered on precombusted Whatman GF/F filters and frozen until analysis on a Control Equipment 440-XA elemental analyzer69. Samples were preserved and flash-frozen in liquid nitrogen before being stored at \u221280\u2009\u00b0C until analysis by flow cytometry. Small cyanobacteria were identified as either Synechococcus or Prochlorococcus based upon cell size and the presence or absence of phycoerythrin, respectively. Eukaryotes were defined as other chlorophyll-containing cells not being cyanobacteria. They were separated into picoeukaryotes (<3\u2009\u00b5m) and nanoeukaryotes (>3\u2009\u00b5m) based upon their forward scatter signal relative to 3-\u00b5m polystyrene beads. Phytoplankton cell abundance was converted to carbon per cell using a normalized cell size-carbon relationship and then to population biomass by multiplying by cell abundance (40).Samples for pico- and nanoplankton enumeration were collected on each cruise from June 2002 to the presenta) to obtain a single slope for the dataset that relates the two parameters. This approach was used rather than taking the average ratio of discrete POC and Chl-a integrals as it allows for the exclusion of \u201cnon-phytoplankton\u201d POC from the relationship between phytoplankton POC and Chl-a. The slope of the POC:Chl-a regression was then multiplied by integrated Chl-a to estimate phytoplankton POC. This estimate of total phytoplankton carbon was compared to the sum of flow cytometry-derived picoplankton carbon, with any differences indicative of larger microplankton not well measured by flow cytometry.Total phytoplankton carbon was estimated independently by regressing integrated (0\u2013140\u2009m) total POC against total chlorophyll using an assumed ratio of total inorganic carbon present to radiocarbon added. From 1990 through 2005, samples were collected with Go-Flo bottles on a Kevlar line, and from November 2004 to the present, samples were collected from Niskin bottles on the CTD rosette. Rates of primary production were corrected for dark carbon uptake and integrated to a depth of 140\u2009m23.In situ production estimates were determined using surface-tethered arrays with samples spaced every 20\u2009m between the surface and 140\u2009m. Rates of primary production were calculated from the incorporation of H71, which facilitates the accurate computation of phytoplankton C:N:P under a variety of environmental conditions. Here, the original trait-based based model was modified following Tanioka et al.38 to predict phytoplankton C:P as a function of satellite-based growth rate, Chl:C, and P limitation. We used the model parameter set for the freshwater cyanobacteria \ufeffSynechococcus linearis except for a hard-bound maximum C:P of 335 at the zero growth rate, following experimental chemostat data on the marine cyanobacteria Synechococcus WH810272. 
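The regression-based estimate of total phytoplankton carbon described above (the slope of integrated POC against integrated Chl-a, multiplied by integrated Chl-a) might be sketched as follows, assuming per-cruise integrals in plain arrays; the names are illustrative, not the authors' code.

import numpy as np
from scipy import stats

def phytoplankton_carbon(poc_int, chla_int):
    """Phytoplankton POC from the POC:Chl-a regression slope.

    poc_int, chla_int : 0-140 m integrated POC and Chl-a per cruise (same length)
    The intercept absorbs "non-phytoplankton" POC; only the slope is used.
    """
    res = stats.linregress(chla_int, poc_int)
    return res.slope, res.slope * np.asarray(chla_int)

# any excess of this estimate over summed flow-cytometry carbon
# (prokaryotes + small eukaryotes) would point to larger microplankton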
With this slight modification to the model parameters, the model C:P matches the observed C:P from the culture experiment when the growth rate is less than 0.8\u2009d\u22121 are based on the estimates from the Carbon-based Productivity Model (CbPM)73. Chl:Cphyto ratio, a proxy for light limitation, is computed by dividing MODIS-derived Chl-a with Cphyto. Both growth rate and Chl:Cphyto are assumed to be vertically uniform in the mixed layer. P limitation is inferred by comparing MODIS-derived monthly mean SST and empirically derived phosphate depletion temperature, a temperature above which phosphate is no longer detectable74.We report here a monthly and area-averaged C:P for a 3-by-3 pixel around BATS predicted using the trait-based model forced with satellite input derived from MODIS-aqua. Satellite-derived monthly growth rate and phytoplankton carbon (CSupplementary Information"} +{"text": "Tamarix ramosissima has been widely used as barbecue skewers for the good taste and unique flavor it gives to the meat, but the effects of T. ramosissima on heterocyclic amine (HA) formation in roast lamb are unknown. The influence of T. ramosissima extract (TRE) on HA formation, precursors\u2019 consumption, and free radicals\u2019 generation in roast lamb patties were elucidated by UPLC-MS, HPLC, and electron spin resonance (ESR) analysis, respectively. Six HAs were identified and compared with the control group; the total and polar HAs decreased by 30.51% and 56.92% with TRE addition at 0.30 g/kg. The highest inhibitory effect was found against 2-amino-1-methyl-6-phenylimidazopyridine (PhIP) formation (70.83%) at 0.45 g/kg. The addition of TRE retarded the consumption of HA precursors, resulting in fewer HAs formed. The typical signal intensity of free radicals in roast lamb patties significantly decreased with TRE addition versus the control group (p < 0.05), and the higher the levels of the TRE, the greater the decrease in signal intensity. We propose that the inhibitory effects of TRE on HA formation, especially on polar HAs, were probably achieved by retarding the consumption of precursors and preventing free radicals from being generated in roast lamb patties. These findings provide valuable information concerning TRE\u2019s effectiveness in preventing HA formation through both the precursor consumption and free radical scavenging mechanisms. Tamarix ramosissima bark is widely used as barbecue skewers and has a long history of such use; it gives the meat a good taste and flavor unique to southern Xinjiang. However, one of the problems encountered with this delicacy is that the flow of dry air in the roasting oven causes the evaporation of surface water, which means harmful compounds can form, such as heterocyclic amines (HAs) quinoline), MeIQ , MeIQx , 4,8-DiMeIQx , 7,8-DiMeIQx , PhIP , Harman , Norharman , Trp-P-2 , Trp-P-1 , A\u03b1C , and MeA\u03b1C \u2014were supplied by Toronto Research Chemicals . The standards of free amino acids, creatine, and creatinine were purchased from Sigma-Aldrich . Oasis MCX cartridges were supplied by Waters . The TRE was obtained followed our previous described [All of the chemicals and solvents were of HPLC or analytical grade. Analytical standards of 12 Has\u2014IQ , and the results were expressed as mg per 100 g samples.The contents of creatine or creatinine in the meat samples were determined using a method described by Haskaraca et al. 
, and resGlucose in the roast meat samples was determined with a glucose (GO) assay kit, product number GAGO20-1KT by Sigma-Aldrich, Shanghai, China.ESR measurements were performed to evaluate the mechanisms of HA formation and reduction in roast lamb. ESR spectra were obtained using an A300-10 ESR spectrophotometer at room temperature. First, 0.60 g of meat samples were put into a cylindrical ESR tube (ER221/TUB4 Bruker quartz tube with a diameter of 0.5 cm) for measurement. The ESR settings were slightly modified from those described previously : center g for 10 min at 4 \u00b0C, and the supernatant was collected. Then, 30 mL of the ethyl acetate layer of the supernatant was transferred into Waters Oasis MCX cartridges activated by 6 mL of methanol, 6 mL of distilled water, and 6 mL of 0.1 mol/L HCl. Afterwards, the cartridges were sequentially rinsed with 6 mL of 0.1 mol/L HCl and 6 mL of methanol. The retained HAs were eluted with 6 mL of methanol\u2013ammonia mixture . All of the analytes were concentrated under nitrogen flushing and dissolved in 250 \u03bcL methanol and filtrated through a 0.22 \u03bcm syringe filter just before UPLC-MS/MS analysis.Twelve polar and non-polar HAs were extracted by solid-phase extraction according to Zeng et al. with fewThe HAs in the lamb patties were identified and quantified on an Acquity UPLC BEH C18 column at 35 \u00b0C. The gradient elution was achieved with a binary mobile phase of 10 mmol/L ammonium acetate (pH 6.8) (A) and acetonitrile (B). The solvent composition was 0\u20130.1 min, 90% A; 0.1\u201318 min, 10\u201330% B; 18\u201320 min, 30\u2013100% B; 20\u201320.1 min, 100\u201310% B; and the total flow rate was 0.3 \u03bcL/min. The injection volume was 2 \u03bcL. The mass spectrometric conditions were as follows: positive ion mode; capillary voltage, 3.5 kV; ion source temperature, 120 \u00b0C; desolvation temperature, 400 \u00b0C. Data acquisition and processing were performed using Masslynx 4.1 . The HAs were quantified with calibration curves of each kind of HA at eight calibrant levels ranging from 0.2 to 30 ng/mL. According to Zeng et al., the determinations of LOD and LOQ were according to the signal-to-noise ratio (S/N) method, and the recovery rates for each HAs in the samples were determined by the standard addition method [p < 0.05 was selected as the level for significant differences.Statistical analyses were conducted using the IBM SPSS Statistics program ver. 22 . An ANOVA and Duncan\u2019s test were used to assess the differences between different treatments. Pearson\u2019s correlation was performed to evaluate relationships among different groups. Experiments were conducted in triplicate and data are expressed as mean \u00b1 standard deviation (SD). We followed the methods for the extraction and quantification of HAs used in the study by Zeng et al. , which hp < 0.05). Compared with the control group, a significant decrease in IQ was observed with the inhibition of 63.44%, 59.68%, and 45.70% at the TRE levels 0.15, 0.30, and 0.45 g/kg (p < 0.05). There was no significant difference between the TRE levels 0.15 and 0.30 g/kg (p > 0.05), and both of the two lower TRE levels (0.15 and 0.30 g/kg) were significantly lower than the higher TRE level 0.45 g/kg (p < 0.05). A similar result was obtained by Guo et al. [As far as we know, no other study has evaluated the effects of TRE on HA formation in lamb during cooking. Different influences were examined of TRE on the total HA amount and individual HA levels. 
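As an illustration of the quantification workflow described above (eight-level calibration curves from 0.2 to 30 ng/mL and S/N-based detection limits), a minimal sketch follows; it assumes peak areas have already been exported from the acquisition software, and the S/N = 3 and S/N = 10 cutoffs are the common convention rather than values stated in the text.

import numpy as np
from scipy import stats

def calibration(conc_ng_ml, peak_area):
    """Linear calibration curve: peak area versus standard concentration."""
    res = stats.linregress(conc_ng_ml, peak_area)
    return res.slope, res.intercept, res.rvalue ** 2

def quantify(peak_area, slope, intercept):
    """Back-calculate a sample concentration (ng/mL) from its peak area."""
    return (peak_area - intercept) / slope

def lod_loq(baseline_noise, slope):
    """Signal-to-noise limits: LOD at S/N = 3, LOQ at S/N = 10 (common convention)."""
    return 3 * baseline_noise / slope, 10 * baseline_noise / slope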
The contents of the six detected HAs in grilled lamb patties, expressed as ng/g dry matter, are shown in o et al. , where Io et al. observedo et al. .p < 0.05) with the TRE concentration increased from 0.15 to 0.45 g/kg (p < 0.05), indicating that TRE had an excellent reducing effect on PhIP formation. Khan et al. found a 56% reducing effect of 0.2% Chrysanthemum morifolium flower extract on PhIP in roasting goat patties [As the second most abundant polar HA in the present study, PhIP is well-known as the most abundant HA formed in beef under normal cooking conditions . In the .45 g/kg . Signifi patties . Further patties . However patties .p < 0.05), respectively, after the addition of TRE at three levels compared to the control samples. It should be noted that when the addition of TRE was 0.45 g/kg, the concentrations of MeIQ were near the LOQ of the method, which was significantly lower than the other three groups (p < 0.05). Tengilimoglumetin et al. found that at 250 \u00b0C, hawthorn extract at the 0.5 and 1% levels decreased the formation of MeIQ by 34.33% and 44.78%, respectively [Similar inhibition was also observed in MeIQ, which was found to range from 0.10 to 0.26 ng/g in lamb samples. The concentrations of MeIQ decreased by 46.15%, 50.00%, and 61.54% (ectively . When frectively .p < 0.05), respectively. Similar results were also observed in previous research. Ahn et al. found that there were promotive or reductive effects on \u03b2-carbolines\u2019 formation with the addition of grape seed extract, pine bark extract, and Oleoresin rosemary [Chrysanthemum morifolium flower extract inhibited the formation of Harman and Norharman by 32% and 39%, respectively, while no significant difference was found compared with the control samples [Conversely, with the addition of the TRE, the co-mutagenic \u03b2-carbolines Harman and Norharman, which belong to the non-polar HAs, exhibited different behaviors to other HAs tested in the present study. Both were detected in all samples, and the maximum contents were 2.42 and 3.51 ng/g in the TRE group at 0.45 g/kg, respectively, indicating that the addition of TRE had a promoting effect on \u03b2-carbolines, especially on Harman. At the highest amount of TRE used in grilled lamb (0.45 g/kg), the concentrations of Harman and Norharman increased by 178.16% and 9.01% compared to the control group , respectively. In research performed by Guo et al., 0.92 ng/g MeA\u03b1C was detected in lamb patties roasted at 200 \u00b0C for 25 min [As another non-polar HA, the \u03b1-carboline MeA\u03b1C, with a range of 0.69\u20131.34 ng/g, was different from Harman and Norharman. As the TRE levels increased, the production of MeA\u03b1C decreased by 8.21%, 43.28%, and 48.51% , while the creatinine concentration increased under the roasting conditions. The reduction rate for all free amino acids between the raw lamb and the roasting control group ranged from 31.96% (Gly) to 65.25% (Phe). Similarly, Gibis et al. reported that all free amino acids decreased by approximately 50% after frying [The concentrations of different precursors\u2014namely, free amino acids, glucose, creatine, and creatinine\u2014are shown before and after roasting with different TRE levels in r frying . It has r frying ,24,41. Tr frying ,41.p < 0.05, p < 0.05, The addition of different TRE levels retarded the consumption of most free amino acids, especially for Asp, Thr, Ser, Ala, Phe, Lys, His, and Arg. 
Compared to the control group, the total amino acids increased by 14.08%, 7.49%, and 11.21% with the addition of TRE at 0.15, 0.30, and 0.45 g/kg, respectively, and also the contents of Asp, Thr, Ser, Phe, Lys, His, and Arg significantly increased at 0.30 and 0.45 g/kg TRE . Similar to the results in the present study, decreased glucose levels after cooking have been reported by several researchers [p < 0.05), which may be due to fewer HAs formed in the TRE group. Tengilimoglumetin et al. also observed that when artichoke extract was added to chicken breast meat samples at the 0.5 and 1.0% levels, the glucose concentrations increased significantly compared to the control group, whereas in the beef samples, similar contents were found between the artichoke extract treatment group and the control group [p < 0.01), and \u22120.67 (p < 0.05), respectively (2 = 0.88 (p < 0.01). However, Norharman was not significantly associated with the glucose level . The results of the present study are in accordance with those of Gibis and Weiss, who found the same correlations between glucose concentrations and the co-mutagens Harman and Norharman [The glucose concentration of raw lamb samples (1.24 mg/g) was similar to that of Gibis and Weiss, who determined the glucose level in raw lamb to be 1.82 mg/g . The gluearchers ,31,32,47earchers ,31. Simuol group . The corectively . Harman orharman . Meanwhiorharman .p < 0.05), whereas no significant difference was found among the groups with different TRE levels (p > 0.05). The present results are in accordance with those of Haskaraca et al., who stated that the addition of green tea extract had different effects on the creatine and creatinine in chicken compared with the control [Creatine or creatinine in raw meat plays a key role in the mutagenic activity of HAs in cooked meats ,18. As cp < 0.05), and the higher the levels of TRE, the more the signal intensity decreased. Compared to the control group, the peak height decreased by 8.67%, 17.31%, and 18.97% with the addition of TRE at 0.15, 0.30, and 0.45 g/kg, respectively. The correlation between free radicals and all the polar HAs in the present study was strongly significant, and the coefficients were 0.90, 0.69, and 0.89 between free radicals and PhIP, IQ, and MeIQ . In other words, both the amounts of HA and ESR signal intensity decreased with the addition of TRE.ESR spectroscopy is a method based on measuring the transitions of unpaired electrons in a magnetic field, and it is uniquely able to detect free radical species directly and specifically ,51. The In our previous study, TRE exhibited excellent free radical scavenging ability in vitro, and the phenolic compounds in TRE, such as isorhamnetin, cirsimaritin, quercetin, and kaempferol, were all proven to be good free radical scavenging agents . The remHowever, several researchers hold the opposite opinions that there is no positive correlation between the inhibitory activity of phenolic compounds on HA formation and free radical scavenging activities, and they consider that the mechanism of phenolic compounds\u2019 inhibition of HAs\u2019 formation is more complex than just free radical scavenging ,57. In rT. ramosissima in barbecue skewers to improve their flavor and inhibit the formation of HAs during processing.Generally, the results of the present study have shown that the addition of TRE can effectively inhibit the formation of HAs, especially polar HAs, in lamb patties roasted at 200 \u00b0C. 
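The Pearson correlations reported earlier in this record, for example between glucose and the beta-carbolines or between ESR signal intensity and the polar HAs, can be reproduced with a short sketch; the array names are illustrative.

from scipy import stats

def pearson(x, y):
    """Pearson correlation coefficient and two-sided p-value."""
    r, p = stats.pearsonr(x, y)
    return r, p

# e.g. r, p = pearson(glucose_mg_g, harman_ng_g)     # positive association in the text
#      r, p = pearson(esr_peak_height, phip_ng_g)    # reported coefficient ~0.90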
Most of the precursors\u2019 consumption was retarded, and the typical signal intensity of free radicals was remarkably decreased with the addition of TRE. The possible mechanism for the inhibition of HAs of TRE aligns well with the previously proposed mechanism involving unstable free radical Maillard intermediates\u2019 reactions, and the phenolic antioxidants in TRE acted as strong scavengers of free radical species. These findings may provide a basis for the wider use of"} +{"text": "Attentional bias for substance-relevant cues has been found to contribute to the persistence of addiction. Attentional bias modification (ABM) interventions might, therefore, increase positive treatment outcome and reduce relapse rates. The current study investigated the effectiveness of a newly developed home-delivered, multi-session, internet-based ABM intervention, the Bouncing Image Training Task (BITT), as an add-on to treatment as usual (TAU).N = 169), diagnosed with alcohol or cannabis use disorder, were randomly assigned to one of two conditions: the experimental ABM group ; or the control group . Participants completed baseline, post-test, and 6 and 12 months follow-up measures of substance use and craving allowing to assess long-term treatment success and relapse rates. In addition, attentional bias (both engagement and disengagement), as well as secondary physical and psychological complaints were assessed.Participants , or may relate to the diverse treatment goals of the current sample . The current findings provide no support for the efficacy of this ABM approach as an add-on to TAU in alcohol or cannabis use disorder. Future studies need to delineate the role of engagement and disengagement bias in the persistence of addiction, and the role of treatment goal in the effectiveness of ABM interventions. The persistent nature of substance use disorders is well known among researchers and professionals working with addicted patients , 2. On tTo the extent that AB plays a role in the persistence of substance use behavior, it seems relevant to develop interventions to address AB. Importantly, recent studies found that AB is largely unaffected by current treatments that target deliberate (\u201creflective\u201d) processes involved in decision-making and behavioral control , 13. AnoOne potentially important factor that may limit the efficacy of ABM trainings concerns the methodology of typical ABM training procedures, which typically involve the presentation of just two static stimuli. For example, visual probe AB training, which is based on the visual probe task , requireA second potential factor contributing to the inconsistent findings of ABM trainings concerns variation in the number of training sessions. That is, several studies have shown that a single session of ABM training can modify AB , 22. HowA third potential reason for inconsistency has been suggested based on results in anxiety-related ABM studies . This coA fourth potential factor that may contribute to variability of findings concerns participant motivation to change substance use behavior. That is, changes of behavior may be less likely to be observed when individuals are not motivated to change , 28. It Recognition that these various factors could collectively contribute to observed inconsistency in past findings suggests ways of potentially improving ABM training approaches, with the aim of increasing the clinically relevant effects of these interventions on substance use disorder symptoms. 
The current study was therefore designed to test a novel training that addresses the factors mentioned above, including (a) a more complex task configuration to more closely mimic real-life complexity, (b) multiple training sessions, (c) the delivery within the home environment, and (d) the inclusion of a treatment-seeking clinical sample.Although this novel training addresses some of the issues raised regarding to the above-mentioned factors, there is an important challenge to multi-session trainings and the delivery of interventions at home, namely motivation. There are indications from the field of anxiety research that compliance can be limited when participants are required to complete multi-session trainings, at least in non-clinical samples . One poThus far promising positive effects of cognitive bias modification approaches, and of ABM in particular, have been found when bias modification trainings has been delivered as part of patients\u2019 inpatient treatment \u201336. TherIn summary, the current study was designed as a multi-center randomized controlled two-armed trial to investigate the efficacy of a novel ABM intervention in reducing clinically-relevant symptoms of substance use behavior. The ABM intervention was provided as a home-delivered multi-session training to alcohol and cannabis dependent outpatients as an add-on to TAU.The present study was a multicenter randomized controlled two-armed, parallel-designed trial with one treatment arm (ABM intervention) and a control arm see . The desn = 1; [n = 2). Participants were treatment-seeking adult patients diagnosed with alcohol use disorder or cannabis use disorder , based on the diagnostic criteria of the Diagnostic and Statistical Manual of Mental Disorders Fourth/Fifth Edition [SD = 13.96; age range 18\u201378). Participants received TAU in line with current treatment policy and guidelines in Dutch addiction care, which consisted of 350 to 750 minutes of protocolized outpatient CBT-based intervention. The treatment goal of participants was either moderation or abstinence, depending on their own capacity and wishes, and the recommendation of the therapists. Participants experienced no or only limited secondary problems, such as financial or relational problems. See Patients were eligible for the study if they (a) were 18 years or older, (b) had a primary diagnosis of AUD or CUD, and (c) had an indication for and received TAU as described above. Patients were not eligible if they had a problem with gaming, gambling disorder, or internet addiction as measured with a short version of the C-VAT 2.0 , and/or Edition , with a Eligible patients of the involved treatment centers received information about the study from their therapist . Patients provided written permission for their contact information to be passed to the researcher who contacted the patients by phone to screen for eligibility, to explain the study, and to answer questions.Approval for the current study was given by the ethical committee of the University Medical Centre of Groningen , and the study was registered at the Netherlands Trial Register (NTR5497). Data collection took place between April 2016 and June 2019. This period was longer than originally planned and indicated in the pre-registration, which is due to slower inclusion of participants. This extension was approved by the grant agency .ABM); control group . After randomization, participants received an automated e-mail in which they were invited to start the baseline assessment. 
In order to prevent that potentially early effects of TAU would affect the baseline assessment, participants were requested to finish the baseline assessment before the fourth session of TAU. Patients who did not meet these requirements were excluded from further participation (n = 27). After baseline assessment, participants who were assigned to the ABM condition or to the placebo subgroup read the training instructions, and watched a short instruction video, followed by a five minutes practice session. For the subsequent three weeks, participants of the ABM condition and the placebo subgroup were invited to complete a training session on a daily basis. After this period, participants were invited to train for another three weeks three times a week, and thereafter once a week for the remaining time of TAU . If a participants did not respond to the invitations and the automatic reminders, a researcher contacted them by phone to remind them to fill in the online measurements.Attentional bias modification training. AB for alcohol/cannabis cues was trained away with the Bouncing Image Training Task (BITT), based on the Emotion-in-Motion training [training . In thisEach training session was divided into four games of 2.5 minutes , and the training consisted of 12 levels, gradually increasing in difficulty. All participants started with level one and could unlock more challenging levels by reaching 80 points or more (with a maximum score of 100). The points were calculated based on the amount of time participants were tracking the substance-irrelevant image. For each level the high score was stated, so that participants could challenge themselves during the next game to reach a higher score. During each training block, participants were able to track their progress in a green bar shown on the screen. These gamifications were included to enhance motivation and to make the training more appealing.Placebo condition. The placebo condition was designed to be similar to the active training, meaning that the stimuli, the design/layout, the temporal parameters, and the construction of levels were equal to the BITT. Thereby the placebo condition was suited to account for possible exposure effects of the alcohol/cannabis cues, and effects of adding a component to TAU. As the placebo condition was not configured to change attentional patterns towards substance-relevant cues, four squares containing substance-relevant images and four squares containing substance-irrelevant images moved on the screen. Participants were instructed to pay equal attention to all eight moving squares, and on random occasions, one of the images became green-filtered. Participants needed to click on this green-filtered image as quickly as possible. Both types of images became green-filtered equally often (50:50).Stimuli. There were two different task variants for the BITT as well as for the placebo condition\u2013one alcohol and one cannabis version. For each of these two variants two sets of 64 images (500 x 500 pixel) were assembled. The first set of images for both variants was used during all of the training sessions, except for the last session in which the second set of images was used to measure generalization to untrained stimuli. This last set of images was activated by a researcher once the therapists indicated the end of TAU. For the alcohol variant, each of these two sets of images compromised 32 alcohol-relevant images , and 32 alcohol-irrelevant images . 
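The scoring and level-unlocking rules described for the BITT (a 0-100 score proportional to the time spent tracking the substance-irrelevant image within each 2.5-minute game, with 80 points or more unlocking the next of the 12 levels) could be expressed as follows; this is a hypothetical sketch, not the task's actual implementation.

def game_score(tracking_time_s, game_length_s=150.0):
    """Score 0-100: share of one 2.5-minute game spent tracking the target image."""
    return round(100.0 * min(tracking_time_s, game_length_s) / game_length_s)

def next_level(current_level, session_score, max_level=12, unlock_at=80):
    """Unlock the next level when the session score reaches 80 or more."""
    if session_score >= unlock_at and current_level < max_level:
        return current_level + 1
    return current_level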
For the cannabis variant, each of the two sets of images consisted of 32 cannabis-relevant images , and 32 cannabis-irrelevant images .For each of the four games per session of the alcohol and cannabis variant of the BITT and placebo condition, eight different substance-relevant images and eight different substance-irrelevant images were randomly drawn from the activated set of 64 images, meaning that all 64 images of the first set of images were presented to the participants within each training session, except for the last training session in which all 64 images of the second set of images were presented.AB was measured with the Odd-One-Out assessment task . In thiIn line with the intervention and placebo condition, there was an alcohol and a cannabis variant of the Odd-One-Out assessment task. Each of these variants used three types of stimulus images, and each of these three stimulus sets comprised of 30 images , which were all different from the stimuli that were used for the BITT and placebo condition. For the alcohol variant of the assessment task, the three images types were alcoholic drinks, non-alcoholic-drinks, and flowerpots, whereas for the cannabis variant the three images types were cannabis-related objects, neutral daily devices, and flowers. The images of the flowerpots of the alcohol variant and the images of flowers of the cannabis variant were selected for the purpose of the current study. The other images were used in previous studies , 41. GivFor the alcohol variant of the task, engagement bias was calculated by subtracting the mean reaction time of the alcohol target trials from the mean reaction time of the neutral target in neutral distractors trials . For the cannabis variant, engagement bias was calculated by subtracting the mean reaction time of the cannabis target trials from the mean reaction time of the neutral target in neutral distractors trials . Higher scores thus reflected stronger attentional engagement with alcohol or cannabis cues. Disengagement bias for the alcohol variant was calculated by subtracting the mean reaction time of the neutral target in neutral distractors trials from the mean reaction time of the alcohol distractors trials . For the cannabis variant, disengagement bias was calculated by subtracting the mean reaction time of the neutral target in neutral distractors trials from the cannabis mean reaction time of the cannabis distractors trials . Higher scores reflected stronger difficulty to disengage from alcohol or cannabis cues.Substance use, craving, and depression, anxiety and stress. The frequency of alcohol/cannabis use, the number of standard units of alcohol consumed (for AUD only), craving, and depression, anxiety and stress levels were measured using the relevant parts of the Measurements in Addiction of Triage and Evaluation Questionnaire . The frOther measurements. At baseline, sociodemographic information was collected, including gender, age, level of education, relationship, and work. In addition, participants\u2019 clinical history of addiction, as well as their family history of addiction (first and second grade relatives) was assessed. At the end of the baseline assessment, all participants filled in a short questionnaire about their use of technical devices like computers and mobile phones. Participants who were assigned to the active or placebo training were asked about their expectations concerning the intervention on a 5-point Likert scale after they completed a practice session. 
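The engagement and disengagement scores defined above reduce to differences between condition-mean reaction times; a minimal sketch, assuming the mean RTs (in ms) per trial type have already been computed, with illustrative names.

def engagement_bias(rt_substance_target_ms, rt_neutral_baseline_ms):
    """Neutral-target-in-neutral-distractors RT minus substance-target RT.
    Higher values = stronger attentional engagement with substance cues."""
    return rt_neutral_baseline_ms - rt_substance_target_ms

def disengagement_bias(rt_substance_distractors_ms, rt_neutral_baseline_ms):
    """Substance-distractors RT minus neutral-baseline RT.
    Higher values = stronger difficulty disengaging from substance cues."""
    return rt_substance_distractors_ms - rt_neutral_baseline_ms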
In addition, before and after each (active or placebo) training session participants were asked to indicate their level of subjective craving on a visual analogue scale (VAS), varying from 0 (\u201cno craving\u201d) to 100 (\u201cextreme craving\u201d). Direct effects of the training on craving could therefore be established. At 6 FU and 12 FU, participants were asked whether they started to use alcohol/cannabis again , or whether they started to use more alcohol/cannabis than they intended to . If participants answered the question with yes, they were asked to indicate when they have had experienced a relapse . Finally, after the first week of training and at post-test, participants assigned to the (active or placebo) training filled in an evaluation form asking them about their training experiences as indicated on a VAS varying from 0 to 100 (very much).d = 0.5, based on a t-test for independent groups, with a power of 0.8 and an alpha of 0.05, allowing for a dropout rate of 40%. However, due to political and organizational factors the recruitment of participants into the study was delayed. Despite the addition of a fourth treatment center and a series of intense efforts recruitment fell short of this target, and we were able to include a sample of 169 participants. With this sample the power to detect a medium effect size of Cohen\u2019s d = 0.5 at an alpha of 0.05, when allowing a drop-out of 40%, was 0.7.Based on power analysis, the current study aim was to include 213 participants, which affords the capacity to find a difference between groups of a medium effect size of Cohen\u2019s Eligible participants were automatically randomized by the computerized online registration and monitoring tool, called LOTUS. Randomization was stratified for gender, age group , type of addiction, and institution, meaning that participants were automatically assigned to the condition to which the fewest participants of their gender, age group, and type of addiction were already assigned accounting for the institution in which the participants were treated. Randomized intervention assignment was concealed from both patients and therapists. Furthermore, since all assessments took place online, and thus in the absence of the therapists and researchers, the outcome data were blinded for both therapists and researchers. One researcher was aware of the allocation concealment to enable the support of participants in case any technical or personal problems occurred (for more detail see ).n = 142), meaning that participants who provided informed consent but stopped being involved before completing baseline assessment (n = 27) were considered as drop outs. To analyze the data of the included participants over all four measurements, missing data were handled with multiple imputation. The percentage of missing values for the primary outcome variable frequency of use was 29.6% at post-test, 61.3% at 6 FU, and 52.8% at 12 FU. For craving, the percentage of missing values was 51.4% at post-test , 61.3% at 6 FU, and 52.8% at 12 FU. A multiple imputation model was constructed using the R package mice in whicn 3.6.0; ). In SPSn Buuren .r < .5), adequate (.5 \u2264 r < .8), or good (r \u2265 .8) based on commonly reported thresholds [Based on the available data, internal consistency of the OOOT measures at baseline, post-test, 6 FU, and 12 FU was evaluated by using the split-half method to calculate Spearman-Brown coefficients between the first half and the second half of the task. 
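The split-half computation just described, correlating the first and second half of the task and applying the Spearman-Brown correction, can be sketched as follows; per-participant bias scores for each half are assumed to be available, and the names are illustrative.

import numpy as np

def spearman_brown(first_half_scores, second_half_scores):
    """Split-half reliability with the Spearman-Brown prophecy correction."""
    r = np.corrcoef(first_half_scores, second_half_scores)[0, 1]
    return 2 * r / (1 + r)

# thresholds used in the text: < .5 weak, .5-.8 adequate, >= .8 good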
A second method was used to account for a possible learning effect throughout the task. Therefore, Spearman-Brown coefficient was also calculated by distributing the trials alternately to one of two subsets, whereas the first trial of one particular trial type was randomly allocated to either of the subsets. Internal consistency was tested for engagement and disengagement bias, as well as for the trial types. The estimates for the internal consistency were characterized as weak (resholds .SD\u2019s below the mean percentage correct answers were removed (baseline n = 2). As a next step, incorrect responses were excluded from the analyses . Reaction times below 200 ms were considered anticipation errors and were removed from the analyses . Trials scoring 3 SD\u2019s below or above a participant\u2019s average response time of that trial type were removed. This resulted in deleting another 0.5% of trials from the baseline, 0.5% from the post-test, 0.3% from the 6 FU, and 0.4% from the 12 FU. See The approach of data reduction followed established convention adopted in previous research (see for example ). The saSD = 12.69; range 0\u201345) training sessions. Participants of the placebo subgroup completed 9.79 sessions on average. On average, participants of the ABM group unlocked 8.60 of the 12 levels , whereas participants of the placebo subgroup unlocked 10.83 of the 12 levels . Based on all available data of the training sessions, on average, level of subjective craving in the ABM group before the training was 21.92 (SD = 26.47), and 19.70 (SD = 25.25) after. In the placebo subgroup, level of subjective craving was 25.03 (SD = 28.80) before the training, and 23.97 (SD = 26.15) after completing the placebo condition.Based on the original data, sociodemographic information of participants in the ABM group and in the control group are presented in target trials, distractors trials, and neutral target in neutral distractors trials), as well as the mean frequency of substance use, craving, and secondary physical and psychological symptoms are presented per group for baseline, post-test, 6 FU, and 12 FU. No differences were found between the placebo subgroup and the TAU-only subgroup on the primary outcome variables frequency of use and craving . This is consistent with the idea that the placebo training would have no effect on relevant symptoms of substance use disorders (see SD = 2.02) for the ABM group, and 2.99 (SD = 1.91) for the control group . The percentage of participants who reported no relapse was 38.4% in the ABM group, and 34.8% in the control group = 0.36, p = .550).In Tables n = 36 after first week of training, n = 34 at post-test), and the placebo subgroup , after the first week of training, the mean motivation to train regularly was 55.94 in the ABM group, and 57.86 in the placebo subgroup. On average, participants\u2019 judgment about whether or not the training would be helpful with regard to their treatment outcome was 38.89 for the ABM group, and 31.50 for the placebo subgroup. At post-test, the extent of motivation to train on a regular basis throughout the treatment was 51.76 in the ABM group, and 45.20 in the placebo subgroup. The judgment about whether or not the training had a positive influence on their treatment outcome was on average 34.56 in the ABM group, and 31.80 in the placebo subgroup, which was comparable with the answers after the first week. Further, participants in the ABM group gave a mean pleasantness rating following the ABM intervention of 45.41 . 
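Stepping back to the data-reduction rules described earlier in this record (exclusion of incorrect responses, anticipations below 200 ms, and trials more than 3 SD from a participant's mean per trial type, after the participant-level accuracy screen), one possible trial-level implementation is sketched below; the data-frame column names are assumptions.

import pandas as pd

def clean_rt(trials: pd.DataFrame) -> pd.DataFrame:
    """trials columns (assumed): participant, trial_type, correct (bool), rt_ms."""
    out = trials[trials["correct"]]                     # drop incorrect responses
    out = out[out["rt_ms"] >= 200]                      # drop anticipations (< 200 ms)
    grp = out.groupby(["participant", "trial_type"])["rt_ms"]
    mean, sd = grp.transform("mean"), grp.transform("std")
    return out[(out["rt_ms"] - mean).abs() <= 3 * sd]   # drop +/- 3 SD outliers per trial type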
For participants of the placebo subgroup this was 46.53 . On average, the fact that the intervention was completed from home was rated as positive . Participant ratings concerning the frequency with which their therapist had asked about the training during TAU, varied from every session (20.5% for the ABM group and 20.0% for the placebo subgroup) to never (29.4% for the ABM group and 31.3% for the placebo subgroup).Based on the available data from the ABM group = 1.66, p = .187, \u03b72p = 0.02), and disengagement bias = 1.98, p = .123, \u03b72p = 0.03). Most important for the context of the current study, there was no interaction of time and condition for engagement bias = 0.96, p = .397, \u03b72p = 0.02), nor for disengagement bias = 0.47, p = .689, \u03b72p = 0.01). This indicates that the change over time in AB was not different between the ABM group and the control group. Effects of generalization to untrained stimuli could not be assessed as only a very small number of participants completed the last training session before the end of TAU.The assumption of sphericity as indicated by Mauchly\u2019s tests was violated for most effects of all four RM-ANOVAs. Therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity. The RM-ANOVA testing whether the ABM intervention was successful in manipulating AB, revealed no significant main effect of time for engagement bias = 28.75, p < .001, \u03b72p = 0.27. Repeated contrasts and means revealed that overall there was a significant decrease of the frequency of substance use from baseline to post-test F = 92.01, p < .001, \u03b72p = 0.46 , but no significant change from post-test to 6 FU, F = 2.12, p = .151, \u03b72p = 0.04, and from 6 FU to 12 FU, F = 0.74, p = .391, \u03b72p = 0.01. Further, there was no significant interaction effect between time and condition, F = 0.46, p = .685, \u03b72p = 0.01, indicating that over time the frequency of substance use showed a similar pattern for the ABM group and the control group = 22.49, p = .001, \u03b72p = 0.33. As indicated by the repeated contrasts and means, overall craving decreased significantly from baseline to post-test, F = 17.25, p = .001, \u03b72p = 0.21 , and showed a significant increase from post-test to 6 FU, F = 36.41, p < .001, \u03b72p = 0.46 . The change from 6 FU to 12 FU was non-significant, F = 0.87, p = .353, \u03b72p = 0.01. The interaction effect between time and condition was non-significant, F = 0.75, p = .510, \u03b72p = 0.02, indicating that the development of craving over time was similar for participants in both groups = 79.35, p < .001, \u03b72p = 0.70. Contrasts and means revealed that secondary physical and psychological complaints decreased from baseline to post-test, F = 11.55, p = .001, \u03b72p = 0.15, and significantly increased from post-test to 6 FU, F = 86.46, p < .001, \u03b72p = 0.77. There was no significant change from 6 FU to 12 FU, F = 1.22, p = .278, \u03b72p = 0.03. There was no significant interaction between time and condition, F = 0.57, p = .594, \u03b72p = 0.01. This indicated that changes of symptoms of depression, anxiety, and stress over time did not differ between groups.For the secondary outcome measure, secondary physical and psychological complaints, there was a significant main effect of time, F = 0.60, p = .440; F = 0.63, p = .427, for engagement bias and disengagement bias, respectively), indicating that both groups showed the same pattern over time. 
To further analyze possible effects of the ABM intervention on the primary outcome variables , we conducted several post-hoc RM-ANOVAs. First, we tested the effects of ABM intervention on substance use and craving when only including patients who completed a substantial number of (active or placebo) training sessions, namely at least six = 0.62, p = .576; F = 0.43, p = .711, respectively for frequency of substance use and craving). In line, there were no significant differences concerning baseline frequency of substance use and craving between participants who completed a maximum of one session compared with participants who completed at least six sessions . Second, when adding the type of used substance to the model as a between-subjects factor, in order to investigate possible differences of the effect of ABM intervention between AUD and CUD, we found no significant three-way interaction between time, condition and type of substance = 0.46, p = .681; F = 0.65, p = .570, respectively for frequency of substance use and craving). Third, we investigated whether there was a difference between groups over time when separately including the subgroups of the control condition into the model. However, there was no significant interaction of time by condition for frequency of substance use = 0.49, p = .789), and craving = 0.34, p = .899). Fourth, we excluded participants from the analysis who reported no days of substance use in the past 30 days at baseline. Patients who already stopped consuming alcohol/cannabis before the start of their therapy can logically not further decrease their use. This could have biased the results, especially because there were double as many non-using participants in the ABM group (n = 10) compared with the control group (n = 5). The results showed that also when excluding these participants from the analysis, there was no significant interaction of time by condition for the frequency of used substance = 0.29, p = .810), or craving = 0.88, p = .444). Finally, we tested the effects of ABM intervention on the number of standard units of alcohol (only including AUD patients). There was no significant interaction between time and condition = 0.44, p = .649), again indicating a similar pattern between both groups.To test whether the ABM training had direct effects on AB, two additional RM-ANOVAs were conducted in which possible direct changes from baseline to post-test for engagement and disengagement bias were investigated. However, the interaction term of time by condition remained non-significant ( example ). HoweveF = 0.44, p = .508). Also when adding the type of substance to the model, the three-way interaction remained non-significant = 0.40, p = .528). In line, for craving, there was no significant difference between the conditions when comparing baseline with 12 FU = 0.63, p = .430), and this result remained non-significant when type of substance was added to the model = 0.57, p = .451). Finally, when testing long term effects of ABM on the number of standard units of alcohol when only including participants with a diagnoses of AUD, no significant interaction between time and condition was found = 0.19, p = .663).After conducting the study, it turned out that the questions with regard to relapse lacked sufficient sensitivity, especially because of the diversity in treatment goal . In addition, we had a high percentage of drop-out. Therefore, there was no solid base to conduct a Cox-regression analysis as was planned and described in the study protocol . 
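The time-by-condition analyses reported above follow a standard mixed repeated-measures ANOVA design; one way to reproduce that structure in Python, rather than the authors' SPSS procedure, is sketched below with the pingouin package, where the long-format column names are assumptions.

import pingouin as pg

def time_by_condition(df):
    """df columns (assumed): id, condition (ABM/control),
    time (baseline/post/6FU/12FU), outcome (e.g. frequency of use or craving).
    Returns the mixed ANOVA table; a sphericity correction for the within
    factor can be applied when Mauchly's test is violated."""
    return pg.mixed_anova(data=df, dv="outcome", within="time",
                          between="condition", subject="id")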
HoweverEven after initial successful treatment, patients diagnosed with substance use disorders often relapse . Given tThe current study found no support for the idea that the addition of ABM intervention to CBT-based TAU, would serve to improve treatment outcome for AUD or CUD outpatients in terms of reduced substance use and craving. One explanation for the non-significant findings is that clinical changes may depend on the successful modification of AB , and theAnother explanation for the non-significant findings on treatment outcome is that the BITT may not have targeted the attentional process(es) implicated in addiction symptomology. It seems reasonable to assume that the BITT approach to ABM may primarily targeting difficulty disengaging attention from substance-relevant cues. Participants need to consistently disengage their attention from substance-relevant cues in order to track the single substance-irrelevant cue. Support for this idea can be found in a previous study showing that completing a food version of the BITT resulted in a significant reduction of disengagement bias but not engagement bias . It mighThe current study found no evidence that the BITT component served to reduce relapse. That is, patients who received the ABM intervention did not report lower rates of relapse than patients who did not receive the intervention, and there were no differences in the duration until relapse. Similarly, no long-term differences on substance use and craving were found between groups. These findings contrast with previous studies that have found effects of cognitive bias modification interventions on relapse \u201336, 55. After completing a practice session, the expectations with regard to both trainings, active and placebo, were comparable. In both groups around 60% of the patients expected the training to have a positive influence on their attention, to help them with moderation or abstinence, and to have a positive influence on their overall treatment outcome. Clearly, these results corroborate the credibility of the placebo condition. Besides that, it suggests that a slight majority of patients believed computerized interventions to be helpful in their treatment. However, it also points to the fact that around 40% did not believe in the added value of such an intervention. In line, after treatment, the extent to which the training was experienced as positive and the motivation to train on a regular basis appeared to be rather mixed. This might also explain why the compliance with the training in terms of completed training sessions varied across patients. However, lack of compliance and motivation is no problem of computerized interventions in particular, but a more common problem in therapy, possibly especially in substance use disorders . InThe current findings emphasize the importance of improving treatment outcome in substance use disorders. Within one year after treatment, around 60\u201365% of patients in the current study reported to have experienced a relapse. This finding is in line with previous literature, suggesting that up to 50% of patients treated for substance use disorders relapse within the first year after treatment . 
FurtherThe current study has several strengths such as the inclusion of a clinical sample of treatment-seeking individuals, the addition of ABM intervention as an add-on to TAU, the accessibility by providing the ABM intervention in the home-environment, the involvement of the therapists to motivate the patients, and the long-term follow-up period until 12 months after end of treatment. There are also some limitations that may bear on the interpretation of the results. First, as described above, the results with regard to relapse might be influenced by the diversity in treatment goals , and the related subjectivity with which participants might have answered the relapse-relevant questions. Thus, we cannot rule out the possibility that the findings might have looked different if relapse had been operationally defined with respect to patients\u2019 own treatment goal. Future studies could either reduce variation due to differing patient goals by including only patients who intend to stay abstinent, or else could take account of such variation by collecting data on participants\u2019 treatment goals, and computing treatment success and relapse with respect to these goals.Second, participants diagnosed with AUD and CUD were combined for the analyses which could have influenced the results if the ABM intervention was effective for one disorder but not for the other. However, given that AB has been found to be associated with treatment outcome in both substance use disorders , 60, we Third, the current study aimed to deliver ABM as an integrated add-on to TAU by actively involving the therapist, but it may not have been the case that therapists integrated the ABM intervention sufficiently. The findings suggest that therapists greatly varied in their tendency to integrate the ABM intervention in the TAU sessions. Although we only had very limited data from the therapists themselves, the data indicated that therapists\u2019 judgment on whether or not they found the ABM intervention to be effective varied a lot. This might explain the variation in compliance with the research protocol. Although the current study invested in the compliance of therapists in several ways , it might be important for future studies to further improve the motivation of therapists to adhere to the protocol, for example by organizing short booster meetings in which the rationale and relevance of the study is repeated.Finally, there was great variability in the number of completed ABM training sessions in the current sample, varying from zero to 45 sessions. On average, patients completed around 12 sessions of ABM, which translates into approximately 120 minutes of training. With regard to the duration in minutes, this training intensity is comparable with a previous study that found effects of a similar intervention on relapse . HoweverBased on this RCT in the Dutch population, the current findings provided no support for the hypothesis that a multi-session ABM intervention as an add-on to CBT-based TAU can contribute to treatment outcome in outpatients diagnosed with alcohol or cannabis use disorders. This raises questions regarding the relevance of AB as a target for treatment in substance use disorders. 
It can, however, also be that ABM only has an effect when combined with an abstinence treatment goal , or thaS1 Appendix(DOCX)Click here for additional data file.S2 Appendix(DOCX)Click here for additional data file.S3 Appendix(DOCX)Click here for additional data file.S1 Protocol(PDF)Click here for additional data file.S1 Checklist(DOC)Click here for additional data file."} +{"text": "Prenatal smoking exposure has been associated with childhood attention-deficit/hyperactivity disorder (ADHD). However, the mechanism underlying this relationship remains unclear. We assessed whether DNA methylation differences may mediate the association between prenatal smoking exposure and ADHD symptoms at the age of 6\u00a0years.GFI1) region, as determined by bisulfite next-generation sequencing of cord blood samples, mediated 48.4% of the total effect of the association between maternal active smoking during pregnancy and ADHD symptoms. DNA methylation patterns of other genes regions did not exert a statistically significant mediation effect.We selected 1150 mother\u2013infant pairs from the Hokkaido Study on the Environment and Children\u2019s Health. Mothers were categorized into three groups according to plasma cotinine levels at the third trimester: non-smokers (\u2264\u20090.21\u00a0ng/mL), passive smokers (0.21\u201311.48\u00a0ng/mL), and active smokers (\u2265\u200911.49\u00a0ng/mL). The children\u2019s ADHD symptoms were determined by the ADHD-Rating Scale at the age of 6\u00a0years. Maternal active smoking during pregnancy was significantly associated with an increased risk of ADHD symptoms compared to non-smoking after adjusting for covariates. DNA methylation of the growth factor-independent 1 transcriptional repressor (GFI1 mediated the association between maternal active smoking during pregnancy and ADHD symptoms at the age of 6\u00a0years.Our findings demonstrated that DNA methylation of The online version contains supplementary material available at 10.1186/s13148-021-01063-z. The concept of Developmental Origins of Health and Disease (DOHaD) suggests that exposure to environmental stressors during prenatal and early postnatal periods increases susceptibility to adverse health outcomes later in life. It is particularly well known that prenatal smoking exposure can cause adverse health effects not only at birth, but also in the long term after birth. For instance, prenatal smoking exposure increases the risk of several adverse birth outcomes, including infant death , pretermAHRR), cytochrome P450 family 1 subfamily A member 1 (CYP1A1), growth factor-independent 1 transcriptional repressor (GFI1), and myosin IG (MYO1G), that are sensitive to maternal smoking exposure , MYO1G, and GFI1) whose DNA methylation was significantly altered by maternal smoking during pregnancy; DNA methylation rates were measured using bisulfite next-generation sequencing. Next, we evaluated whether DNA methylation differences in these genes mediated the association between prenatal smoking exposure and ADHD symptoms.This study aimed to explore the association among prenatal smoking exposure, ADHD symptoms at preschool age, and cord blood DNA methylation using a prospective birth cohort study, the Hokkaido Study on Environment and Children\u2019s Health. We have previously identified the CpG sites in which cord blood DNA methylation is altered by maternal smoking during pregnancy using the Illumina Infinium HumanMethylation450 BeadChips . 
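The exposure classification used in this cohort reduces to binning plasma cotinine by two cut-offs. A minimal sketch follows; the column name and example values are invented for illustration.

```python
# Cotinine-based exposure grouping described above: non-smoker <= 0.21 ng/mL,
# passive smoker 0.21-11.48 ng/mL, active smoker >= 11.49 ng/mL.
import numpy as np
import pandas as pd

mothers = pd.DataFrame({"cotinine_ng_ml": [0.05, 0.21, 3.2, 15.0, 80.4]})  # invented values

mothers["exposure_group"] = pd.cut(
    mothers["cotinine_ng_ml"],
    bins=[-np.inf, 0.21, 11.48, np.inf],
    labels=["non-smoker", "passive smoker", "active smoker"],
    right=True,  # 0.21 falls in the non-smoker bin, matching the <= 0.21 definition
)
print(mothers)
```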
In thisMaternal cotinine levels are represented in Table Table AHRR, CYP1A1, ESR1, GFI1, and MYO1G) associated with prenatal smoke exposure. The analyzed regions of five genes are shown in Additional file AHRR and GFI1 . Meanwhile, a significant positive association was observed between maternal smoking exposure during pregnancy and DNA methylation rates on a region of MYO1G and CYP1A1 . However, the methylation rates of CYP1A1 were significantly different before CpG5 (effect size: approximately 3%) and after CpG6 (effect size: approximately 0.3%) . No significant association was observed between maternal smoking exposure during pregnancy and DNA methylation rates both on a region and individual CpG site of ESR1.Based on our previous EWAS results, we focused on the DNA methylation of five genes and GFI1 regions was associated with significantly lower odds of ADHD symptoms. No significant association was observed between DNA methylation rates on AHRR, CYP1A1, and MYO1G and childhood ADHD symptoms. Based on the results of the association between individual CpG methylation and ADHD, we defined CpG clusters in CYP1A1 , ESR1 cluster 1 , and MYO1G clusters 1 and 3 was associated with significantly lower odds of ADHD symptoms. Meanwhile, a one-unit increase (%) in the DNA methylation of MYO1G cluster 2 was associated with significantly higher odds of ADHD symptoms.Next, we examined the association between DNA methylation and childhood ADHD symptoms using logistic regression analysis. Figure\u00a0GFI1 region mediated 48.4% of the total effect of the association between maternal active smoking during pregnancy and ADHD symptoms , we reported that maternal active smoking during pregnancy was significantly associated with an increased risk of total difficulties and hyperactivity/inattention in 5-year-old children . This stAHRR_ cg05575921 and CYP1A1_ cg05549655 by bisulfite sequence using NGS [AHRR in this study correspond to exactly the same sequence as that used in our previous report. The five CpGs of CYP1A1 analyzed in our previous study correspond to CpG1 to CpG5 in this study and MYO1G (cg12803068 and cg04180046) identified from the 450\u00a0K were confirmed to have similar methylation patterns in the NGS analysis. However, the CpGs of ESR1 (cg04063345 and cg15626350) and CYP1A1 (cg23727072 and cg00213123) did not match with the previous results. The possible reasons for this discrepancy are as follows: (1) Differences in methylation analysis methods (450\u00a0K array vs. bisulfite sequencing); (2) Differences in sample sizes ; (3) Different groupings of maternal smoking exposure ; (4) Possible false positives in the 450\u00a0K analysis.In our previous study, we used a different cohort \"Hokkaido Study Sapporo Cohort\" to identify methylation site changes in cord blood due to maternal smoking exposure during pregnancy by a 450\u00a0K array and verified sing NGS . The fivMYT1L and VIPR2 [SLC7A8, MARK2, and SON [ESR1 cluster 1 at birth was associated with significantly lower odds of ADHD symptoms at age 6 regardless of smoking exposure. ESR1, one of two ESR subtypes, is a nuclear receptor that is activated by the sex hormone estrogen. Single nucleotide polymorphisms within the ESR1 gene are associated with neuropsychiatric disorders including ADHD [Esr expression and alteration of the epigenetic status [ESR1 may be a novel potential biomarker of ADHD symptoms. Increased DNA methylation of CYP1A1 cluster 2 at birth was also associated with significantly lower odds of ADHD symptoms at age 6. 
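For readers who want to see the shape of the adjusted model behind these odds estimates, the following is a hedged sketch of a logistic regression of ADHD symptoms on methylation (per one-unit, i.e. 1%, increase) with the covariates listed in the Methods. statsmodels is used for illustration only, and all variable names are placeholders.

```python
# Illustrative logistic regression: odds of ADHD symptoms per 1% increase in
# cord-blood DNA methylation, adjusted for the covariates named in the Methods.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # hypothetical analysis file

model = smf.logit(
    "adhd ~ gfi1_methylation + alcohol + income + prepreg_bmi"
    " + parity + gestational_age + infant_sex",
    data=df,
).fit()

# Odds ratios and 95% confidence intervals on the exponentiated scale.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```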
Since DNA methylation rates are as low as less than 1% . GFI1 is a transcriptional repressor that plays an important role in diverse developmental contexts such as hematopoiesis and oncogenesis. GFI1 is involved in the regulation of the T helper type 1 (Th1)-type immune response as well as the promotion of T helper type 2 (Th2) cell development [GFI1 is involved in the molecular mechanisms of ADHD.This study showed that hypomethylation at hildhood . Hypometh weight . These felopment . ADHD haelopment . There ielopment . HoweverCYP1A1, ESR1, and MYO1G methylation and ADHD symptoms is clearly clustered within the region between active smoking during pregnancy and ADHD symptoms. In contrast, hypomethylation of MYO1G clusters 1 and 3 only partially explained the negative association between active smoking during pregnancy and ADHD symptoms. These results reveal that hypermethylation of MYO1G in active smokers during pregnancy is involved in both increased and weakened risk of ADHD symptoms. However, the amplicons in this study were arbitrarily designed to include methylation sites associated with maternal smoking exposure during pregnancy from a previous EWAS study. Therefore, the research is limited in that the interpretation of methylation in regions and clusters in this study may change depending on the methylation state around the amplicon.DNA methylation analysis by bisulfite sequence using a next-generation sequencer can clarify the methylation state of CpG around the probe of the methylation array. In addition to the methylation analysis of individual CpGs, we analyzed the average methylation of all CpGs contained in the amplicon (defined as a region). It is also important to consider smoking- or ADHD-associated differentially methylated regions (DMR). The association among MYO1G regions analyzed in this study are located in a CpG island and exon 21 near the 3\u2032 gene region. Hypermethylation in this gene region might correlate with active transcription of MYO1G [GFI1 analyzed in this study is located in intron 3 and exon 4 is associated with DNA methylation differences in children diagnosed with ADHD [Adverse environmental conditions during the fetal period to early childhood are linked to an increased risk of non-communicable diseases in adulthood. This concept is called DOHaD. Epigenetic modifications, such as DNA methylation, histone modification, and non-coding RNA, are thought to be molecular mechanisms of DOHaD. A limited number of studies have reported on outcomes other than birth weight. Parmar et al. reportedith ADHD . To the This study has several limitations. First, ADHD suspected symptoms were not diagnosed but screened by ADHD-RS questionnaire; hence, there is a possibility of misclassification. However, previous studies have confirmed the reliability and validity of ADHD-RS for screening children in Japan , 47. SecFinally, DNA methylation patterns differ between tissues and cell types. We do not know whether the methylation changes in cord blood DNA also occur in brain tissue DNA. However, correlation of DNA methylation between blood and brain, and association between blood DNA methylation and brain phenotypes have been reported , 52. ThuGFI1 mediated the effect of maternal active smoking on ADHD symptoms.Our findings, taken together, have demonstrated that maternal active smoking during pregnancy was associated with altered DNA methylation and ADHD symptoms in children of preschool age. 
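The mediation estimate referred to throughout (about 48% of the total effect explained by GFI1 methylation) rests on the product-of-coefficients logic spelled out in the Methods: an indirect effect a x b, a direct effect, and percent mediation = indirect / (indirect + direct) x 100. The sketch below illustrates only that logic, using ordinary least squares for both paths and a simple percentile bootstrap; the study's binary outcome, covariate adjustment, and bias-corrected accelerated intervals would require a more elaborate model. Variable names and the 0/1 coding of smoking are assumptions.

```python
# Schematic product-of-coefficients mediation with a percentile bootstrap.
# Simplified on purpose: OLS for both paths, no covariates, smoking coded 0/1.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # hypothetical: smoking (0/1), methylation (%), adhd_score

def effects(data):
    a = smf.ols("methylation ~ smoking", data=data).fit().params["smoking"]   # X -> M
    fit_y = smf.ols("adhd_score ~ smoking + methylation", data=data).fit()
    b = fit_y.params["methylation"]                                           # M -> Y given X
    direct = fit_y.params["smoking"]                                          # X -> Y given M
    return a * b, direct

indirect, direct = effects(df)
print("percent mediation:", 100 * indirect / (indirect + direct))

# Percentile bootstrap of the indirect effect (the study used 5000 BCa resamples).
boot = [effects(df.sample(frac=1, replace=True))[0] for _ in range(5000)]
print("95% CI for a*b:", np.percentile(boot, [2.5, 97.5]))
```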
DNA methylation of This prospective birth cohort study was a part of the Hokkaido Study on the Environment and Children\u2019s Health. The study design and procedures have been described previously in detail , 54. AmoPlasma cotinine levels at the third trimester of pregnancy were measured using a highly sensitive enzyme-linked immunosorbent assay kit . The detailed protocol has been described previously . Based oth percentile score for 5\u20137 yeas children , to extract suspect-ADHD children.The ADHD-RS IV (home version) was desiAHRR, 17 for CYP1A1, 11 for ESR1, 21 for GFI, and 20 for MYO1G. The methylation rates of individual CpGs were used as a percentage. The methylation rates of each gene region were obtained by calculating the average of all CpGs in the amplicon. CpG clusters were defined as follows: in CYP1A1, CpG1 to CpG5 formed cluster 1 and CpG15 to CpG17 cluster 2 . DNA was subjected to bisulfite conversion by using an EZ DNA Methylation-Lightning Kit . Bisulfite-treated DNA was then amplified using FastStart Taq DNA Polymerase . Polymerase chain reaction (PCR) primers for bisulfite PCR were designed using MethPrimer , parity, gestational age, and infant sex were selected as possible confounders , 57. TheFinally, a mediation analysis was used to estimate the degree of the association between prenatal smoking exposure and ADHD symptoms, which is explained by DNA methylation changes. The direct effect was the effect of the exposure (X) on the outcome (Y) at a fixed level of the mediator (M). The indirect effect of X on Y through M can be quantified as the product of two coefficients: a (the effect of X on M) and b (the effect of M on Y) pathways . Percent mediation was calculated as the indirect effect divided by the total (indirect\u2009+\u2009direct) effect\u2009\u00d7\u2009100%. Mediation analysis included maternal alcohol consumption during pregnancy, family income, pre-pregnancy BMI, parity, gestational age, and infant sex as covariates. The bias-corrected and accelerated CIs of the indirect effect (ab) were calculated by bootstrapping with 5000 iterations . MediatiAdditional file 1: Fig. S1. Base sequences analyzed by targeted bisulfite next-generation sequencing. Fig. S2. Comparison of methylated CpG sites among non-smokers, passive smokers, and active smokers. Fig. S3. Selection of the study population. Table S1. Association between maternal smoking during pregnancy and umbilical cord blood DNA methylation. Table S2. Association of umbilical cord blood DNA methylation with ADHD symptoms at 6\u00a0years of age. Table S3. Mediation analysis for the effect of DNA methylation in the association between active smoking during pregnancy and ADHD symptoms at 6\u00a0years of age. Table S4. List of bisulfite PCR primers."} +{"text": "The crystal structures, morphology, surface species, and electrochemical performances of both cathode active materials are studied by scanning electron microscopy (SEM), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and charge-discharge tests. The XRD patterns and XPS results identify the presence of sulfate groups on the surface of NCMS. While pristine NCM exhibits a very dense surface in SEM images, NCMS has a relatively porous surface, which could be attributed to the sulfate impurities that hinder the growth of primary particles. The charge-discharge tests show that discharge capacities of NCMS at C-rates, which range from 0.1 to 5 C, are slightly decreased compared to pristine NCM. 
In dQ/dV plots, pristine NCM and NCMS have the same redox overvoltage regardless of discharge C-rates. The omnipresent sulfate due to the sulfuric acid leaching of spent LIBs has a minimal effect on resynthesized NCM cathode active materials as long as their precursors are adequately washed.In order to examine the effect of excessive sulfate in the leachate of spent Li-ion batteries (LIBs), LiNi Li-ion batteries (LIBs) have been extensively used in various portable electronics and electric vehicles in combination with high energy and power density ,2. Howev4, CoSO4, and MnSO4, the investigation on the effect of sulfate in pristine and resynthesized cathode active materials would be essential.In the recycling process of batteries, the pretreatment of spent batteries, including discharge, dismantling, classification, and separation, usually precedes the hydrometallurgy-based recycling process . The hydxCoyMnzO2 (NCM) enhanced the electrochemical performance of the cathode active materials, in which sulfur was incorporated into NCM by calcination [2SO4 phase, which would provide fast diffusion channels for lithium ions on the surface. Recently, Li et al. examined the concentration gradient S-doped NCM and argued that a proper amount of sulfate stabilizes the crystal structure with good cycle performance [Previously, Ban et al. found that sulfur in LiNicination . Interesformance . However1/3Co1/3Mn1/3O2 (pristine NCM) and sulfate-containing LiNi1/3Co1/3Mn1/3O2 (NCMS) using co-precipitation in order to investigate the effect of excessive sulfate in the leachate for the NCM resynthesis. Since an actual LIB leachate could contain various types of unidentified impurities, we prepare a simulated LIB leachate with 4 M of extra lithium sulfate as a sulfur source. Our previous report, on the effect of residual lithium in resynthesized NCM, revealed that lithium originating from lithium sulfate in a simulated leachate hardly affects the LIB performance as long as the NCM precursors are washed appropriately [In this work, we synthesize LiNipriately . Thus, w2 and NCM as cathode materials.1/3Co1/3Mn1/3(OH)2 and sulfate-containing Ni1/3Co1/3Mn1/3(OH)2 were synthesized using the co-precipitation method. The composition of actual leachate of spent LIBs from a LIB recycling company was considered to simulate the amount of sulfur in the actual leachate. A total of 2 M of ammonia solution as a chelating agent and 1.5 M of metal solution ) were pumped into a continuous stirred reactor. 2 M of NaOH solution was automatically injected into the reactor by a pH-controlled pump to maintain a pH of 11.52. The reactor was kept at a temperature of 40 \u00b0C and a stirring speed of 1000 rpm for about 75 h. The resultant precursors were filtered and washed with distilled water several times and dried in an oven at 80 \u00b0C. The final cathode active materials (pristine NCM and NCMS) were prepared by calcinating a mixture of the hydroxide precursors and Li2CO3 as a lithium source at 1000 \u00b0C for 8 h under air atmosphere. In order to identify the crystal structure of the NCM and NCMS materials, an XRD technique was carried out with a step size of 0.026\u00b0 in a 2\u03b8 range from 10\u00b0 to 80\u00b0. The morphological characterization of the materials was performed using a field emission SEM . XPS was used to examine the presence of sulfur in the structure of NCMS.Ni6 in a mixture of ethyl methyl carbonate and ethylene carbonate (2:1 volume ratio) as an electrolyte. 
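The dQ/dV comparison mentioned at the start of this record can be reproduced from any constant-current charge-discharge record by numerically differentiating capacity with respect to voltage. A minimal sketch follows; the file name, column layout, and smoothing window are assumptions and do not reflect the authors' actual procedure.

```python
# Differential capacity (dQ/dV) from a single discharge record, e.g. to compare
# the Ni4+/Ni2+ reduction peak positions of pristine NCM and NCMS.
import numpy as np

# Hypothetical two-column file: voltage (V vs. Li/Li+), capacity (mAh g^-1);
# assumes monotonically varying voltage samples.
voltage, capacity = np.loadtxt("discharge_0p1C.txt", unpack=True)

order = np.argsort(voltage)
v, q = voltage[order], capacity[order]

# Light moving-average smoothing of Q(V) before differentiating, to limit noise amplification.
q_smooth = np.convolve(q, np.ones(9) / 9.0, mode="same")
dq_dv = np.gradient(q_smooth, v)

# Locate the dominant peak inside the 3.0-4.3 V test window.
window = (v >= 3.0) & (v <= 4.3)
peak_v = v[window][np.argmax(np.abs(dq_dv[window]))]
print(f"main dQ/dV peak at {peak_v:.3f} V")
```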
Charge-discharge tests were performed from 3.0 to 4.3 V (vs. Li/Li+) at room temperature.Electrochemical properties were investigated using CR2032-type coin cells, which were fabricated in a moisture-controlled glove box under argon atmosphere. Cathodes were prepared by mixing the cathode active materials, polyvinylidene fluoride (KF 1100) binder, and carbon black (Super-P) in a mass ratio of 95:3:2 respectively. Cells were integrated with the prepared cathodes, lithium metal as an anode, polyethene film as a separator, and 1 M LiPF3 titration for Cl, and barium sulfate precipitation for S. F can originate from residual electrolytes such as LiPF6 [as LiPF6 , and Cl as LiPF6 . Since sas LiPF6 . This suThe SEM images in 2 with no impurity phase [2 type with the space group R-3m [2SO4 impurity phase around 22\u00b0 peak is observed in NCMS. This result indicates that some sulfate impurities are still present in the precursors after filtering and washing, and these sulfate impurities appear in the cathode active materials after calcination and could lead to poor performance in a charge-discharge test. These weak peaks related to Li2SO4 phase are also proven by the presence of sulfur in the following XPS analysis (3/2 of the SO42\u2212 groups [ty phase . There ioup R-3m ,28. Howeanalysis . Figure \u2212 groups . This re\u22121 for pristine NCM and NCMS, respectively. During the initial charge to 4.3 V vs. Li/Li+, a gentle slope below 3.9 V occurs with the removal of lithium from NCM, which accompanies the oxidation of Ni2+/Ni4+ and Co3+/Co4+ [4+/Ni2+ at 3.74 and 3.76 V, respectively. The reduction peaks gradually shift to lower potentials as the C-rate during the discharge increases. However, the difference in the position of reduction peaks is very small between pristine NCM and NCMS. The capacity retention of pristine NCM and NCMS cycled at 1 C showed superior cyclability over 98% after 50 cycles in both samples (see The electrochemical performance of the charge-discharge profiles for pristine NCM and NCMS is presented in o3+/Co4+ . Althougples see . TherefoIn this work, pristine NCM and NCMS are synthesized by co-precipitation and the effect of excessive sulfate is investigated on their structure, morphology, and electrochemical properties. The presence of sulfate in NCMS is examined by XRD and XPS results. SEM results show that NCMS has a porous surface and more voids than pristine NCM, which may cause the structural instability and deteriorate the electrochemical performance. In charge-discharge tests at different C-rates, the discharge capacities of NCMS at each C-rate is slightly decreased compared to pristine NCM. In summary, the unavoidable presence of sulfate, which originates from the sulfuric acid leaching of spent LIBs, has a minimal effect on resynthesized NCM cathode active materials as long as their precursors are adequately washed."} +{"text": "Significance: Hyperspectral imaging (HSI) has emerged as a promising optical technique. Besides optical properties of a sample, other sample physical properties also affect the recorded images. They are significantly affected by the sample curvature and sample surface to camera distance. 
A correction method to reduce the artifacts is necessary to reliably extract sample properties.Aim: Our aim is to correct hyperspectral images using the three-dimensional (3D) surface data and assess how the correction affects the extracted sample properties.Approach: We propose the combination of HSI and 3D profilometry to correct the images using the Lambert cosine law. The feasibility of the correction method is presented first on hemispherical tissue phantoms and next on human hands before, during, and after the vascular occlusion test (VOT).Results: Seven different phantoms with known optical properties were created and imaged with a hyperspectral system. The correction method worked up to 60\u00a0deg inclination angle, whereas for uncorrected images the maximum angles were 20\u00a0deg. Imaging hands before, during, and after VOT shows good agreement between the expected and extracted skin physiological parameters.Conclusions: The correction method was successfully applied on the images of tissue phantoms of known optical properties and geometry and VOT. The proposed method could be applied to any reflectance optical imaging technique and should be used whenever the sample parameters need to be extracted from a curved surface sample. In the biomedical optics field, such algorithms were realized for the spatial frequency-domain imaging (SFDI) method. SFDI method projects specific illumination patterns on imaged objects, allowing the extraction of optical properties.In this work, the effect of the surface curvature and distance correction algorithm on sample properties extracted from the images obtained by a pushbroom HSI system combined with a 3D laser profilometer is studied. The common hyperspectral image analysis pipeline22.1The HSI system was a custom-build pushbroom system. The core of the system is an imaging spectrograph ImSpector V10e with a slit size of Imaging was performed in reflectance mode. A custom-made LED illumination system was developed. The illumination is composed of four LED panels distributed symmetrically across the scanning line, as shown in The recorded raw spectra are converted to the normalized reflectance spectrum The hyperspectral system is combined with a 3D profilometry (3DP) module. It is composed of a laser projector and a monochromatic camera. The laser line is parallel to the hyperspectral acquisition line and has a fan angle of 65\u00a0deg and a line width of 0.3\u00a0mm. The offset between the laser line and hyperspectral system is 1\u00a0mm to reduce laser affecting the hyperspectral image. The laser line was recorded by a monochromatic camera with a resolution of 3DP system was calibrated using a custom-build reference object of known geometry. The estimated resolution of the system is 0.1\u00a0mm in the ,Due to the parallax between the 3DP camera and the laser projector, the shadowing of the laser line is present. Therefore, some surface regions are not illuminated by the laser and cannot be reconstructed. To provide the complete sample surface, the missing values are interpolated. The Laplace interpolation technique was used for this purpose in our study.2.2The image correction method is described in detail in Ref.\u00a0The described corrections were performed for the case of vertical illumination , which is polymerized from two liquid parts, namely part A and part B. 
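The core of the correction is geometric: per pixel, the inclination of the surface (from the 3D profile) and the change in surface-to-camera distance are used to rescale the measured reflectance following the Lambert cosine law, and pixels above roughly 60 deg inclination are discarded as unreliable. The sketch below illustrates that idea for vertical illumination and detection with a simple inverse-square distance term; it is not the authors' full algorithm (given in the cited reference), and all array and parameter names are placeholders.

```python
# Minimal Lambert cosine-law correction of a reflectance cube with a co-registered
# height map. Illustrative only; assumes vertical illumination and detection.
import numpy as np

def correct_reflectance(refl, height_mm, pixel_size_mm, ref_distance_mm, max_angle_deg=60.0):
    """refl: (rows, cols, bands) normalized reflectance; height_mm: (rows, cols) surface height."""
    # Surface gradients give the inclination of the local normal from the vertical.
    gy, gx = np.gradient(height_mm, pixel_size_mm)
    cos_theta = 1.0 / np.sqrt(1.0 + gx**2 + gy**2)

    # Lambertian irradiance scales with cos(theta); the distance change relative to the
    # calibration plane is modelled here with a simple inverse-square factor (assumption).
    distance = ref_distance_mm - height_mm
    gain = (1.0 / cos_theta) * (distance / ref_distance_mm) ** 2
    corrected = refl * gain[..., None]

    # Beyond ~60 deg the correction was reported to become unreliable; mask those pixels.
    corrected[np.degrees(np.arccos(cos_theta)) > max_angle_deg] = np.nan
    return corrected

# Synthetic usage example: a flat 0.5-reflectance cube over a shallow dome-shaped surface.
refl = np.full((64, 64, 100), 0.5)
height = np.fromfunction(lambda r, c: 2.0 - 0.001 * ((r - 32) ** 2 + (c - 32) ** 2), (64, 64))
out = correct_reflectance(refl, height, pixel_size_mm=0.2, ref_distance_mm=300.0)
```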
Therefore, it can reap the benefits of liquid phantoms such as easy customization of each optical layer by adding small quantities of absorber or scatterer.The tissue phantoms were prepared from SiliGlass according to a slightly modified recipe of Sekar et\u00a0al.Here, only a brief description of the phantom preparation is provided. An interested reader can find more information in Refs.\u00a0Seven different tissue phantoms with absorber concentrations The refractive index of the phantoms was reported in Ref.\u00a0The absorption coefficients of the phantoms depend on the pigment concentration The scattering coefficient and anisotropy factor ospheres . The vol2.4The correction method was also tested on biological tissues, namely on human hand images. Five healthy volunteers aged 23 to 25 were imaged with the hyperspectral system. The procedure was performed according to the Declaration of Helsinki. The experimental protocol was approved by the Slovenian National Medical Ethics Committee. Informed consent was obtained from the healthy subjects included in this study.Their hands were imaged before, during, and after the vascular occlusion test (VOT) to observe hemodynamic changes. A cuff was placed on their right upper arms. The hands were placed in the HSI system with the fingers spread as much as possible to reduce the light inter-reflection between the adjacent fingers. First, the baseline image of the fingers was recorded. The cuff was then inflated to over 200\u00a0mmHg to induce total blood flow occlusion. After 150\u00a0s, next image was recorded . Finally, the cuff was released, and the third image was acquired . The fingers were imaged in the region between the MCP and DIP joints. The small imaging area was chosen to prevent long imaging times.2.5,To extract physiological parameters from reflectance spectra, the inverse problem of light propagation in turbid media has to be solved. Models are divided into two groups of iterative and noniterative models, where iterative are most commonly used. Such methods use equations in which optical properties (absorption and scattering coefficients) are directly connected with the parameters thta are being evaluated .,In our research, the tissue parameters were extracted using the IAD method.In this research, GPU-accelerated one-layer and two-layer IAD were used on tissue phantom and human hand hyperspectral images in the spectral range 430 to 700\u00a0nm. Incoming and outcoming light was divided into 20 conical fluxes to provide the necessary accuracy. For the nonlinear least-squares fitting, the Levenberg\u2013Marquardt algorithm was implemented on GPU with a maximum number of iterations of 200. Five hundred spectra were fitted at once with a 5-nm step. The corrected and uncorrected normalized images were first binned eight times in the spatial and six times in the spectral dimension to reduce the computational time.The one-layer model was used to simulate light propagation in the tissue phantoms. The fitted parameters were iliGlass , and \u03bcabfraction . The denfraction and the The scattering coefficient was calculated as fficient . The aniA two-layer skin model was used to extract physiological parameters from the recorded human skin spectra. 
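Structurally, the parameter extraction is an iterative inverse problem: a forward light-propagation model is evaluated repeatedly while the tissue or phantom parameters are adjusted until the modelled spectrum matches the measured one. The sketch below shows only that outer fitting loop with scipy; the toy forward model is a stand-in for the GPU-accelerated adding-doubling computation and has no physical validity, and a bounded trust-region method replaces Levenberg-Marquardt because scipy's "lm" variant does not accept parameter bounds.

```python
# Outline of the per-spectrum inverse fit. The forward model below is a placeholder
# for an adding-doubling computation; parameter names and bounds are invented.
import numpy as np
from scipy.optimize import least_squares

wavelengths = np.arange(430, 701, 5)  # nm, 5-nm step as in the study

def forward_reflectance(params, wl):
    """Toy stand-in for the real forward model (adding-doubling); illustration only."""
    absorber, scatter_amp, scatter_power = params
    mu_a = absorber * (wl / 500.0) ** -1.0
    mu_s = scatter_amp * (wl / 500.0) ** -scatter_power
    return np.exp(-mu_a) * mu_s / (mu_s + 1.0)

def fit_spectrum(measured, wl):
    residual = lambda p: forward_reflectance(p, wl) - measured
    return least_squares(residual, x0=[1.0, 1.0, 1.0],
                         bounds=([0.0, 0.0, 0.0], [10.0, 10.0, 4.0]),
                         method="trf", max_nfev=200)

measured = forward_reflectance([2.0, 1.5, 1.2], wavelengths)  # synthetic test spectrum
print(fit_spectrum(measured, wavelengths).x)
```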
The top skin layer was a thin epidermis layer with melanin as the main chromophore, and the bottom layer was the thick dermis with blood, bilirubin, and cytochromes as absorbers.Absorption coefficient of the epidermis is calculated using the customary relationsThe absorption coefficient for dermis is obtained by combining the blood, the cytochrome C oxidase, and the bilirubin absorption coefficients with the baseline absorption in a manner analogous to Eq.\u00a0(10): The reduced scattering coefficient of the epidermis and dermis is described as the customary ansatz suitable for the relatively narrow spectral range used in this study:The anisotropy factor was in vivo. The boundary values of cytochrome were taken from the publications of Bale et\u00a0al.Six of the skin parameters were the free parameters and were determined by fitting. These parameters including their lower and upper boundaries are presented in Since the extracted sample parameters can depend on the selection of the initial parameters for IAD due to the local minima, we first selected a characteristic ntervals . The ext33.1When imaging a sample with spectral imaging, the inclined and more remote regions of the sample have underrated irradiance causing spectral alterations. An example of these artifacts is presented in Adding\u2013doubling algorithm was used to extract the absorber d images . Tables\u00a0a circle .The extracted microspheres concentration distribution maps are presented in phantom .In general, the phantoms with low absorption are more affected by the artifacts than those with high absorption, whereas the scattering coefficient does not have such a significant effect on the extracted properties, at least in the range used in this study. For a more detailed view, Evidently, the correction significantly improves the flatness of the absorber and scattering concentrations. The correction fails to be effective close at the hemisphere boundary is presented in Adding\u2013doubling was used to extract tissue properties from the uncorrected and corrected images. An example of the parameter distribution maps for melanin, deoxyhemoglobin, and oxyhemoglobin is shown in The elevated concentration regions remain in the area of the joint folds, which is due to the interreflection artifactThe deoxyhemoglobin (deOxy) maps show expected trends in uncorrected and corrected image sets. The baseline deoxyhemoglobin is relatively low; it increases during the VOT because of the oxygen consumption and blocked flow of the oxygenated blood and drops almost to zero in the after phas,e because the fresh oxygenated blood reperfuses the affected limb. However, in the uncorrected maps, the central regions of the fingers show higher deoxygenated blood concentrations, whereas the concentrations monotonically decrease at the regions closer to the finger\u2019s boundaries. In the corrected maps, the distribution is much more uniform; the continuous decrease of the concentration by moving away from the central part is not present.The oxygenated hemoglobin maps show the opposite trends as the deoxygenated hemoglobin maps. Here, the concentration decreases during the test and significantly increases after the cuff removal. Similar to the deoxyhemoglobin maps, the uncorrected images are affected by the artifacts in the lateral regions, whereas in the corrected images, the finger areas are much more homogeneous. Due to the inter-reflection elevated concentration regions are presented at the finger boundaries and in the skin folds. 
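As a rough illustration of what such a two-layer parameterization typically looks like, the sketch below builds an epidermal absorption term from a melanin fraction, a dermal absorption term as a blood-volume and oxygen-saturation weighted mixture of oxy- and deoxyhemoglobin, and a power-law reduced-scattering ansatz of the form mu_s'(lambda) = a (lambda/500 nm)^(-b). The spectral shapes and numbers are crude placeholders, not the tabulated literature spectra or the exact equations of the study (which also include bilirubin and cytochromes).

```python
# Hedged sketch of a two-layer skin absorption/scattering parameterization.
# All spectral shapes and coefficients below are illustrative placeholders.
import numpy as np

wl = np.arange(430, 701, 5)  # nm

# Crude placeholder chromophore shapes (NOT tabulated literature values):
eps_hbo2 = np.interp(wl, [430, 540, 576, 700], [200.0, 55.0, 60.0, 3.0])
eps_hb = np.interp(wl, [430, 555, 700], [150.0, 50.0, 8.0])
mua_mel = 6.6e11 * wl ** -3.33   # a commonly used melanin approximation, illustrative here

def mu_a_epidermis(f_mel):
    return f_mel * mua_mel

def mu_a_dermis(f_blood, s_o2, mua_baseline=0.02):
    mua_blood = s_o2 * eps_hbo2 + (1.0 - s_o2) * eps_hb
    return f_blood * mua_blood + (1.0 - f_blood) * mua_baseline

def mu_s_reduced(a, b):
    return a * (wl / 500.0) ** (-b)   # power-law ansatz over a narrow spectral range

print(mu_a_dermis(f_blood=0.02, s_o2=0.7)[:3], mu_s_reduced(a=40.0, b=1.3)[:3])
```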
Overall, the calculated values agree well with the values found in Refs.\u00a0To illustrate the curvature correction efficacy, the mean values and standard deviations of the differences between the corrected and uncorrected image parameters were calculated from the flat see ."} +{"text": "Background: Pancreatic ductal adenocarcinoma (PDAC) is one of the most malignant tumors with a poor prognosis. Recently, necroptosis has been reported to participate in the progression of multiple tumors. However, few studies have revealed the relationship between necroptosis and PDAC, and the role of necroptosis in PDAC has not yet been clarified.Methods: The mRNA expression data and corresponding clinical information of PDAC patients were downloaded from the TCGA and GEO databases. The necroptosis-related genes (NRGs) were obtained from the CUSABIO website. Consensus clustering was performed to divide PDAC patients into two clusters. Univariate and LASSO Cox regression analyses were applied to screen the NRGs related to prognosis to construct the prognostic model. The predictive value of the prognostic model was evaluated by Kaplan-Meier survival analysis and ROC curve. Univariate and multivariate Cox regression analyses were used to evaluate whether the risk score could be used as an independent predictor of PDAC prognosis. Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG) and single-sample gene set enrichment analysis (ssGSEA) were used for functional enrichment analysis. Finally, using qRT-PCR examined NRGs mRNA expression in vitro.Results: Based on the TCGA database, a total of 22 differential expressed NRGs were identified, among which eight NRGs that may be related to prognosis were screened by univariate Cox regression analysis. And CAPN2, CHMP4C, PLA2G4C and STAT4 were further selected to construct the prognostic model. Kaplan-Meier survival analysis and ROC curve showed that there was a significant correlation between the risk model and prognosis. Univariate and multivariate Cox regression analyses showed that the risk score of the prognostic model could be used as an independent predictor. The model efficacy was further demonstrated in the GEO cohort. Functional analysis revealed that there were significant differences in immune status between high and low-risk groups. Finally, the qRT-PCR results revealed a similar dysregulation of NRGs in PDAC cell lines.Conclusion: This study successfully constructed and verified a prognostic model based on NRGs, which has a good predictive value for the prognosis of PDAC patients. Pancreatic ductal adenocarcinoma (PDAC) is the most malignant tumor of the digestive tract . AlthougCell death is one of the most concerned fields in tumor research. Necroptosis is one of the regulated forms of cell death unmediated by caspases . The morHence, the purpose of this study is to analyze the differential expression of necroptosis-related genes (NRGs) in normal tissues and PDAC tissues, and to construct a risk prognosis model based on NRGs by univariate and least absolute shrinkage and selection operator (LASSO) Cox regression analyses, which could provide accurate prognosis prediction for PDAC patients.https://portal.gdc.cancer.gov/repository) to be used as training cohort to establish the prognostic model. GSE57495 dataset, containing the mRNA expression profile and related clinical features of 63 PDAC patients, was downloaded from NCBI Gene Expression Omnibus (GEO) (https://www.ncbi.nlm.nih.gov/geo/) to be used as test cohort to validate the model. 
In addition, 147 NRGs obtained from the CUSABIO website (http://www.cusabio.cn/pathway/Necroptosis.html) were listed in The mRNA expression data of 182 samples and corresponding clinical information were downloaded from The Cancer Genome Atlas (TCGA) website network of the differentially expressed NRGs was constructed by the STRING database (http://string-db.org/).The \u201climma\u201d R package was used to analyze the differential expression of NRGs between the normal tissues and PDAC tissues in the TCGA cohort. Based on the survival time and status of PDAC patients in the TCGA cohort, the \u201cConsensusClusterPlus\u201d R package was used to carry out the consensus clustering analysis. Due to the randomness of k-means clustering analysis, the clustering index \u201ck\u201d was increased from 2 to 10 to determine the clustering index with the least interference and the largest difference between clusters. The survival curve was conducted using the \u201csurvival\u201d and \u201csurvminer\u201d R packages. Then, NRGs were further differentially analyzed based on different clusters using the\u201climma\u201d R package (|Log2(FC)|>0.585 and FDR<0.05). And the heatmap was plotted to show the relationship between NRGs and clinical features.p-value was set to 0.25 and 8 genes were selected for subsequent study. Then, LASSO Cox regression analysis, which could reduce the risk of overfitting, was used to establish the prognostic model. The optimum \u03bb was chosen by the minimum criteria of the penalized maximum likelihood estimator in a 10-fold cross-validation. In this model, the risk score of each PDAC patient in TCGA and GEO cohorts was calculated by multiplying the coefficients of the gene and the expression of genes. The principal component analysis (PCA) was performed by using the \u201cRtsne\u201d R package. Subsequently, these patients were divided into two groups (high-risk and low-risk groups) according to the median risk score. The predictive accuracy of this model was evaluated by survival analysis and time-dependent ROC curve, which were performed by the \u201cSurvminer\u201d R package and the \u201ctimeROC\u201d R package, respectively. Finally, we conducted the univariate and multivariate Cox regression analyses to assess the prognostic value of the risk model and whether the model could be served as an independent prognostic predicting factor.The univariate Cox regression analysis was utilized to evaluate the prognostic value of NRGs. To avoid omission, the threshold Based on the risk subgroups, the differentially expressed genes (DEGs) with log2(FC) > 1 or < \u22121 and FDR <0.05 in the TCGA cohort were obtained. Then, the \u201cclusterProfiler\u201d R package was subjected to the Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses. To assess the status of 16 kinds of immune cells and 13 kinds of immune-linked functions, we performed the single-sample gene set enrichment analysis (ssGSEA) to calculate the immune score using the \u201cgsva\u201d R packet.2.The human normal pancreatic ductal epithelial cell line (HPDE6-c7) and human pancreatic cancer cell line (PANC-1) were cultured in DMEM (Gibco) supplemented with 10% fetal bovine serum and 1% penicillin and streptomycin (Gibco). The human pancreatic cancer cell line (AsPC-1) was maintained in RPMI-1640 medium (Gibco) containing 10% FBS and 1% penicillin and streptomycin. All cell lines were purchased from the Type Culture Collection of the Chinese Academy of Science . 
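The risk score defined here is a linear combination of the four gene expression values weighted by their LASSO Cox coefficients, followed by a median split and a survival comparison. A Python sketch of that downstream step is shown below, with lifelines standing in for the R survival packages named in the text; the coefficients are placeholders rather than the fitted values, and the input files are hypothetical.

```python
# Risk score, median split and Kaplan-Meier / log-rank comparison (illustrative only).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

expr = pd.read_csv("expression.csv", index_col=0)  # hypothetical: patients x genes
clin = pd.read_csv("clinical.csv", index_col=0)    # hypothetical: OS_time, OS_event

# Placeholder coefficients standing in for the fitted LASSO Cox coefficients.
coefs = {"CAPN2": 0.10, "CHMP4C": 0.20, "PLA2G4C": 0.30, "STAT4": -0.20}

df = clin.copy()
df["risk"] = sum(c * expr[g] for g, c in coefs.items())
df["group"] = (df["risk"] >= df["risk"].median()).map({True: "high", False: "low"})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["OS_time"], sub["OS_event"], label=f"{name} risk")
    print(f"{name}-risk median OS:", kmf.median_survival_time_)

high, low = df[df["group"] == "high"], df[df["group"] == "low"]
print("log-rank p =", logrank_test(high["OS_time"], low["OS_time"],
                                   high["OS_event"], low["OS_event"]).p_value)
```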
All cell lines were cultured in a humidified incubator at 37 \u00b0C with 5% COvia the PrimeScriptTM RT reagent kit (Takara). Then, SYBR Premix Ex Taq (Takara) was used to perform qRT\u2013PCR assays. GAPDH was used as the internal control, and the relative expression levels were calculated by the 2\u2212\u0394\u0394Ct method. Primers sequences are shown in Total RNA of cells was extracted using RNAiso Plus Reagent (Takara) and was reverse-transcribed to cDNA https://www.proteinatlas.org/). The relationship between NRGs expression and overall survival was obtained from the GEPIA database.The protein levels of NRGs were examined by immunohistochemical results obtained from HPA (Human Protein Atlas) database , Grade and riskScore were remarkably related to OS (p = 0.038) and riskScore were the independent predicting factors and low-risk groups (n = 45) containing 63 PDAC patients was used as a validation cohort. According to the risk score, 63 patients were divided into the high-risk ((n = 45) . Patient(n = 45) . And theTo explore the function of genes in different risk subgroups, we identified 259 DEGs in the TCGA cohort and 257 + T cells, T helper (Th) cells, T follicular helper (Tfh) cell, Th1 cells, mast cells, neutrophils, natural killer (NK) cells, plasmacytoid DC (pDC), tumor-infiltrating lymphocyte (TIL) compared to the low-risk group , CD8We first detected the mRNA levels of four NRGs in cell lines. As shown in Subsequently, we further explored the protein levels of the four NRGs between normal tissues and PDAC tissues and the relationship between NRGs expression and overall survival of PDAC patients by the HPA and GEPIA databases, respectively. The results showed that the PDAC tissues had higher protein levels of CAPN2 and CHMP4C than normal tissues . FurtherNecroptosis is an important programmed cell death, which is characterized by the activation of MLKL/pMLKL by the RIPK1/RIPK3-mediated phosphorylation signal pathway . Recent In the current study, we first identified 22 differentially expressed NRGs in normal tissues and PDAC tissues. Consensus clustering analysis is an effective method to identify different subtypes of tumors and survival patterns . Based oin vivo and in vitro.Subsequently, we constructed a risk prognostic model composed of four NRGs by using univariate and LASSO Cox regression analyses. In the prognostic model, the survival time of patients in the high-risk group was significantly less than that in the low-risk group. Further univariate and multivariate Cox regression analyses showed that the risk score of the prognostic model could be used as an independent prognostic factor for patients with PDAC. Genes in this model have been reported in a few studies of tumors. Calpain 2 (CAPN2) is one of the most important members of the calpain family. It has been reported that CAPN2 is involved in the development of a variety of tumors, including liver cancer, gastric cancer and so on . In pancThe molecular mechanism of necroptosis is a new hot spot in tumor research, but the relationship between tumor immunity and necroptosis is rarely reported. Previous studies have also shown that the presence of T cell infiltration predicts a better prognosis . MeanwhiAlthough our study constructed an effective prognostic model for predicting the prognosis of PDAC patients, there are still some limitations. First of all, we need more clinical data and prospective studies are needed to verify the clinical effectiveness of this model. 
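The relative expression values referred to above follow the standard 2^-ΔΔCt calculation with GAPDH as the internal control; a short worked example with invented Ct values is given below.

```python
# Worked toy example of the 2^-ddCt method (GAPDH as internal control).
def relative_expression(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize sample to GAPDH
    d_ct_control = ct_target_control - ct_ref_control   # normalize control to GAPDH
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Invented Ct values, e.g. CAPN2 in a PDAC line (PANC-1) vs. the normal line (HPDE6-c7):
fold = relative_expression(ct_target_sample=22.1, ct_ref_sample=16.0,
                           ct_target_control=24.3, ct_ref_control=16.2)
print(f"fold change vs. control: {fold:.2f}")  # 4.00 here; > 1 indicates up-regulation
```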
In addition, further cellular and animal experiments are needed to reveal the function and specific molecular mechanism of NRGs on the progression of PDAC. Finally, we have only done a preliminary theoretical study on the relationship between NRGs and immune status, and more basic experiments are also needed.In conclusion, we comprehensively and systematically analyzed the expression of NRGs in PDAC and constructed a novel risk prognostic based on four NRGs. This model could be used as an independent prognostic factor for PDAC and well predict the prognosis of PDAC patients."} +{"text": "Campylobacter (C.) species are the most common bacterial cause of foodborne diarrhea in humans. Despite colonization, most animals do not show clinical signs, making recognition of affected flocks and disruption of the infection chain before slaughter challenging. Turkeys are often cocolonized with C. jejuni and C. coli. To understand the pathogen-host-interaction in the context of two different Campylobacter species, we compared the colonization patterns and quantities in mono- and co-colonized female commercial turkeys. In three repeated experiments we investigated the impact on gut morphology, functional integrity, and microbiota composition as parameters of gut health at seven, 14, and 28 days post-inoculation.Campylobacter colonization, clinical signs or pathological lesions were not observed. C. coli persistently colonized the distal intestinal tract and at a higher load compared to C. jejuni. Both strains were isolated from livers and spleens, occurring more frequently in C. jejuni- and co-inoculated turkeys. Especially in C. jejuni-positive animals, translocation was accompanied by local heterophil infiltration, villus blunting, and shallower crypts. Increased permeability and lower electrogenic ion transport of the cecal mucosa were also observed. A lower relative abundance of Clostridia UCG-014, Lachnospiraceae, and Lactobacillaceae was noted in all inoculated groups compared to controls.Despite successful C. jejuni affects gut health and may interfere with productivity in turkeys. Despite a higher cecal load, the impact of C. coli on investigated parameters was less pronounced. Interestingly, gut morphology and functional integrity were also less affected in co-inoculated animals while the C. jejuni load decreased over time, suggesting C. coli may outcompete C. jejuni. Since a microbiota shift was observed in all inoculated groups, future Campylobacter intervention strategies may involve stabilization of the gut microbiota, making it more resilient to Campylobacter colonization in the first place.In sum, The online version contains supplementary material available at 10.1186/s13099-022-00508-x. Campylobacter (C.) pose a substantial public health risk on a global scale per production week (PW) in experiment three, PW 1\u20137: n\u2009=\u200918, PW 8: n\u2009=\u200912, PW 9\u201310: n\u2009=\u20096. At six weeks of age, turkey poults were mock-, C. coli, C. jejuni, or coinoculated.Additional file 2. Cecal heterophil counts of Campylobacter-free and Campylobacter-inoculated female turkeys. Values represent average cecal heterophil counts at seven, 14, and 28 days after mock-, C. coli, C. jejuni, or coinoculation from three repeat experiments, n\u2009=\u20096. Heterophils were counted in ten randomly selected epithelial regions per specimen at 400x magnification.Additional file 3. Ussing chamber buffer composition. 
Chemical composition of the mucosal and serosal buffer solutions used for Ussing chamber experiments to investigate the functional intestinal integrity and transport properties of turkey ceca. The buffers had an osmolality of 296 and 297 mOsm/kg, respectively, and a pH between 7.45 and 7.47 when flushed with carbogen gas. They were warmed to 37\u00a0\u00b0C."} +{"text": "One thing is certain though: back pain in children and adolescents greatly impacts the everyday lives of these young individuals and their medical care . Like ad\u201cWhy is there a need for special consideration of back pain in childhood and adolescence?\u201dIn the current absence of evidence, it is important to ensure that this population of young individuals are not exposed to under- (or over-) diagnosis or to over-treatment. This balance is truly difficult, as many questions regarding the origin and treatment of back pain remain unanswered. To guide clinical practice, Frosch et al. have pro\u201cWhat is the difference between back pain in younger children and in adolescents?\u201dWe see an increasing prevalence of \u201cnon-specific\u201d back pain in adolescence ,5. Even Developing a reliable differential diagnosis of specific and non-specific back pain, including validating red flags;Clarifying the indications and procedure of imaging and multidisciplinary diagnostics;Optimizing non-drug treatments for non-specific back pain;Improving the prevention of back pain;Avoiding chronicity of back pain;Improving the self-management of non-specific back pain in children and adolescents.Despite numerous scientific efforts to emphasize the problem of back pain in children and adolescents and to improve diagnostics and therapy, many research questions remain unanswered . Future In this Special Issue, we address these challenges of back pain in children and adolescents and give an overview of the current state of knowledge and future needs."} +{"text": "Extracellular vesicles (EVs) are nanovesicles that are naturally released from cells in a lipid bilayer-bound form. A subset population with a size of 200 nm, small EVs (sEVs), is enticing in many ways. Initially perceived as mere waste receptacles, sEVs have revealed other biological functions, such as cell-to-cell signal transduction and communication. Besides their notable biological functions, sEVs have profound advantages as future drug modalities: (i) excellent biocompatibility, (ii) high stability, and (iii) the potential to carry undruggable macromolecules as cargo. Indeed, many biopharmaceutical companies are utilizing sEVs, not only as diagnostic biomarkers but as therapeutic drugs. However, as all inchoate fields are challenging, there are limitations and hindrances in the clinical translation of sEV therapeutics. In this review, we summarize different types of sEV therapeutics, future improvements, and current strategies in large-scale production. Surpris+/CD9+ EVs. Lastly, nomenclatures based on cellular origins are often used, such as apoptotic bodies.Extracellular vesicles (EVs) are particles surrounded by lipid bilayers, naturally released from cells . AlthougsEVs are currently receiving a great amount of attention as a promising therapeutic tool for diseases with high unmet medical needs due to their (i) excellent biocompatibility begetting low immunogenicity, (ii) high stability for the in vivo transport of substances, and (iii) the potential for loading a myriad of macromolecules as cargo . 
FurtherAlthough sEVs have shown versatility and high potential as a disease treatment at the pre-clinical level, limitations of the practical application in clinical practice remain . TherefoThe current sEV therapeutics include utilizing na\u00efve/engineered sEVs and suppressing the secretion/uptake of sEVs, as shown in Emerging evidence points to sEVs having roles in human diseases. sEVs disseminate diseases by transferring pathological cargo from diseased donor cells to normal cells. For instance, cancer cell-derived sEVs are associated with tumor progression and metastasis ,8,9 TherIntracellular sEV production is predominantly based on two pathways: endosomal sorting complexes required for transport machinery (ESCRT)-dependent and -independent pathways. In the former case, multi-vesicular bodies (MVBs) are formed by ESCRT, and intraluminal vesicles (ILVs) contained therein are released in the form of EVs outside the cell. In the ESCRT-independent pathway, MVBs-ILVs are formed by neutral sphingomyelinase 2 (nSmase2) through sphingomyelinase hydrolysis and ceramide formation . AccordiAnother strategy to inhibit the propagation of sEVs is to inhibit uptake in recipient cells. The primary mechanism of sEV uptake is associated with the endocytosis pathway, which is divided into clathrin-dependent and -independent mechanisms . FurtherNa\u00efve sEVs, or native sEVs, reflect diverse characteristics, such as membrane proteins or contents, of their parental cells. By leveraging this property, various studies reported the potential of na\u00efve sEVs\u2019 therapeutic efficacy. Depending on the origin of cells, sEVs can be utilized in appropriate diseases. In this section, we explore various therapeutic effects of na\u00efve sEVs derived from stem cells, immune cells, and other cells, such as red blood cells or platelets.Stem cells are undifferentiated but can be multilineage differentiated cells with self-renewal capability. Accordingly, stem cells have been frequently and widely used in clinics, especially in regenerative medicine, regarding their pleiotropic differentiating potential and immunomodulatory properties . GeneralNonetheless, stem cell therapies show several notable qualities regarding large-scale production, quality control, and off-the-shelf medicines . MoreoveAccording to previous reports, tumor cell-derived EVs (TEVs) have shown conflicting characteristics in terms of both promoting the aggressiveness of tumors and initiating anticancer immunity cycles. Zitvogel et al. reported the promising vaccine effects of TEVs as sources of tumor antigens for the first time . This stsEVs derived from tumor antigen-exposed dendritic cells have been considered to overcOther than DCs, some studies have evaluated the anti-tumor effects of sEVs from immune cells. For instance, NK cell line-derived sEVs (NK-EVs) have been reported to eradicate specific cancer cells through cytotoxic molecules such as tumor necrosis factor-\u03b1, perforin, granzyme, and the Fas ligand ,64,65. APlatelets, or thrombocytes, are anuclear cells produced in the bone marrow. They were once regarded as mere fragments of megakaryocytes. However, accumulated research has pointed out the important biological roles of platelets, including angiogenesis, hemostasis, and thrombosis . PlateleRed blood cell (RBC)-derived EVs (RBCEVs) have gained attention due to their safety and biocompatibility in clinical applications. 
For instance, RBCEVs have a lower risk of horizontal gene transfer, because they lack nuclear DNA and mitochondria. RBCEVs participate in important biological processes, such as nitric oxide homeostasis, redox balance, immunomodulation, and coagulation . HithertsEVs hold tremendous advantages in drug modalities, and many studies have leveraged engineered sEVs to deliver potent macromolecules, including proteins and genes, as shown in + T cell-derived sEVs with anti-VEGF antibodies can suppress angiogenesis and inflammation on choroidal neovascularization [The intracellular delivery of therapeutic cargo, including genes and proteins, is often unable to surpass the cell membrane. The usage of lipid nanoparticles (LNPs) received attention after the COVID-19 pandemic due to LNPs containing mRNA passing the lipid bilayer of the cell membrane . Howeverrization . The apprization . EVs witrization . The cherization . The incrization . The loarization , as wellrization and in brization , and it rization . The trarization , acute lrization , or prevrization and the rization . However12 to 1.22 \u00d7 106 sEV particles per injection, and sEV source cells are diverse, such as adipose-MSCs, bone-marrow-MSCs, and synovial fluid-MSCs.Since sEVs can recapitulate the comprehensive therapeutic potential of the donor cell, clinical trials utilizing MSC-derived sEVs are being extensively researched to evaluate the safety of treatment and efficacy on various diseases. The therapeutic dosage widely ranges from 1.2 \u00d7 10Recently, sEV treatments for COVID-19 and its complications are also being tested. Since most complications involve respiratory diseases, such as pneumonia and acute respiratory distress syndrome, not only intravenous injection but also the inhalation of sEVs is actively being tested . Hitherto, EVs across diverse cellular origins are undergoing clinical trials for a wide range of diseases\u2014from non-life-threatening hair loss to complex neoplasm diseases such as cancer, as shown in An sEV biodistribution (BD) evaluation should be conducted to create a new, effective class of medicines and to begin the first in-human studies. Some questions still need to be answered regarding the BD of sEVs. For example, how do the different administration routes affect sEVs\u2019 BD? What is the best labeling method for sEVs? To date, the most popular administration route for sEVs in preclinical studies is intravenous injection, occupying more than half of the total . Much evDespite many studies attempting to accurately assess the BD of sEVs using diverse labeling methods, the gold standard for labeling EVs has yet to be determined. The most widely utilized labeling approach is lipophilic fluorescent dye, including PKH, and diakylcarbocyanine dyes , which can be readily integrated into the membranes of sEVs. However, these lipophilic dyes may aggregate sEVs and cause background/pseudo signals. Moreover, these dyes eventually affect the composition of the surfaces of sEVs, leading to effects on the biological activity of sEVs . SeveralCompared to conventional therapeutics such as protein drugs, antibodies, and cell or gene therapeutics, there is no state of the art for the large-scale production of EV products, and there is also no concrete regulation or guidance from a regulatory board such as the FDA or EMA. Nonetheless, current EV therapeutics are actively being developed; thus, the establishment of manufacturing protocols and regulatory guidelines is needed. 
Since EVs are retrieved from na\u00efve or engineered cells, the overall production process is perceived as similar to that of cell or gene therapy products. The master cell banking process is required to collect sEVs to maintain the cell homology, such as surface molecule expression, intracellular content, and engineered traits. In terms of engineered EVs, ex vivo manipulation is the leading strategy. For instance, the cells can be transduced with a retrovirus, adenovirus, or lentivirus to express the desired EVs stably . The traThe MCBs or banked source cells are used for USP. The banked cells are thawed and expanded through the culture process. Cells are seeded on an appropriate culture dish, depending on the cell type. Alternative culture systems, such as 3D fiber cell systems, cell stacks, or seed trains, are used to increase the production of sEVs per cell, since these platforms improve cell viability and enable high-density cell growth. After adequate expansion, cells are transferred to a 3D bioreactor, an automated system optimized for cell growth. Once the cells are fully expanded, the culture medium is exchanged for serum-free media to inhibit soluble protein contamination and secure the purity of the final product. During downstream process development, serum-free medium is collected, and purified EVs are isolated. There are no set standard procedures, but most DSP resembles that of cell or gene therapeutic development. DSP focuses on collecting high-purity EVs with desired yields, appropriate for commercialization. Since the yield of the product and purity are trade-offs, the key in DSP development is to optimize both variables for a high-quality product with a practical outcome. The initial step is to remove the potential contaminants and collect small-sized EVs by depth filtration. Serial filtration, or depth filtration, sorts desired EVs from non-desired EVs through size cut-offs. This step is similar to the serial centrifugation process during the lab-scale production of EVs. Once the filtered EVs are retrieved, the product undergoes tangential flow filtration (TFF) to minimize the damage of EVs, maximize the purity, and concentrate the media into higher concentrations. The concentrated product undergoes a chromatography step to enhance the purity. Different types of chromatography columns are used, such as size exclusion chromatography or ion exchange chromatography, depending on the physical and chemical features of the EVs. Since most chromatography steps result in the dilution of the samples, the retrieved EVs often undergo TFF once more for concentration. The final drug products are packaged through fill-and-finish procedures. Though the gold standard of this step is also not yet established, many CDMOs and biopharmaceuticals lyophilize acquired EVs for higher stability and to facilitate storage and transport.Quality control (QC) tests are crucial for the clinical translation of sEV therapeutics. Protein- and small-chemical-based medicines should be verified as a homogenous population through several robust QC tests. However, the heterogeneous populations of sEVs cannot be converted into a homogenous population. Instead, the batch-to-batch consistency demonstrated by appropriate QC tests is the major priority for the GMP-grade manufacturing of sEV therapeutics. 
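As a rough illustration of the yield-versus-purity trade-off described for the DSP above, the sketch below runs a toy mass balance across the four purification steps. Every recovery and impurity carry-over number is invented for illustration and does not come from any actual sEV process.
```python
# Toy mass balance for a hypothetical sEV downstream process (DSP).
# All step recoveries and impurity carry-over fractions are invented
# for illustration only; real values depend on the specific process.

steps = [
    # (step name, EV recovery per step, fraction of impurities carried over)
    ("depth filtration", 0.85, 0.50),
    ("TFF concentration", 0.80, 0.30),
    ("chromatography", 0.70, 0.05),
    ("TFF polish", 0.90, 0.80),
]

ev_particles = 1e12      # assumed EV particles in the clarified harvest
impurity_mg = 500.0      # assumed soluble protein impurities (mg)

print(f"{'step':<20}{'EV particles':>14}{'impurity (mg)':>15}{'purity (p/mg)':>16}")
for name, ev_recovery, impurity_carryover in steps:
    ev_particles *= ev_recovery
    impurity_mg *= impurity_carryover
    purity = ev_particles / impurity_mg  # particles per mg residual protein
    print(f"{name:<20}{ev_particles:>14.2e}{impurity_mg:>15.1f}{purity:>16.2e}")

print(f"overall EV yield: {ev_particles / 1e12:.1%}")
```
Tracking particles per milligram of residual protein is only one possible purity proxy here; real release criteria would come from the QC panel discussed next.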
To prove the batch-to-batch consistency, we must establish a list of QC tests on final products (sEVs) with a sufficient scientific rationale to persuade the FDA or other regulatory authorities to approve human clinical trials. The International Society for Extracellular Vesicles (ISEV) suggested minimal requirements in the MISEV2018 guidelines for the quality control of sEVs [With the advancement of single-particle analysis technologies, quantifying the amount of therapeutic cargo loaded into a single particle is becoming feasible. Developed by NanoView Biosciences, the Exoview R200 automatically analyzes the EVs\u2019 number and size data through probed tetraspanin markers, including CD63, CD81, and CD9, by taking micro-biochip-based fluorescent images . FurthersEVs are naturally produced in our bodies and play vital roles in biological functions, with numerous advantages as a new class of medicines. Developing platform technologies and establishing therapeutic strategies that maximize the advantages of sEVs are considered the most significant paradigm shifts in creating new treatments. Although sEV therapeutics have not yet been approved and used in patients, numerous clinical trials based on sEVs have recently been attempted, and the numbers are constantly increasing. The large-scale manufacturing and QC of sEVs, which were previously inconceivable, have also made much progress in convincing regulatory authorities. There are still issues to be solved, but we expect that continuous technological development and research will establish innovative sEV-based treatments as promising therapeutic options to solve existing high unmet medical needs."} +{"text": "Case 1-Hip. A 29-year-old male was treated with pexidartinib prior to surgery, resulting in tumor reduction. A left total hip arthroplasty (THA) was then performed with a lack of recurrence in 12 months postoperative, and the patient currently on pexidartinib treatment. Case 2-Foot. A 35-year-old female, nearly a decade following a left foot mass resection, was treated with pexidartinib following disease recurrence. A decrease in soft tissue lesions at the midfoot and decreased marrow enhancement at the first metatarsal head were seen within 4\u20135 months of pexidartinib treatment; the patient is currently on pexidartinib (400\u2009mg/day) with improved symptom control. Case 3-Knee. A 55-year-old male patient received pexidartinib pre- and postoperatively. A reduction in swelling and the size of the popliteal cyst was significant and maintained, with the synovial disease growing when pexidartinib was discontinued. Surgery and adjuvant therapy eliminated the disease as of the last follow-up visit (11 months postoperative). These cases provide a unique perspective based on tumor location, type/timing of treatment strategy, and patient outcomes. Optimal treatment strategies for this debilitating disease may entail utilizing a combination approach (surgery+systemic treatment) to reduce surgical morbidity and the risk of postoperative disease recurrence.Tenosynovial giant cell tumor (TGCT) is a rare neoplasm of the joint synovium that has a wide clinical spectrum including pain and stiffness in the affected joint, joint swelling, periarticular erosions, and cartilage loss, which can severely impact quality of life. The mainstay treatment for TGCT has been surgery involving partial or total synovectomy using arthroscopic or open techniques. 
However, surgical resection alone is associated with high recurrence rates, particularly in diffuse-TGCT (D-TGCT) cases. The 3 cases presented here summarize a combination approach (surgery+pexidartinib [tyrosine kinase inhibitor]) in patients with previously unresectable or inoperable D-TGCT. Tenosynovial giant cell tumor (TGCT) is a rare, usually benign neoplasm derived from the synovium. TGCT affects joints, bursae, and tendon sheaths causing symptoms that include pain, inflammation, and joint stiffness \u20134. This Recently, systemic treatment with tyrosine kinase inhibitors (TKIs) or monoclonal antibodies targeting the colony-stimulating factor-1 receptor (CSF1R), i.e., imatinib, nilotinib, emactuzumab, cabiralizumab, and pexidartinib, have been utilized with encouraging results in cases not amenable to surgery , 16. TheA 29-year-old male with no past medical history developed new onset left-lateral hip pain while running in August 2019. After an unsuccessful trial of physical therapy, an MRI was obtained in December 2019 that showed a lobulated mass anteriorly and posteriorly around the left hip extending into the left pelvis anteromedial to the iliacus musculature and posteriorly into the ischiofemoral space Figures . A repreAfter image-guided biopsy, histology of the neoplasm was consistent with D-TGCT. Sections of the tumor from the pretreatment biopsy demonstrated a cellular lesion composed of mixed inflammatory cells including macrophages with hemosiderin pigment and abundant multinucleated giant cells, as well as collections of foamy histiocytes and lymphocytes . In JanuAt a follow-up visit in December 2020, MRI revealed a reduction in tumor size, with representative tumor measurements performed at the level of the external iliac vessels of 5.0\u2009cm \u00d7 2.1\u2009cm and at the ischiofemoral space of 4.3\u2009cm \u00d7 2.0\u2009cm Figures , specifiIn April 2021, a left total hip arthroplasty (THA) was performed successfully though a posterior approach, resecting all accessible tumors. A 6\u2009cm \u00d7 6\u2009cm \u00d7 5\u2009cm infiltrative brown specimen was removed during the posterior approach to the hip capsule, including an additional tumor that was removed from around the posterior acetabulum; severe arthritic changes to the hip joint were observed. Histologic review of the resected specimen (post-pexidartinib treatment) demonstrated a mixed population of cells. However, compared to the pretreatment sample, the tissue was less cellular, the foamy histiocytic component was increased, and the number of multinucleated giant cells were significantly decreased .The anterior pelvic disease was not pursued at the same time given the elevated risks of hip instability and infection associated with dual surgical approaches. Neoadjuvant pexidartinib treatment (400\u2009mg twice daily) was resumed the following month (May 2021) to treat the residual anterior intrapelvic disease. At the last orthopedic follow-up in April 2022, the patient had no hip pain, was ambulating unassisted, and exercising comfortably. Follow-up X-rays demonstrated a well-sized and fixed cementless total hip arthroplasty Figures . SurveilThe patient was a 35-year-old female with no past medical history who initially presented in 2008 at age 22 with foot swelling and pain. 
A radiograph in 2009 showed multiple low-signal intensity masses with diffuse heterogeneous enhancement about the joints of the midfoot extending approximately 5.8\u2009cm over the dorsal aspect of the foot from the base of the metatarsals over the head of the talus. Along the dorsal aspect of the navicular bone, the mass measured approximately 9\u2009mm medially. The thickest portion of the mass dorsally measured approximately 1.4\u2009cm overlying the cuneiforms.In January 2010, the patient accepted surgery, and a resection of the left foot mass was performed. Nearly 9 years later (December 2018), the patient was symptomatic again with foot swelling and pain, and radiographs showed osseus destructive changes of the first, second, and third cuneiforms and cuboid, representing recurrent disease. In addition, a large bulky soft tissue mass associated with the fourth and fifth metatarsal was observed as 3.78\u2009cm proximal-distal, 1.13\u2009cm medial-lateral, and 2.5\u2009cm dorsal-plantar. The mass was at low-signal intensity on T1 and T2, compatible with D-TGCT. At that point, the patient was considered a poor candidate for surgery, declined systemic treatment (imatinib and pexidartinib), and received three cortisone injections directly into the lesion between March 2019 and September 2019.In March of 2019, a core needle biopsy showed D-TGCT. Four months later at a follow-up visit in July 2019, an ill-defined soft tissue mass with associated erosions in the midfoot measuring 3.0\u2009cm \u00d7 2.0\u2009cm \u00d7 2.8\u2009cm was observed in the radiographs Figures .In September 2019, the patient had an Eastern Cooperative Oncology Group performance status score of 0 and was able to work with pain while taking naproxen. One month later, radiographs showed an erosion with associated edema and enhancement involving the medial first metatarsal head, and gout is also within the differential; similar-appearing erosions with associated soft tissue mass involving the midfoot were also observed. The mass measured 3.0\u2009cm in the anterior-posterior dimension and 2.0\u2009cm \u00d7 2.8\u2009cm in transverse dimension Figures .In October 2019, pexidartinib was started at 400\u2009mg daily. After 11 days of treatment, pexidartinib was held for nearly 3 weeks due to a grade 3 rash and then restarted at 200\u2009mg daily. A week later, pexidartinib was increased to 400\u2009mg daily (for 9 days) and 600\u2009mg daily thereafter.In March 2020 at a follow-up visit, an MRI Figures showed dIn January 2021, the MRI Figures showed nCurrently, the patient continues to have waxing and waning side effects and has been able to decrease the cognitive impairment by intermittently holding the dose. The dose of pexidartinib is back up to 400\u2009mg daily for improved symptom control, and the dose is being titrated based on symptoms and side effect profile.A 55-year-old male patient initially presented 2 years prior (at age 53 in 2017) with painful knee swelling and radiographs showing loose bodies. The patient was treated by aspirations of the cyst followed by subsequent cortisone injections, resulting in short-term improvement. The cyst returned and was more symptomatic with increased stiffness and pain in 2019. In October 2019, a left knee MRI showed a large fluid collection, a Baker's cyst (13\u2009cm), and a moderate effusion that was partially ruptured inferiorly. Three weeks later, a left knee arthroscopy with total synovectomy confirmed D-TGCT Figures . 
A popliAt a follow-up visit in June 2020, radiographs revealed persistent D-TGCT that increased, surrounding the cruciate ligaments. A 3.4\u2009cm \u00d7 2.0\u2009cm mass in the medial gastrocnemius, doubling of the thickened synovium around the cruciate ligaments, tibial articular erosions, and mild degenerative changes were observed, in addition to a slight reduction of the popliteal cyst, which measured at 11.5\u2009cm \u00d7 5.6\u2009cm \u00d7 0.4\u2009cm.One month later (July 2020), the patient started on pexidartinib at 200\u2009mg twice daily, which resulted in reduced pain and swelling. Treatment was halted after 5 weeks due to grade 4 neutropenia. Sections from the histological examination of the resection specimen show that the hypercellular region of the popliteal tumor is similThe tumor grew, and symptomatic swelling, stiffness, and pain increased while awaiting hematologic normalization. White blood cell counts were restored, and surgery was performed in December 2020. Specifically, the left posterior knee synovectomy showed D-TGCT (9.7\u2009cm \u00d7 7.5\u2009cm \u00d7 4.3\u2009cm in aggregate). The tumor showed areas of increased cellularity and increased mitotic activity (up to 19 mitoses/10\u2009hpfs). Areas of fibrosis and necrosis were consistent with therapy-related changes (estimated as 30% of the mass). The left anterior knee synovectomy also showed D-TGCT (10.0\u2009cm \u00d7 9.0\u2009cm \u00d7 3.0\u2009cm in aggregate) with focal areas of fibrosis consistent with therapy-related changes (~10%). There was no recurrence following surgery, and follow-up radiographs from January 2021 did not show a tumor.The next month (February 2021), the patient restarted adjuvant therapy with pexidartinib at 200\u2009mg twice daily for a planned 2 months. At a follow-up visit in April 2021, radiographs demonstrated extensive postoperative changes and edema but no recurrence of TGCT. No tumor was diagnosed; however, a small area along the lateral tibial plateau was reported as scar, blooming effect, representing postoperative metal artefactual particles or focal residual disease.In October 2021, the patient suffered a tear of the anterior cruciate ligament, and radiographs showed effusion plus surgical changes around the posterior tibial border. A new rupture of the anterior cruciate ligament and thinning of the posterior cruciate ligament were observed; there was no evidence for recurrent TGCT Figures .In cases in which surgery is contraindicated or presents a high morbidity risk, or those in which the tumor is unresectable, systemic treatment can be employed with the goal of reducing the size of the tumor and improving these patients' symptoms and quality of life. In an additional treatment pathway illustrated by these case examples, systemic treatment can also be utilized as a complementary treatment method to downstage the tumor turning patients who had unresectable disease into reasonable surgical candidates; suggesting a multidisciplinary approach can be beneficial in treatment of D-TGCT.Pexidartinib is a selective CSF1R inhibitor that targets the CSF1/CSF1R pathway involved in the pathogenesis of TGCT . AdminisIn our cases, pexidartinib was administered in combination with surgery at different stages . For the case in which D-TGCT was presented in the hip, the patient was treated with pexidartinib prior to surgery, resulting in a reduction of tumor volume size. Thereafter, a successful THA was performed to remove all the accessible tumors and address the severe arthritis. 
As of the last follow-up visit, there remains a lack of recurrence (currently 11 months postoperative), and the patient is currently on pexidartinib treatment.Regarding the patient who presented with disease in the foot, nearly a decade following surgery (resection of the left foot mass), the disease recurred and was treated with pexidartinib, as surgery would result in midfoot fusion and high likelihood of postsurgical morbidity. A high rate of complication and recurrence has been observed in the foot and ankle . For thiFor the case showing D-TGCT in the knee, the patient received pexidartinib prior to surgery and postoperatively. It was challenging to measure the extent of the disease due to multiple contributors, including a complex popliteal cyst with debris and the presence of degenerative arthritis changes. Episodic treatment is possible, and responses in the neoplastic and inflammatory components of the disease can be observed and may be discordant. A reduction in swelling and the size of the popliteal cyst was remarkable and sustained due to reduction of the inflammatory aspect of the disease, with the synovial disease growing when drug therapy was discontinued. Ultimately, the surgery and adjuvant therapy eradicated the disease as of the last follow-up (October 2022). The patient was seen in October 2022, responding to physical therapy with greater strength and range of motion, less pain, and no documented recurrence on a new MRI.After baseline intervention, these cases were followed with clinical and radiological MRI follow-up occurring approximately 3 months later. This allowed for evaluation of the success of therapy and offered a reference point for additional follow-ups. Thereafter and in accordance with current recommendations [The optimal treatment strategy in patients with D-TGCT is currently evolving. As surgery can result in a high recurrence rate and is associated with surgical morbidity , utiliza"} +{"text": "Litomosoides sigmodontis, a rodent filaria residing in the pleural cavity was therefore used to characterize pleuropulmonary pathology and associated immune responses in wild-type and Th2 deficient mice. Wild-type and Th2-deficient mice (-/-/Il-5-/-Il-4r\u03b1) were infected with L. sigmodontis and parasite outcome was analyzed during the patent phase . Pleuropulmonary manifestations were investigated and pleural and bronchoalveolar cells were characterized by RNA analysis, imaging and/or flow cytometry focusing on macrophages. -/-/Il-5-/-Il-4r\u03b1 mice were hypermicrofilaremic and showed an enhanced filarial survival but also displayed a drastic reduction of microfilaria-driven pleural cavity pathologies. In parallel, pleural macrophages from -/-/Il-5-/-Il-4r\u03b1 mice lacked expression of prototypical alternative activation markers RELM\u03b1 and Chil3 and showed an altered balance of some markers of the arginine metabolic pathway. In addition, monocytes-derived F4/80intermediate macrophages from infected -/-/Il-5-/-Il-4r\u03b1 mice failed to mature into resident F4/80high large macrophages. Altogether these data emphasize that the presence of both microfilariae and IL-4R/IL-5 signaling are critical in the development of the pathology and in the phenotype of macrophages. In -/-/Il-5-/-Il-4r\u03b1 mice, the balance is in favor of parasite development while limiting the pathology associated with the host immune response.Filarial parasites are tissue dwelling worms transmitted by hematophagous vectors. 
Understanding the mechanisms regulating microfilariae (the parasite offspring) development is a prerequisite for controlling transmission in filarial infections. Th2 immune responses are key for building efficient anti-parasite responses but have been shown to also lead to detrimental tissue damage in the presence of microfilariae. The close relationship of filariae with their hosts has generated a complex balance between the host immune system, the induced pathology and the survival/transmission of the parasite. A modified immune response of the host may result in an enhanced or reduced survival of the parasite, altered ability to transmit the offspring, and can exacerbate or diminish the parasite-induced pathology. Any combination of the two components, parasites development and host pathology, is possible. Thus, in some cases although parasite survival is enhanced and parasite load is higher, parasite-induced pathology is reduced.Litomosoides sigmodontis has been extensively used to analyze the contribution of prototypic Th2 cytokines IL-4, IL-13 and IL-5 but also IFN-\u03b3 through the course of filarial infection mice and all animals develop microfilaremia, in comparison to around 50% in WT mice and thus lacking IL-4/IL-13 signaling, leading to an absence of alternative activation of macrophages. Lack of IL-5 additionally impairs the maturation and recruitment of eosinophils and fibrosis or by the degranulation of their granules (cytotoxic molecules or enzymes) , 15\u201320. The anti-helminth qualities of macrophages are also well-documented. However, the mechanisms they employ to promote the killing of filariae are not fully elucidated. Macrophages have been directly involved in microfilaria killing through nitric oxide (NO) production \u201326. TheyL. sigmodontis infection that are maintained at homeostasis through self-renewal, independently of adult hematopoiesis \u201337. Macrnfection . Indeed,rophages .i.e. the metabolism via arginase or nitric oxide synthase and we further analyzed pleural and bronchoalveolar macrophage subsets to compare tissue-specific phenotypes during the patent phase of L. sigmodontis infection in low microfilaremic immunocompetent versus hypermicrofilaremic -/-/Il-5-/-Il-4ra BALB/c mice.Here, we investigated the two competing arginine pathways in macrophages, L. sigmodontis Chandler, 1931 and isolation of infective larvae (L3) from the mite vector, Ornithonyssus bacoti, were carried out as previously described . All mice were maintained and bred in the MNHN facilities on a 12-hour light/dark cycle. 6-8 weeks-old female mice were inoculated subcutaneously in the neck with a single dose of 40 L3.Maintenance of the filaria escribed , 40. BALMice were sacrificed at 50 and 70 days post-inoculation (dpi). Filariae were collected with pleural cells by flushing the pleural cavity 10 times with 1ml cold phosphate buffered saline (PBS) as described in . LikewisFilariae were counted, sexed and measured under a binocular microscope.After performing the bronchoalveolar lavage, lungs were exsanguinated. For this purpose, lungs and heart were removed from the thoracic cage and placed in a petri dish. A 23G needle was then inserted into the right ventricle of the heart to allow blood to flow out and 10ml of cold 1X PBS was injected into the left ventricle using another 23G needle. Post-exsanguination, lungs were inflated by injecting 1ml of ethanol 70% through the trachea. 
Lungs were preserved in ethanol 70% for microfilariae quantification by qPCR.Peripheral, cardiac and pleural microfilariae were quantified at 70 dpi in a 10\u00b5l drop of blood or 10\u00b5l of the first ml of pleural fluid stained with Giemsa. Microfilariae present in the lung exsanguination fluid collected in a petri dish were transferred to a 15ml tube and centrifuged . Red blood cells were removed by hypotonic lysis and microfilariae were diluted in 200\u00b5l of PBS and counted.\u00ae Glasstic Slide).To isolate microfilariae from the general circulation, the protocol described in , 42 was Lung DNA was extracted to quantify pulmonary microfilariae as described in . First, \u00ae No-ROX Kit (Bioline) in a LightCycler 480 (Roche Diagnostics) with an initial incubation of 10min (95 \u00b0C), 40 amplification cycles of 10 s (95 \u00b0C), 5 s (60 \u00b0C), and 10 s (72 \u00b0C), during which fluorescence data were collected. Filarial and murine DNA were detected by targeting \u03b2-actin of L. sigmodontis and \u03b2-actin of Mus musculus respectively. For each sample, the ratio (R) of signal (CT) from filarial and murine \u03b2-actin was performed to normalize the results as R = CT (L.s. actin)/CT (M.m. actin).Lungs from infected mice were homogenized in 500\u00b5l of PBS using a Tissue Lyser II (Qiagen). 100\u03bcl of homogenate solution was used for genomic DNA extraction according to the manufacturer\u2019s protocol and finally eluted in 150\u03bcl of sterile water. A real-time PCR was performed with the SensiFAST TM SYBR-/-/Il-5-/-Il-4r\u03b1 BALB/c mice was extrapolated using this ratio and the standard curve.The number of microfilariae in the lung of infected WT and Pleural and bronchoalveolar cells were washed twice in PBS prior to staining with LIVE/DEAD (Life Technologies) for 30\u00a0min at RT. Samples were blocked with murine Fc block CD16/CD32 before surface staining (20min on ice) with various specific fluorochrome-conjugated antibodies (see Supporting Information Table I for list of antibodies). For intracellular staining, samples were washed, permeabilized and stained for intracellular RELM\u03b1 for 30min.+ cells (gating strategy on Fluorescence Minus One (FMO) controls were used for each group with a pool of cells from all mice in the group. Cells were analyzed on a FACSVerse (BD Biosciences). Data was analyzed with FlowJo (FlowJo LLC). In order to compare macrophages population dynamics, samples were concatenated and a t-distributed stochastic neighbor embedding (tSNE) was performed on F4/805 for nitric oxide (NO) measurement and 5x105 for arginase quantification), 6-well plates (10x105) or 8-well Labtek chambered slides (Thermo Scientific) (1x105), in RPMI, HEPES 25mM, 10% FCS, 1% penicillin/streptomycin and 2mM glutamine. Cells were allowed to adhere on the substrate for 2h . Nonadherent cells were removed by gentle washing three times with warm PBS.Pleural cells were distributed in 24-well plates , lysed in Trizol (for subsequent RNA extraction) or detached (0.5ml Trypsin EDTA 1X during 5min at 37\u00b0C), counted and further cultured for 16h (NO measurement).2. Supernatants were collected and frozen at -20\u00b0C for quantification of collagen. 
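The relative quantification described above (R = CT(L.s. actin)/CT(M.m. actin), read against a standard curve built from known microfilariae numbers) can be sketched in a few lines. The log-linear form of the standard curve and every Ct value below are assumptions for illustration; the study's actual curve model is not specified here.
```python
# Minimal sketch of the lung microfilariae estimate from qPCR data described above.
import numpy as np

def ct_ratio(ct_filarial, ct_murine):
    """Normalized qPCR signal: Ct(L. sigmodontis beta-actin) / Ct(M. musculus beta-actin)."""
    return ct_filarial / ct_murine

# Hypothetical standard curve: samples spiked with known microfilariae numbers.
std_mf = np.array([1e2, 1e3, 1e4, 1e5])
std_ratio = np.array([1.45, 1.30, 1.15, 1.00])

# Assumed log-linear model R = a * log10(Mf) + b, fitted then inverted for unknowns.
a, b = np.polyfit(np.log10(std_mf), std_ratio, deg=1)

def mf_from_ratio(r):
    """Estimate the microfilariae count corresponding to a measured Ct ratio."""
    return 10 ** ((r - b) / a)

sample_r = ct_ratio(ct_filarial=26.4, ct_murine=21.9)
print(f"Ct ratio {sample_r:.3f} -> about {mf_from_ratio(sample_r):.0f} microfilariae")
```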
24h-cultured macrophages were lysed and frozen at -20\u00b0C for arginase activity.For arginase activity and quantification of collagen macrophages were cultured 24h in Iscove\u2019s modified DMEM at 37\u00b0C, 5% COThe macrophage purity was checked by flow cytometry using an anti-F4/80 antibody and was more than 90% was added to each well and incubated for one hour at 37\u00b0C. The plate was then centrifugated at 2 000g for 10 minutes and supernatants were removed. 100\u00b5l of absolute ethanol was added to each well and incubated for 2 minutes. The plate was centrifugated again for 10 minutes at 2 000g and supernatants removed. Pellets were resuspended in 200\u00b5l of 0.5M NaOH solution and incubated for 30 minutes at 37\u00b0C in the dark. The absorbance was read at 540nm and results were calculated with a two-fold dilution standard curve of collagen from 1mg/ml to 0.0078mg/ml.5 macrophages/well/200\u00b5l) and stimulated with 20\u00b5g/ml IFN-\u03b3 or 10\u00b5g/ml filarial antigen. For L. sigmodontis antigen, female adult filariae were rinsed in PBS and homogenized in 500\u00b5l PBS using a Tissue Lyser II (Qiagen) for 1 minute at 30Hz, twice. The homogenate solution was sonicated in an ice bath at 40% of amplitude during 2 x 5 cycles of 10 seconds sonication with 10 seconds rest intervals. Insoluble material was removed by centrifugation at 300g for 10\u00a0min and 4\u00b0C (2-). Briefly, equal amounts (100\u03bcl) of cell supernatants and Griess reagent were blended and incubated . The optical density values of assay mixture were obtained by reading the absorbance at 540nm with a microplate reader (Labsystems Multiskan MS). Nitrite content was determined from a calibration curve plotted with a series of known concentrations of sodium nitrite (\u00b5M).After inactivation of trypsin and washing with PBS, macrophages were resuspended in phenol-free RPMI, 10% FCS, 1% penicillin/streptomycin and 2mM glutamine and counted. Then macrophages were cultured for 16h in 96-well round bottom plates in triplicate and incubated for 10 minutes at 55\u00b0C. 25\u00b5l of each well were transferred in a new 96-well round bottom plate, 25\u00b5l of substrate solution were added and incubated for one hour at 37\u00b0C. 50\u00b5l of each well were transferred in a new 96-well round bottom plate, 200\u00b5l of reagent from the QuantiChrom urea assay kit were added and incubated 20 minutes at room temperature in the dark. Absorbance was read at 520nm and results were calculated according to manufacturer\u2019s instructions.50\u00b5l of lysed macrophages were transferred in a new 96-well round bottom plate, 50\u00b5l of activation solution were added (10mM MnClA solution containing 5 \u00b5L of pHrodo\u2122 Red E. coli BioParticles\u2122 Conjugate (Invitrogen) in 100\u00b5L of complete medium was added to macrophages for 30\u00a0min . Then 50\u00b5L of a solution containing fluorescently conjugated antibodies (1/400) and Hoechst (1/1000) was added; macrophages were further incubated for 30\u00a0min. Antibodies were the following: anti 1A/1E-BV421 , anti CD11c-AF594 , anti CD11b-PE , and anti F4/80-APC . Macrophages were then washed twice with 100\u00b5L PBS to remove excess of particles and antibodies and fixed by adding 100\u00b5L of 4% PFA for 5min. Samples were washed twice with PBS then the wells from the Labtek slides were removed and samples were coverslipped using VectaMount (Vector Laboratories).Slides were imaged on a Zeiss LSM880 confocal microscope as previously described . 
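A minimal sketch of the calibration-curve step used for the Griess read-out above (blank-corrected OD at 540 nm converted to nitrite via sodium nitrite standards). Piecewise-linear interpolation between standards is an assumption chosen for simplicity, and all numbers are placeholders rather than values from the study.
```python
# Griess assay read-out sketch: convert blank-corrected OD540 values to nitrite
# concentrations using a calibration curve of sodium nitrite standards.
import numpy as np

nitrite_standards_um = np.array([0.0, 3.125, 6.25, 12.5, 25.0, 50.0, 100.0])
standard_od540 = np.array([0.00, 0.03, 0.06, 0.12, 0.24, 0.47, 0.93])

def od_to_nitrite(od_values):
    """Interpolate sample ODs onto the standard curve (OD -> uM nitrite)."""
    return np.interp(od_values, standard_od540, nitrite_standards_um)

sample_od = np.array([0.05, 0.18, 0.40])   # blank-corrected sample ODs
print("nitrite (uM):", np.round(od_to_nitrite(sample_od), 1))
```
The same fit-then-interpolate arithmetic applies to the collagen and urea standard curves described in the preceding paragraphs.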
Acquisi+F4/80+ cells.For cell segmentation, fluorescence signals from the membrane markers were summed and the Cellpose algorithm was appl2O. Finally, samples were incubated and then kept on ice. RNA was quantified with a Qubit Fluorometer (Thermo Scientific). After treatment with DNase, complementary DNAs (cDNAs) were synthesized with Superscript\u00ae IV reverse Transcriptase (Thermofisher). Transcripts of genes implicated in immune resolving and/or tissue repair were quantified by qRT-PCR with the LightCycler 480 II system and specific primers (see Supporting Information Table II for list of primer used). PCR amplification was analyzed by the E-\u0394\u0394CT method and expression of the gene of interest was normalized by the expression of housekeeping genes \u03b2-glucuronidase and \u03b2-actin.RNA extractions were performed with a phenol-chloroform solution. 200\u03bcl of chloroform was added in the Trizol solution containing cells and the sample was incubated before being centrifuged . The aqueous phase containing RNA was transferred to a new tube and 500\u03bcl of 100% isopropanol was added. After 10min at RT, samples were centrifuged for . The RNA pellet was resuspended in 1ml of 75% ethanol and centrifuged prior to drying for at least 30min. Then RNA was eluted on ice with 40\u03bcl of RNase-free H-/-/Il-5-/-Il-4r\u03b1). Specific numbers of animals can be found in corresponding figure legends. Sketches were made using the Servier Medical Art image bank (https://smart.servier.com).Representation and data analysis were performed with Prism 9.0 software (GraphPad Inc.). Data from independent experiments were pooled when possible. Data of microfilaremia were analyzed with a Student\u2019s t-test. All other results were analyzed by two-way ANOVA test to determine the effect of factors (group and/or time), followed by a Bonferroni\u2019s multiple comparisons post-test when test application conditions were met . Otherwise a log or square transformation (depending on the skewness of the distribution of the variable) has been performed before the two-way ANOVA analysis. In all figures, the mean value is visually depicted. P values correlate with symbols as follows: *p<0.05, **p<0.01, ***p<0.001 represent differences between infected groups ; $p<0.05, $$p<0.01, $$$p<0.001 represent differences between mice strains in microfilaremic mice and gerbils and Mrc1 (CD206) was determined by qRT-PCR to control mice whereas those from WT infected mice had high levels of IFNgR1 stained for Ly6C, MHCII and Siglec-F was performed on concatenated samples and, in addition to the impairment of Th2 and arginase pathways . These newly arrived monocyte-derived alveolar macrophages were shown to transiently express MHCII and low-intermediate levels of Siglec-F . When stdblGata) , 58, 59 dblGata) similar dblGata) . EosinopdblGata) , 61\u201365 odblGata) are impo-/-/Il-5-/-Il-4r\u03b1 mice. We therefore decided to explore how macrophages are affected in the altered Th2 context of -/-/Il-5-/-Il-4r\u03b1 mice and whether they could also participate to the initiation and maintenance of tissue fibrosis . Arginase 1 expression is induced by anti-inflammatory signals such as IL-4, IL-13, IL-10 or TGF\u03b2 , 66, 67,ted mice , 68, 69.ted mice , 48, 66.n status , 71.in vitro when stimulated with IFN-\u03b3, suggesting that NO must be produced in vivo in the pleural cavity. 
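For reference, the E-ΔΔCT analysis mentioned in the qRT-PCR paragraph above reduces to the arithmetic below. Treating the two housekeeping genes as a single averaged reference and assuming an amplification efficiency of 2 are simplifications for this sketch, not necessarily the study's exact implementation; all Ct values are made up.
```python
# Relative expression by the E^-ddCt method:
# dCt = Ct(target) - Ct(reference); ddCt = dCt(sample) - dCt(calibrator);
# relative expression = E ** (-ddCt), with E = 2 assuming 100% efficiency.

def reference_ct(ct_gusb, ct_actb):
    """Combine the two housekeeping gene Ct values (simple mean, an assumption)."""
    return (ct_gusb + ct_actb) / 2.0

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_calibrator, ct_ref_calibrator,
                        efficiency=2.0):
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return efficiency ** (-dd_ct)

# Example with invented Ct values: infected vs. naive (calibrator) pleural cells.
fold_change = relative_expression(
    ct_target_sample=24.1,
    ct_ref_sample=reference_ct(ct_gusb=20.3, ct_actb=18.9),
    ct_target_calibrator=27.0,
    ct_ref_calibrator=reference_ct(ct_gusb=20.1, ct_actb=19.2),
)
print(f"fold change vs. calibrator: {fold_change:.2f}")
```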
Previous reports indicated that even if NO can kill Mfs in vitro, inhibition of nitric oxide synthesis or use of NOS-deficient mice did not affect microfilaremia in vivo , suggesting that a competent Th2 signaling in pleural macrophages is important for inducing inflammatory responses. However, when cultured with bacterial bio-particles, Th2-deficent macrophages displayed increased phagocytosis capacities compared to their WT counterparts. Live microfilariae and adult worms are too big and fast-moving to be directly phagocytosed (-/-/Il-5-/-Il-4r\u03b1 mice does not confer a competitive advantage.Through the action of the nitric oxide synthase (NOS), arginine can also be metabolized to nitric oxide (NO) and citrulline , 48, 66. in vivo , 76. How in vivo , 78. NO ocytosed . It is t-/-/Il-5-/-Il-4r\u03b1 mice could therefore be a reason for decreased pathology in these animals. Arg1 and NOS2 compete for L-arginine arginine, so a disequilibrium towards one side could result in the higher Arg1 activity in -/-/Il-5-/-Il-4r\u03b1 mice. Interestingly, a recent report in a pulmonary nematode infection (Nippostrongylus brasiliensis) showed that alveolar macrophages can mediate parasite killing by locally deleting L-arginine through Arg1 and small F4/80intermediate monocyte-derived macrophages (MoMac), the latter being able to replenish the F4/80high pool ResMac was similar, the ResMac population was almost absent in -/-/Il-5-/-Il-4r\u03b1 mice. This suggests that, in absence of potent Th2 immune responses, MoMacs fail to mature into ResMac. The absence of difference in na\u00efve mice could be due to the fetal origin of the initial pool of pleural macrophages were ne-/-/Il-5-/-Il-4r\u03b1 mice highlights the subtle balance necessary to control infections while maintaining tissue homeostasis are the most efficient cells to kill adult parasites and microfilariae. -/-/Il-5-/-Il-4r\u03b1 mice, as they lack both eosinophils and ResMac therefore allow an exceptional survival, growth and reproduction of the parasite. We need a holistic view of filarial infection to better understand the pathogenesis mechanisms and target specific pathways to maintain tissue integrity while allowing efficient parasite killing.As a conclusion, the infection of The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.All experimental procedures were carried out in accordance with the EU Directive 2010/63/EU and the relevant nationallegislation, namely the French \u201cD\u00e9cret No. 2013-118, 1er f\u00e9vrier 2013, Minist\u00e8re de l\u2019Agriculture, de l\u2019Agroalimentaire et de la Foret\u201d. Protocols were approved by the ethical committee of the Museum National d\u2019Histoire Naturelle and by the Direction D\u00e9partementale de la Coh\u00e9sion Sociale et de la Protection des Populations (DDCSPP) (No. D75-05-15).CM, FF and ER contributed to conception and design of the study. Investigation: ER, JG, JR, SC, NL-V, JA, FF and CM. Formal analysis: ER, JG, and CM. Statistics: ER, JG and CM. Writing \u2013 original draft: ER, JG and CM. Writing \u2013 review & editing: ER, JG, LK, MH, FF and CM. All authors contributed to manuscript revision, read, and approved the submitted version. All authors contributed to the article and approved the submitted version.Core funding from the Museum National d\u2019Histoire Naturelle. European Community grant H2020-EU.3.1.3.-HELP-815628. 
French Agence Nationale de la Recherche (ANR) grant, Project WOLF (ANR-21-CE13-0029). We thank Geraldine Toutirais from the MNHN Electron Microscopy facility for assistance with SEM imaging. We thank Cyril Willing from the MNHN light microscopy facility for assistance with confocal imaging. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "Plasmodium vivax blood-stage invasion into reticulocytes is critical for parasite development. Thus, validation of novel parasite invasion ligands is essential for malaria vaccine development. Recently, we demonstrated that EBP2, a Duffy binding protein (DBP) paralog, is antigenically distinct from DBP and could not be functionally inhibited by anti-DBP antibodies. Here, we took advantage of a small outbreak of P. vivax malaria, located in a non-malarious area of Brazil, to investigate for the first time IgM/IgG antibodies against EBP2 and DEKnull-2 (an engineered DBPII vaccine) among individuals who had their first and brief exposure to P. vivax (16 cases and 22 non-cases). Our experimental approach included four cross-sectional surveys at 3-month intervals (12-month follow-up). The results demonstrated that while a brief initial P. vivax infection was not sufficient to induce IgM/IgG antibodies to either EBP2 or DEKnull-2, IgG antibodies against DEKnull-2 (but not EBP2) were boosted by recurrent blood-stage infections following treatment. Of interest, in most recurrent P. vivax infections (4 out of 6 patients) DEKnull-2 IgG antibodies were sustained for 6 to 12 months. Polymorphisms in the ebp2 gene do not seem to explain the low immunogenicity of EBP2, as the ebp2 allele associated with the P. vivax outbreak presented high identity to the original EBP2 isolate used as recombinant protein. Although EBP2 antibodies were barely detectable after a primary episode of P. vivax infection, EBP2 was highly recognized by serum IgG from long-term malaria-exposed Amazonians . Taken together, the results showed that individuals with a single and brief exposure to P. vivax infection develop very low anti-EBP2 antibodies, which tend to increase after long-term malaria exposure. Finally, the findings highlighted the potential of DEKnull-2 as a vaccine candidate, as in non-immune individuals anti-DEKnull-2 IgG antibodies were boosted even after a brief exposure to P. vivax blood stages. P. vivax has been focused on region II of the Duffy binding protein (DBPII), a ligand for human blood-stage infection. Recently, the newly described Erythrocyte binding protein 2 (EBP2), a P. vivax DBP paralog that is antigenically distinct from DBP, was identified as a potential vaccine target. To date, scarce data are available about the naturally acquired immunity to EBP2. In a small outbreak of P. vivax malaria, located in a non-malarious area, we investigated whether a first P. vivax exposure induces antibodies against EBP2 that could be boosted by P. vivax recurrent infections.
In parallel, we included an engineered DBPII vaccine (named DEKnull-2) whose antibody response were previously associated with broadly neutralizing P.vivax antibodies. This study shows EBP2, compared with DEKnull-2, was poorly immunogenic among individuals who experienced their first blood-stage P. vivax malaria infection. However, EBP2 was highly immunogenic in long-term malaria exposed individuals, reinforcing its potential as a P. vivax blood-stage vaccine candidate. Finally, our results reinforce that multiple blood-stage antigens should be targeted for the development of efficient vaccines against P. vivax.Vaccines might be a crucial component of the current efforts to malaria control and elimination, and much of the vaccine-related research on Plasmodium parasite that infect humans, Plasmodium vivax is the most widespread outside the African continent [P. vivax, drug resistance, and recurrent relapses by reactivation of liver stage hypnozoites are causes for concern [Malaria remains a major public health concern despite all efforts for control. The World Health Organization registered 241 million malaria cases in 2020 with an estimated 12% increase in death rate compared to 2019 [ concern . TherefoPlasmodium spp merozoite is a multistep process mediated by molecular interaction between erythrocyte receptors and parasite ligands, and it is essential for parasite development [P. vivax, the leading blood-stage vaccine candidate, the Duffy binding protein , is involved in the interaction between the parasite and its receptor on reticulocytes, the Duffy antigen/ receptor for chemokines (DARC) [P. vivax DBP paralog, and novel member of Erythrocyte binding-like family, termed Erythrocyte binding protein 2 (EBP2), was identified in field isolates [high) reticulocytes [P.vivax invasion pathway.The invasion of the erythrocytes by elopment \u201310. Thuselopment \u201314. For s (DARC) \u201317. Alths (DARC) \u201322, highs (DARC) ,23. Receisolates . EBP2 shisolates but it iisolates . Of inteulocytes , which sP. vivax antigens on the reduced risk of vivax malaria in children from Papua New Guinea (PNG) [P. vivax malaria in children from PNG was associated with the antibodies against EBP2 [A recent study investigated the potential existence of synergistic or additive effects of combinations of antibody responses to a panel of 38 ea (PNG) . The resea (PNG) . More innst EBP2 , howevernst EBP2 \u201330, withnst EBP2 ,32. To onst EBP2 .P. vivax blood-stage, we investigated here whether a first P. vivax exposure is able to induces antibodies against EBP2 and DEKnull-2, and if these responses could be boosted by P. vivax relapses/recurrence. This study took advantage of an outbreak of P. vivax malaria, in a non-endemic area in Brazil. We demonstrated that EBP2 was poorly immunogenic among individuals who experienced their first blood-stage P. vivax malaria infection compared with DEKnull-2. However, EBP2 was shown to be highly immunogenic in long-term malaria exposed individuals.Scarce data are available on naturally acquired immunity to the newly described EBP2, most of which are restricted to Southeast Asian ,27,34. Chttp://www.fiocruz.br/biosseguranca/Bis/manuais/biosseg_manuais.html).The ethical and methodological aspects of this study were approved by the Ethical Committee of Research on Human Beings of the Institute Ren\u00e9 Rachou / FIOCRUZ Minas . 
The study participants were informed about the aims and procedures of the study and voluntary participation solicited and agreed with voluntary participation through written formal consent. For the child participants, the written formal consent was obtained from the parent/guardian. The current study was conducted according to Laboratory biosafety and biosecurity policy guidelines of the Oswaldo Cruz Foundation in 2003, with the last malaria case diagnosed on 21 May 2003; since then, local/regional of Minas Gerais Departments of Health had maintained entomological and epidemiological surveillance of the area until the end of 2003. The entomological surveys incriminated the vector Anopheles darling as responsible for local malaria transmission [P. vivax infection by microscopy (Case). In that time, all patients were promptly treated with chloroquine (1.5 g for 3 days) plus primaquine (30 mg daily for 7 days) and followed-up. In the case of relapses and/ or recrudescence, a second round of treatment was administered (3-day course of chloroquine and a 15-day course of primaquine). Sixteen out of 25 cases were enrolled in the current study, and 6 out of the 16 (38%) cases experienced one or two recurrent P. vivax infections. All P. vivax recurrent infection were confirmed by thick blood smears and DNA sequencing of a single dbpII allele [P. vivax transmission but did not develop blood-stage infection were included as non-Cases from a rural community of the Amazon rain forest were included in the current study .To evaluate the influence of time on malaria exposure and acquisition of naturally antibody response to recombinant nt study . The detEBP2. Recombinant EBP2, which includes amino acids 159\u2013485 from C127 Cambodian isolate [Escherichia coli, cloned into pET21a vector, with a C-terminal 6xHis tag. After expression, recombinant EBP2 was purified from inclusion bodies by affinity chromatography using Ni+ Sepharose 6 fast flow (GE Lifesciences), and refolded by a rapid dilution, resulting in a 37 kDa protein as previously described [ isolate , was codescribed .DBPII-based antigens. Recombinant engineered vaccine DEKnull-2 [P. vivax Duffy binding protein (DBPII) (243aa\u2013573aa), and DBPII Souza isolate from the outbreak (DBPII-outbreak) [E. coli as 39 kDa protein fusion with 6xHis tag and purified as previously described [EKnull-2 , based outbreak) were expescribed ,38.MSP1-19. The 19-kDa C-terminal fragment of the Merozoite Surface Protein-1 of P. vivax (MSP1-19), which represents amino acids 1616\u20131704 of the full-length MSP-1 polypeptide, was expressed as a 6xHis tag fusion protein and purified as described previously [eviously .P. vivax proteins was previously titrated, and defined as 1.5\u03bcg/ml for EBP2, 3\u03bcg/ml for DBPII-outbreak and DEKnull-2, and 1\u03bcg/ml for MSP1-19. Plasma samples were diluted at 1:400 and 1:100 for IgM and IgG, respectively. Peroxidase-conjugated IgM and anti-IgG were used as secondary antibody at 1:5000 dilution. Results were expressed as ELISA reactivity index (RI) for each protein, calculated as the ratio of the mean optical density (OD at 492nm) of sample to the mean OD plus three standard deviations of 20\u201330 unexposed volunteers. Values of RI > 1.0 were considered positive.Plasma IgM and IgG antibodies level were measured by conventional ELISA ,41. Brieebp2 gene of P. 
vivax from the outbreak isolate were designed using Cambodian field isolate (C127) as a reference (accession number: KC987954) [ebp2 (979 bp) and to obtain high quality of the full ebp2 sequence, we designed three sets of overlapping primers that covers a region beyond the DBL of EBP2, corresponding to nucleotides 201 to 1618 (aa 68\u2013535) (http://www.mbio.ncsu.edu/BioEdit/bioedit.html) and Chromas version 2.6.6 (http://technelysium.com.au/wp/chromas/). A total of six reads of each fragment was used for alignment and construction of the sequence contig of ebp2 (EBP2 outbreak) which was compared to the reference sequence of C127 isolate.The primer sets used for the amplification and sequencing of C987954) . To cove 68\u2013535) . The PCRwww.graphpad.com) and the R statistical software (version 3.3.2). Differences in proportions were evaluated by chi-square (\u03c72) or Fisher\u2019s exact tests, as appropriate. Shapiro-Wilk test was performed to evaluate the normality distribution of variables. Differences in means or medians of antibody levels among the groups were performed using one-way ANOVA or Kruskal-Wallis test followed by Tukey\u2019s or Dunn\u2019s post hoc test, as appropriate. Multivariate logistic regression models were built to describe independent associations between covariates and antibodies against EBP2 and DEKnull-2. All analyses were considered statistically significant at the 5% level (P < 0.05).The graphics and analysis were performed using GraphPad Prism version 9 . In addition, the well-characterized P. vivax MSP1-19 was included as a highly immunogenic P. vivax blood-stage antigen. The demographic, epidemiological and immunological data at enrollment of case and non-case are summarized in In the P. vivax outbreak showed detectable antibodies at 3 and 12-month of follow-up Recurrence\u2013individuals who experienced at least one additional episode of blood-stage P. vivax infection after their primary clinical attack (n = 6); (ii) No-recurrence\u2013cases who did not have additional P. vivax blood-stage infections (n = 10).Next, we sought to investigate the influence of MSP1-19 . IndividP. vivax recurrent infections, it was not possible to detect booster effect on EBP2 or DEKnull-2 IgM antibodies, as all individuals remained with undetectable antibody response . The profile of IgG antibodies to the DBPII-outbreak allele was similar to DEKnull-2, although the frequency and intensity of IgG response to DBPII seems to decrease more rapidly .The evaluation of naturally acquired antibody response to EBP2 among individuals who experienced their first and brief P. vivax malaria outbreak group, the proportion of EBP2 responders was quite different in individuals living in the malaria endemic region . Moreover, multiple malaria episodes were associated with much higher levels of IgG antibodies (p<0.0001). A similar profile of antibody response was observed with DEKnull-2 whose immune response has been associated with strain-transcending antibodies [P. vivax infection was not sufficient to induce significant IgM/ IgG antibodies to either EBP2 or DEKnull-2. Unexpectedly, EBP2 antibodies were not boosted by P. vivax recurrent infections following antimalarial treatment; at the time of the outbreak, it was not possible to differentiate recrudescence due to therapeutic failures or relapses arising from persistent liver stages of the parasite (hypnozoites) [P. vivax infections, as well as antibodies against the homologous DBPII variant linked to the outbreak. 
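Before turning to the discussion, the ELISA reactivity index defined in the methods above (mean sample OD divided by the mean OD of unexposed controls plus three standard deviations, with RI > 1.0 scored positive) is simple to compute; every OD value below is invented for illustration only.
```python
# Reactivity index (RI) for the ELISA described above:
# RI = mean OD of the sample / (mean OD of unexposed controls + 3 * SD of controls).
# Samples with RI > 1.0 are scored seropositive. All OD values are placeholders.
from statistics import mean, stdev

def reactivity_index(sample_ods, control_ods):
    cutoff = mean(control_ods) + 3 * stdev(control_ods)
    return mean(sample_ods) / cutoff

unexposed_controls = [0.08, 0.10, 0.09, 0.11, 0.07, 0.10, 0.09, 0.12,
                      0.08, 0.11, 0.09, 0.10, 0.08, 0.09, 0.11, 0.10,
                      0.09, 0.08, 0.10, 0.11]          # 20 negative-control ODs

duplicate_wells = {"EBP2": [0.09, 0.11], "DEKnull-2": [0.35, 0.31]}

for antigen, ods in duplicate_wells.items():
    ri = reactivity_index(ods, unexposed_controls)
    status = "positive" if ri > 1.0 else "negative"
    print(f"{antigen}: RI = {ri:.2f} ({status})")
```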
These results with DEKnull-2 are of interest because we and others have demonstrated before that naturally acquired DBPII antibodies tend to be short-lived and biased towards strain-specific responses [Efforts to prioritize bed EBP2 ,27. Whilbed EBP2 ,27,34, wtibodies . The resozoites) . On the esponses ,32. Of Iesponses , reinforP. vivax primo infections are not known, it is possible to speculate that polymorphisms in ebp2 gene could be a factor, as data from Southeast Asian suggested that region II of EBP2 is highly polymorphic [ebp2 gene does not seem to explain EBP2 low immunogenicity because the ebp2 allele from the P. vivax outbreak showed high sequence identify to the reference Cambodian isolated C127 (used here as recombinant protein). Specifically, these alleles differed by a single nucleotide polymorphism (G1057A). A more plausible explanation to the low immunogenicity of EBP2 in P. vivax primo infected may be related to the host cell specificity as we have previously demonstrated that EBP2 binding properties is much more restricted than observed for DBPII, linking preferentially to Duffy-positive immature bone marrow reticulocytes (CD71high) [Plasmodium reticulocytes invasion takes less than one minute [P. vivax infections are required to induce a significant and specific antibody response. This evidence supports our finding that a low EBP2 immunogenicity after a first P. vivax infection is followed by high recognition (>90%) after long-term malaria exposure in the Amazonian area. These results, although there are limitations of small number of sample size from the P. vivax outbreak, were reinforced by related studies that showed EBP2 was poorly immunogenic in PNG children but anti- EBP2 antibodies levels were positively correlated with age and cumulative exposure [P. vivax reticulocyte invasion are necessary to further validate EBP2 as a potential candidate to partner with DBPII and related antigen, DEKnull-2, in a multivalent blood-stage vaccine against P. vivax.Although the reasons for the absence of anti-EBP2 booster in ymorphic . HoweverD71high) . In addie minute ,42,43, te minute ,44,45. Cexposure ,27, and exposure , furtherP. vivax exposure, but it is highly immunogenic after long-term malaria exposure. Finally, the findings further supported the potential of DEKnull-2 as a vaccine candidate, as in non-immune individuals anti-DEKnull-2 IgG antibodies were boosted after brief exposure to P. vivax blood stages.Taken together, our results showed that EBP2 was poorly immunogenic after a single and brief S1 FigTo sequence the full Duffy Binding-like (DBL) domain of EBP2, three sets of primers were designed to amplify three overlapping fragments. Positions of primers are indicated (Forward and Reverse): Fragment 1 in pink (position 201bp to 870bp), Fragment 2 in blue (position 712bp to 1262bp) and Fragment 3 in green (position 996bp to 1618bp).(TIF)Click here for additional data file.S2 FigP. vivax DBPII-outbreak in individuals with acute P. vivax infection , and relatives and/ or neighbors without malaria symptoms . The IgM and IgG antibody responses were evaluated at the P. vivax outbreak , 3, 6 and 12 months after the outbreak for both the Case and Non-Case groups. Serum reactivity was expressed as ELISA Reactivity Index (RI). Percentage (%) of antigen-specific IgM and IgG positive was expressed at the top of the graph. The dashed line represents RI = 1. Samples with RI > 1.0 are considered positive. (B) Heatmap of influence of P. 
vivax recurrence on the IgM and IgG antibody responses to P. vivax DBPII. Individuals who experienced first P. vivax malaria infection (Cases) were grouped into: (i) Recurrence (n = 6)\u2013individuals who experienced one or two additional recurrent P. vivax infection; and (ii) No-recurrence (n = 10)\u2013individuals who did not have additional blood-stage P. vivax infection. The color gradient indicates the intensity of IgM (red) and IgG (blue) antibody levels categorized by tercile in High (Upper tercile), Medium (Second tercile) and Low (First tercile) for each protein. The time points of follow-up study and recurrent P. vivax infection moment were indicated at the heatmap.(A) Frequency and level of IgM and IgG antibody against (JPG)Click here for additional data file.S3 FigP. vivax recurrence; (B) Parasitemia (parasites/\u03bcL) (dashed line) and IgG antibody level against DBPII (continuous line), expressed by Reactivity index (RI) for each individual experienced P. vivax recurrence (3 and 6 months after the first P. vivax infection). The x-axis represents the time of P. vivax recurrence (3 and 6 months after the first P. vivax infection).(A) Parasitemia (parasites/\u03bcL) (dashed line) and IgG antibody level against DEKnull-2 (continuous line), expressed by Reactivity index (RI) for each subjects experienced (TIFF)Click here for additional data file.S1 Table(DOCX)Click here for additional data file.S1 Data(XLS)Click here for additional data file."} +{"text": "Klebsiella pneumoniae (ESBL-KP) and Escherichia coli (ESBL-EC) present a high burden in both communities and healthcare sectors, leading to difficult-to-treat infections. Data on intestinal carriage of ESBL-KP and ESBL-EC in children is scarce, especially in sub-Saharan African countries. We provide data on faecal carriage, phenotypic resistance patterns, and gene variation of ESBL-EC and ESBL-KP among children in the Agogo region of Ghana.Extended-spectrum beta-lactamase (ESBL)-producing blaSHV, blaCTX-M, and blaTEM were identified by PCR and further sequencing.From July to December 2019, fresh stool samples were collected within 24\u00a0h from children\u2009<\u20095 years with and without diarrhoea attending the study hospital. The samples were screened for ESBL-EC and ESBL-KP on ESBL agar and confirmed using double-disk synergy testing. Bacterial identification and an antibiotic susceptibility profile were performed using the Vitek 2 compact system . ESBL genes, blaCTX-M-15 was the most prevalent ESBL gene detected. blaCTX-M-27, blaCTX-M-14, and blaCTX-M-14b were found in non-diarrhoea stools of children, whereas blaCTX-M-28 was found in both the diarrhoea and non-diarrhoea patient groups.Of the 435 children recruited, stool carriage of ESBL-EC and ESBL-KP was 40.9% (n/N\u2009=\u2009178/435) with no significant difference in prevalence between children with diarrhoea and non-diarrhoea. No association between ESBL carriage and the age of the children was found. All isolates were resistant to ampicillin and susceptible to meropenem and imipenem. Both ESBL-EC and ESBL-KP isolates showed over 70% resistance to tetracycline and sulfamethoxazole-trimethoprim. Multidrug resistance was observed in over 70% in both ESBL-EC and ESBL-KP isolates. The blaCTX-M-15 is noteworthy, highlighting the importance of both the population as a possible reservoir. 
This study reports for the first time the ESBL gene blaCTX-M-28 among the studied populations in Ghana.

Antimicrobial resistance (AMR) is one of the top ten global health threats to humans. ESBL genes, which occur at high prevalence in Klebsiella spp. and Escherichia coli, are rapidly spreading among other bacteria through plasmid-mediated horizontal gene transfer. In sub-Saharan Africa, ESBL-producing E. coli (ESBL-EC) and Klebsiella pneumoniae (ESBL-KP) are of particular concern. E. coli and K. pneumoniae are causative agents of infections such as bacteraemia, urinary tract infections, and diarrhoea, particularly among children, in both hospital and community settings.

Admittedly, E. coli and K. pneumoniae isolated from the stools of patients are not typically associated with diarrhoea, except for diarrheagenic E. coli such as STEC and EHEC. Other enteric pathogens, such as Salmonella enterica, are associated with severe diarrhoea. If such pathogens are ESBL producers, they can cause difficult-to-treat infections leading to life-threatening complications. Recent studies have highlighted faecal carriage as a significant reservoir of ESBL-producing bacteria in hospitals and communities.

This study is descriptive in nature, aiming to determine the prevalence, antibiotic resistance, and gene variation of ESBL-EC and ESBL-KP in children with and without diarrhoea attending a rural hospital and child clinic in Agogo, Ghana.

A cross-sectional study was conducted to determine the frequency, antibiotic resistance, and gene variation of ESBL-EC and ESBL-KP in children with and without diarrhoea. The study was conducted at the Agogo Presbyterian Hospital (APH) and a selected Child Welfare Clinic (CWC), where parents and guardians take their children under five years of age for routine check-ups, in the Agogo community in the Asante Akyem municipal district, Ashanti region of Ghana. Between June and December 2019, children below five years of age living in Agogo and nearby communities were recruited and categorized into one of two groups: (1) children with diarrhoea or episodes of diarrhoea within the last 72 h; and (2) children without diarrhoea and symptom free, attending the CWC for routine immunization and growth monitoring, with no history of diarrhoea for at least one month before study enrolment.

The study was approved by the Committee on Human Research, Publications, and Ethics at the School of Medical Science, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana, and by the German Medical Association (CHRPE/AP/593/17 and CHRPE/AP/119/22). All participants were informed about the purpose of the study. Written informed consent was obtained from the parent or guardian of each child before study enrolment.

Stool samples were collected in sterile containers. In cases where stool samples were not readily available, a team member followed up with the parent or guardian to obtain a stool sample within 24 h. All stool samples were transported in a cool box at 2–8 °C to the laboratory of the Kumasi Center for Collaborative Research in Tropical Medicine (KCCR) for analysis within 4 h of sampling. To monitor the temperature, an ice pack was placed in a Va-Q-bagi together with a thermometer. Presumptive colonies of E. coli and K.
pneumoniae, were selected and sub-cultured on blood agar (Columbia Agar supplemented with 5% sheep blood) for isolation of pure colonies. The VITEK 2 Compact system, using Gram-negative bacteria identification (GN ID) cards and antibiotic susceptibility testing (AST) N214 cards, was used for identification and antimicrobial susceptibility profiling of the bacterial isolates . Tested antibiotics included penicillin , carbapenems , fluoroquinolones (ciprofloxacin), tetracyclines (tetracycline), aminoglycosides (gentamicin), and trimethoprim/sulfamethoxazole. Results were interpreted according to the guidelines of the European Committee on Antimicrobial Susceptibility Testing .Stool samples were cultured on two MacConkey agar plates supplemented with 1\u00a0mg/L ceftazidime and 1\u00a0mg/L cefotaxime, respectively. Plates were incubated at 35\u201337\u00b0C for 18\u201324 hours in a normal atmosphere. Lactose-fermenting colonies (not more than three colonies), with typical morphology presumptive of E. coli and K. pneumoniae were considered MDR if they were resistant to at least three classes of antibiotics. Quality control of each batch of the MacConkey agar containing 1\u00a0mg/L ceftazidime and 1\u00a0mg/L cefotaxime was performed using E. coli ATCC 25922 and a blaCTX-M positive E. coli.ESBL-producing bacteria were further confirmed using the combined double-disk synergy test with cefotaxime and ceftazidime alone or in combination with clavulanic acid as described by the EUCAST guidelines version 9.0 (2019). In this study, ESBL-producing isolates of blaCTX-M (cefotaximase-Munich), blaTEM (Temoneira), and blaSHV (sulfhydryl variable enzyme) as described elsewhere [blaCTX-M genes, previously designed specific target primers (Table\u00a0https://cge.cbs.dtu.dk/services/ResFinder).For all confirmed ESBL-producing bacteria, a 10\u00b5L loopful of overnight pure colonies were transferred into saline. The solution was briefly vortexed, and the supernatant was discarded. The pellet was treated with 100 \u00b5L TE buffer (10:1) and heated at 95\u00a0\u00b0C for 5\u201310\u00a0min. The mixture was then centrifuged for 2\u00a0min. The supernatant containing the DNA was used for PCR analysis and sequencing. Molecular characterization of ESBL genes were performed by PCR for the presence of lsewhere , 18. To Sociodemographic, clinical, and microbiological data collected from children enrolled in the study were entered and cleaned in Microsoft Excel. Statistical analyses were performed using the R language for statistical computing, version 4.0.2. All study variables were categorical and presented as frequencies with percentages. The Chi-squared test was used to compare count data against the null hypotheses. In cases where a cell had 5 values or less, Fisher\u2019s exact test was applied.A total of 435 children were enrolled in the study. Of these, 47.1% n/N\u2009=\u2009205/435) presented with diarrhoea to the study hospital, and 52.8% (n/N\u2009=\u2009230/435) were recruited among children without diarrhoea (Table\u00a005/435 prp\u2009=\u20090.74), ESBL-EC (p\u2009=\u20090.52) or and ESBL-KP (p\u2009=\u20090.74) detection and ESBL-EC and ESBL-KP positivity were similarly distributed between male and female gender (p\u2009=\u20090.48).Table\u00a0p\u2009=\u20090.75). In total, 187 ESBL-producing isolates (168 ESBL-EC and 19 ESBL-KP) were identified from 178 ESBL-positive children. Nine of the children had both ESBL-producing E. coli and K. pneumoniae . 
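The carriage comparison reported here can be checked with a simple contingency-table analysis. The sketch below uses counts derived from the proportions in the text (86 carriers among 205 children with diarrhoea and 92 among 230 without, giving 178 of 435 overall); treat these derived counts as illustrative rather than the authors' exact dataset.

```python
# Illustrative chi-square / Fisher's exact analysis of ESBL carriage by stool type.
import numpy as np
from scipy import stats

#                  carrier  non-carrier
table = np.array([[86, 205 - 86],     # children with diarrhoea
                  [92, 230 - 92]])    # children without diarrhoea

chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.3f}, dof = {dof}, p = {p:.3f}")   # no significant difference

# The paper's rule: fall back to Fisher's exact test when an expected
# cell count is 5 or less.
if (expected <= 5).any():
    odds, p = stats.fisher_exact(table)
    print(f"Fisher's exact p = {p:.3f}")
```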
It was observed that almost all children without diarrhoea who were positive for ESBL bacteria had ESBL-EC, whereas ESBL-EC was present in 89.5% (n/N\u2009=\u200977/86) of children with diarrhoea who were positive for ESBL bacteria. Almost all the ESBL-KP isolates (n\u2009=\u200913) were observed in children that had diarrhoea .Out of 435 children, 178 (40.9%) carried ESBL-producing isolates (ESBL-EC and ESBL-KP) among 187 ESBL-EC and ESBL-KP isolates from 178 children. The majority were blaCTX-M positive , while a few carried blaCTX-M/blaTEM , blaSHV , or blaTEM . In two of the phenotypically confirmed ESBL isolates, none of the three genes were identified.Figure\u00a0blaCTX-M, the majority of them were of group blaCTX-M-1 . Sequencing of the PCR product revealed blaCTX-M-15 as the most common type of the blaCTX-M-1 group, and the remaining types were blaCTX-M-3 and blaCTX-M-28 . Among the genes in group blaCTX-M-9, more than half were blaCTX-M-27 while the remaining were blaCTX-M-14 and blaCTX-M-14b . Among isolates carrying the prevalent ESBL gene blaCTX-M genes (n\u2009=\u2009178) were equally distributed between the diarrhoea and non-diarrhoea stool types. blaTEM (n\u2009=\u20092) and blaSHV (n\u2009=\u20092) were rare and isolated from non-diarrhoea and diarrhoea stools, respectively. Among the blaCTX-M genes, blaCTX-M-1 was also distributed uniformly between the diarrhoea and non-diarrhoea samples whilst type blaCTX-M-9 was all isolated from non-diarrhoea stool samples . In terms of distribution of ESBL genes identified, blaCTX-M-15 were found to be the most common type in both bacterial species, with 85% (n/N\u2009=\u2009143/168) and 79% (n/N\u2009=\u200915/19) abundance in ESBL-EC and ESBL-KP, respectively. The genes blaCTX-M-3, blaCTX-M-28, and blaSHV-12 was found in both organisms, whereas blaCTX-M-27, blaCTX-M-14 and 14b, and blaTEM were only found in ESBL-EC. In terms of the distribution of beta-lactamase genes in ESBL-EC and ESBL-KP, ESBL-EC and ESBL-KP of this study showed resistance to antibiotics commonly used in Ghana Fig.\u00a0a and b. Klebsiella pneumoniae and Escherichia coli is one of the drivers of nosocomial and community infections, globally [Carriage of ESBL-producing globally , 8. NotwOur data shows that both children with and without diarrhoea are ESBL carriers, serving as a potential transmission reservoir. The overall prevalence of ESBL producers among the study population was 40.9% Table\u00a0. Out of blaCTX-M-15 as the most prevalent gene type in both diarrhoea and non-diarrhoea stool samples for both ESBL-EC and ESBL-KP isolates in our study Fig.\u00a0, which iE. coli and K. pneumoniae were not assessed. Data on antibiotic usage in children was not assessed in this study. Also, a test to ascertain whether the genes found were plasmid- or chromosomal-associated was not done. This is important in order to understand the impact of the resistance genes on the spread of AMR. Plasmid genes, for example, are mobile and easy to transfer to other bacteria and bacterial species.In this study, sample size was not calculated prior to the study, and hence most of the calculations are exploratory and of a descriptive nature. Due to the low number of samples, e.g., for ESBL-KP, the results have to be interpreted with caution. Risk factors for the acquisition of the carriage of ESBL-producing blaCTX-M-15 as the most prevalent gene in both diarrhoea and non-diarrhoea stools of children. It also reports for the first time the presence of blaCTX-M-28 in Ghana. 
The high frequency of ESBL genes found in this study is alarming, considering the limited diagnostic and treatment options that are available in resource-poor countries such as Ghana. The steady and continued increase of ESBL-producing bacteria and the associated antibiotic resistance has the potential to further select for even more resistant pathogens, such as carbapenem-resistant bacteria. We recommend the need for routine screening of ESBL-producing pathogens to optimize the use of antibiotics, and we encourage additional studies to evaluate these emerging genes and their risk factors in Ghana.This study highlights the high frequency of stool carriage of ESBL-EC and ESBL-KP among children with or without diarrhoea in the Agogo community. Our study highlights the importance of this population as a possible reservoir and suggests that it may pose a risk for the transmission of drug resistance throughout the wider community. The study also found"} +{"text": "To explore the boost effect on ameliorating functional constipation in elderly patients through empowerment-based, healthy dietary behavioral intervention.In this randomized parallel group study, elderly patients with functional constipation were recruited and assigned to the experimental and control groups at a ratio of 1:1. The control group received routine intervention. The experimental group received 3-month empowerment-based intervention. The results were evaluated based on the Healthy Lifestyle and Personal Control Questionnaire (HLPCQ) and Cleveland Clinic Constipation Score (CCS). GraphPad Prism (Version 9) software was used for the statistical analysis.As the world's population ages, functional constipation in the elderly has attracted widespread attention. The practical behavioral intervention to ameliorate constipation are worth exploring.Sixty elderly patients with functional constipation.P > 0.05). After the intervention, the scores of HLPCQ (77.90 \u00b1 14.57 vs. 61.11 \u00b1 13.64) and CCS (7.48 \u00b1 3.73 vs. 9.70 \u00b1 3.07) in the experimental group were significantly higher than those in the control group (P < 0.05).The study results showed no significant difference in the baseline data between the two groups (The results showed that empowerment-based intervention can effectively strengthen the healthy dietary behavior of elderly patients. Through patient empowerment, the subjective initiative and willingness to communicate were boosted in the experimental group. Their symptoms of functional constipation improved considerably better than in the control group. Constipation, as one of the geriatric syndromes, affects up to 11.7% of older citizens every year, and its prevalence is still rising . ComplicTherefore, a concept of empowerment is introduced here. In the medical field, empowerment means that people have the ability to control their own lives and improve their health by enhancing their ability to solve important problems . RespondThis randomized parallel group study was conducted at a tertiary hospital in Wuxi, China from 2020 to 2021. The study was single-blinded for the participants but not for the researchers, and a randomized list was generated by a statistician who used a computer to determine the group assignment of every participant. The main outcome indicator was healthy behavior, and the secondary outcome indicators were constipation symptoms. See Sixty participants with functional constipation were recruited from July to October 2020. 
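The constipation trial above states that a statistician used a computer-generated list to assign participants 1:1. Purely as a minimal sketch of that step (not the authors' actual procedure or software), a fixed-ratio allocation list for the 60 participants could be produced as follows; the seed and identifier scheme are assumptions.

```python
# Hypothetical 1:1 randomized allocation list for 60 participants.
import random

random.seed(2020)                       # fixed seed so the list is reproducible
ids = list(range(1, 61))                # enrolment numbers 1-60 (assumed scheme)
arms = ["experimental"] * 30 + ["control"] * 30
random.shuffle(arms)                    # random permutation keeps the 30/30 ratio

allocation = dict(zip(ids, arms))
for pid in ids[:5]:
    print(pid, allocation[pid])
```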
A total of 30 participants were allocated to the experimental group, which received empowerment-based behavioral intervention for 3 months and 30 participants in the control group were given routine care.The inclusion criteria were as follows: (1) older adults (more than 65 years of age), (2) the Rome IV criteria for functional constipation including dyssynergic defecation and slow colonic transit, officially published by the American Gastroenterology Association in 2016 , (3) avaThe exclusion criteria were patients (1) with evolving cancer, serious cognitive problems, a psychiatric disease, or a serious medical or health condition that would hinder their ability for defecation, (2) having received abdominal and intestinal surgery, (3) with intestinal obstruction by electronic enteroscopy, (4) with rectal prolapse and internal hemorrhoids rated as grade 3\u20134 and (5) taking drugs affecting the intestinal motility.n1 = n2 = [(Z1 \u2013 Z\u03b1/2 + Z1\u2212\u03b2)2* and based on the following formula: (1) Internal factors: improve patients' internal beliefs and attitudesThe intervention was mainly based on empowerment theory and was conducted around three elements of empowerment. Patients in the experimental group diagnosed with functional constipation were managed by a constipation treatment panel, which consisted of 6 nurses, 4 doctors and 3 nutritionists who had qualification certificates and rich work experience. In addition, 4 graduate students provided assistance. The research contents to meet the three elements of empowerment were as follows:Patients are encouraged to express their feelings and dietary preferences. Through explanation and goal setting, patients can obtain a clear understanding of functional constipation and establish confidence in overcoming constipation.(2) Interaction elements: enhances the interaction between patients and the environment and promotes the acquisition of patients' knowledge, skills and resourcesMedical staffs and patients worked together to analyse the causes and mainly sorted out the following factors of functional constipation: (1) Medical staff: lack of cooperation of specialized nurses, doctors and nutritionists; (2) patient: the general diet regulation failed, the large adjustment cannot be adhered to and the healthy dietary behavior was poor; (3) methods: lack of individualized home assessment and professional guidance; (4) resources: lack of health education materials for constipation; lack of resources and skills to effectively ameliorate constipation.The corresponding intervention to promote patient empowerment were as follows: (1) medical staff: establishment of a constipation treatment panel; (2) patient: establishment of a plan based on empowerment theory, interaction with patients by high-fiber food models and cards during health education and testing of patients' knowledge and familiarity diet matching skills; (3) methods: the patients were required to record their 24 h fluid intake, dietary intake, defecation and exercise contents in the healthy dietary behavior log. 
In addition, health lectures were carried out and a WeChat official account pushed relevant knowledge on the phone; (4) resources: provided patients with paper and electronic education materials and prebiotic food.(3) Behavior elements: promote patients to adopt healthy dietary behaviorThe nutritionist designed the patient's recipe based on the recommended nutrient intake in the P < 0.001, and the KMO value was 0.797 by Kaiser-Meyer-Olkin measurement, suggesting a good reliability and validity in applying HLPCQ was developed by Darviri in 2014, and it focuses on the evaluation of the healthy behavior of patients . The totng HLPCQ . In adding HLPCQ . The totying CCS . The tooThe distribution and collection of questionnaires were the responsibility of five designated researchers. In order to reduce the bias in the research process, the researchers were able to unify the guidance language and accurately master the knowledge related to functional constipation, scale content and data collection methods. General information about patients can also be obtained through medical care records and archives of the community. Besides, all subjects included in the study completed the 3-month trial, and all data collection was completed.t-test was performed. The enumeration data were statistically described by frequency and percentage and analyzed using the Chi-square test, and P < 0.05 was defined as statistically significant.The statistical analysis of the data obtained from the study was performed using GraphPad Prism(Version 9) software. The measurement data with normal distribution were expressed as mean (M) and standard deviation (SD) and a Student's P > 0.05) (The > 0.05) .P > 0.05). The overall average in the experimental group (77.90 \u00b1 14.57) was significantly greater than that in the control group (61.11 \u00b1 13.64) after the intervention (P < 0.05). The scores in four out of five dimensions, including healthy dietary choices (21.48 \u00b1 3.15 vs. 12.55 \u00b1 2.71), dietary harm avoidance (12.68 \u00b1 2.94 vs. 8.53 \u00b1 1.50), daily routine (23.37 \u00b1 4.72 vs. 14.11 \u00b1 3.89) and social and mental balance (14.72 \u00b1 2.27 vs. 11.01 \u00b1 1.64), showed that the empowerment results of the experimental group were significantly better than those of the control group (P < 0.05). Among these, 14/26 questions received higher scores in the experimental group than in the control group (P < 0.05).P > 0.05). After intervention, the overall mean score of the experimental group decreased to 7.48 \u00b1 3.73, which was less than that of the control group (9.70 \u00b1 3.07) (P > 0.05). Significantly, 5/8 clinical problems, including defecation frequency (0.20 \u00b1 0.41 vs. 1.61 \u00b1 0.62), difficult defecation (1.10 \u00b1 1.08 vs. 1.99 \u00b1 1.08), incomplete evacuation (1.17 \u00b1 1.03 vs. 1.99 \u00b1 1.15), straining period of defecation (0.86 \u00b1 0.74 vs. 2.33 \u00b1 1.10) and defecation failure (0.58 \u00b1 0.73 vs. 0.96 \u00b1 0.58), in the experimental group received lower scores than those in the control group (P < 0.05).In this study, the majority of patients with functional constipation were women, which is consistent with the domestic and international epidemiological data . The reaThe overall average scores of healthy behavior of 60 patients with functional constipation were lower than Darviri's survey results on the healthy population , indicatOn the basis of the 24-h review, the patient adhered to our recommendations. 
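The between-group comparisons reported above can be reproduced from the published summary statistics alone. The sketch below assumes a two-sided independent-samples t-test with 30 participants per arm, as described in the statistical-analysis section; it is a check on the reported direction and significance, not a reanalysis of the raw data.

```python
# Recomputing the post-intervention HLPCQ and CCS comparisons from mean, SD, n.
from scipy import stats

hlpcq = stats.ttest_ind_from_stats(mean1=77.90, std1=14.57, nobs1=30,
                                   mean2=61.11, std2=13.64, nobs2=30,
                                   equal_var=True)
print(f"HLPCQ: t = {hlpcq.statistic:.2f}, p = {hlpcq.pvalue:.2e}")   # p well below 0.05

ccs = stats.ttest_ind_from_stats(7.48, 3.73, 30, 9.70, 3.07, 30)
print(f"CCS:   t = {ccs.statistic:.2f}, p = {ccs.pvalue:.3f}")       # also p < 0.05
```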
After the intervention of empowerment-based behavioral intervention, improvements were observed in the patients' healthy dietary behaviors, such as being careful about the amount of food put on their plate, calculating meal calories, consuming organic foods and whole-wheat products and avoiding packaged or fast food. Individuals who do not eat whole grains, fresh fruits and vegetables every day have a high prevalence of constipation . In the The main purpose of patient empowerment is to encourage patients to adopt healthy dietary behavior and the final outcome is the improved symptoms of constipation. Compared with the control group, the constipation symptoms of patients in the experimental group, including defecation frequency, difficult defecation, incomplete evacuation, straining period of defecation and defecation failure, significantly improved. Rasmussen reported that the defecation frequency during daytime is the most prominently associated with health-related quality of life . TherefoThis study showed that empowerment-based intervention plays a considerable role in improving the healthy dietary behavior of elderly patients with functional constipation. The intervention yielded a significant positive performance in the experimental group, addressed the problems experienced by elderly patients due to functional constipation, resulted in a decreased constipation severity and improved the quality of life. Besides, strengthening the subjective initiative of elderly patients through patient empowerment can reduce patients' over-dependence on medical resources. Therefore, this study provides a non-pharmacological treatment reference for elderly patients suffering from functional constipation.The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.The studies involving human participants were reviewed and approved by the Ethics Committee of Wuxi Third People's Hospital (No. IEC201803001). The patients/participants provided their written informed consent to participate in this study.XZ, FZ, DL, and JC designed the trial. YX and XZ conducted the study. YX, XZ, DL, and QW analyzed and interpreted the data. XW, FZ, DL, and HC were responsible for project administration and supervision. XW, XZ, FZ, and DL wrote the first draft of the manuscript and had primary responsibility for the manuscript's final content. All authors critically revised the manuscript. All authors contributed to the article and approved the submitted version."} +{"text": "GUEST EDITOR Genes & Development. Here we celebrate over 50 years of Terri's many accomplishments and awesome staying power, first as a bench scientist with a stellar decades-long career in molecular genetics in Boston and Cold Spring Harbor, and then as Editor of G&D since 1989\u2014where did the time go?It is difficult to imagine the scientific publishing landscape without Terri Grodzicker at the helm of Bacillus subtilis spores hitching a ride on my clothes). Terri and I became lifelong friends, colleagues, and collaborators, and eventually comentors of graduate students after I moved to Berkeley for my first and only faculty position. In those 3 years at CSHL, I came to appreciate Terri's exacting scientific standards, her exceptional ability to consistently formulate the original ideas that guided her own work, and her capacity to perceive the significance of discoveries by others. 
For as long as I have known her, and in the many contexts I have seen her operate, I have been struck by the high quality of her work and character.My high regard for Terri dates back to my very first weeks at Cold Spring Harbor Laboratory in 1976, where I, as a newly minted PhD trained in bacterial and phage biochemistry, was attempting to do mammalian tissue culture. With firm but gentle guidance from Terri, I finally managed after some weeks not to contaminate CV1 monkey cells with bacteria . Terri early on saw the potential impact of the idea that in addition to TBP, there were components we called TBP-associated factors (TAFs) that make up an integral part of the TFIID complex. Hence, Terri published several of our first papers on human and Drosophila TAFs, having recognized their potential role in both TATA-box-containing and TATA-less promoters despite a fair amount of initial skepticism about their importance . Indeed, it was not generally accepted at the time that TBP may not be the only or even the most functionally important subunit of TFIID. Likewise, the idea that a TAF\u2013TBP ensemble might serve as a new class of coactivators needed to communicate between sequence-specific enhancer-bound transcription factors and the core promoter machinery seemed overly complex and was deemed somewhat \u201cinelegant.\u201d Terri's willingness to take the risk and publish these findings speaks volumes about her vision and boldness to buck the popular trend. Most critically, as Editor, she was knowledgeable and self-assured enough to recognize potential biases of reviewers and the conviction to override misguided recommendations\u2014an intellectual independence that is indispensable for good editorial judgment but is sadly in short supply these days.Having published 46 papers in G&D from our lab. One review challenged the invocation of phase separation in high-profile journals engorged with reports of the often misinterpreted and trendy role of liquid\u2013liquid phase separation (LLPS) in cellular processes. Indeed, basic experiments to rigorously demonstrate even the occurrence of bona fide LLPS in vivo had been largely absent from the majority of such publications. Equally glaring was the propensity to infer mechanism or causation from the mere presence of puncta in cells without ascertaining what function phase separation performs under physiologically relevant cellular contexts for diverse biological reactions, including transcription. The G&D review laid bare many of these issues and triggered some in the field to reflect more rigorously about the true role of LLPS and even to question whether some observed puncta are, in fact, phase-separated compartments at all.By way of further illustration, Terri's perceptive intuition and willingness to go against the grain can be seen in two review papers published more recently by G&D had to exercise some courage to risk publishing innovative ideas that squarely went against the grain of popularly accepted paradigms.Another example of Terri's bold editorial thinking was her willingness to publish a rather unconventional model that challenges the long-held and seemingly established concept of transcription factor-mediated DNA looping as the primary mechanism for long-distance enhancer\u2013promoter interactions. The TF activity gradient (TAG) model proposed by G&D, we debated who might be a candidate with the \u201cdecisiveness and backbone\u201d to go against engrained trends and make tough, sometimes risky decisions. 
Having collaborated closely with Terri on a variety of successful research projects, I was convinced she had that strength of conviction as well as the scientific chops to transcend the conventional and strike into new directions as Editor of G&D. After several decades with Terri at the helm, we all can see that she not only has exercised her intuition for good science, but most importantly, has time and again acted on her strength of conviction. For this, the many students, postdocs, and early career scientists seeking to publish paradigm-shifting research and bleeding-edge findings thank her. We all owe Terri a debt of gratitude for helping to launch many careers rather than crushing aspirations to toe the line of popularity and trendy publishing.In the early days when CSHL was seeking an inaugural editor for The current landscape of traditional scientific publishing is in clear turmoil; some would say it is a badly broken system exacerbated by the increasing challenge of finding truly savvy, experimentally sophisticated Editors with astute scientific judgment among the ever-expanding ranks of commercially motivated scientific journals. Once again, we find ourselves at a critical juncture of the scientific enterprise as top-tier journals pose major obstacles to rapidly and robustly vetted papers that are not merely chasing the latest trends. We can only hope that a few next-generation Terri Grodzickers will step up to steer scientific publications into a more effective, transparent, and equitable future. These are undoubtedly big shoes to fill, and we will profoundly miss the reliable, inspired judgment and good taste of Terri Grodzicker."} +{"text": "All four parameters had a statistically considerable effect on studied responses. Blanching for 5 minutes at 1% of NaOH solution, using an appropriate concentration of antibrowning agent (5% Na2S2O5), and drying at 70\u00b0C with 30% of relative moisture can lead to better preservation of grapes' appearance and quality (chromaticity (C\u2217) and color change (E\u2217)). Also, in these conditions, a lower browning rate (14.48%), a lower 5-hydroxymethylfurfural content (12.40\u2009mg/100\u2009g DW), and a higher level of polyphenols (135.79 \u00b1 13.17\u2009mg GAE/100\u2009g DW) and flavonoid content (57.81 \u00b1 3.08\u2009mg Qeq/100\u2009g DW) have been recorded while meeting international standards for SO2 content and microbial quality.Drying is a common technique in the agrifood industry, but insufficient control in the drying process can result in changes to the fruit's appearance due to physiological damage during processing. The aim of this study was to investigate the impact of pretreatment and drying process parameters on Moroccan raisins' quality and safety. The experimental levels of pretreatment factors and drying temperature were defined at the beginning. Subsequently, a 2 Imports have surged consistently over the years, soaring from 563 tons in 2016 to a staggering 12,418 tons in 2021 [2CO3 [Thompson Seedless grapes, using a 24-factorial design to analyze the effect of factors like NaOH concentration, antibrowning agent concentration, temperature, and relative humidity on drying time, color change (E\u2217), chromaticity (C\u2217), and browning rate. 
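The 2⁴ factorial arrangement described above pairs each of the four factors with two levels. As a sketch of how that design matrix can be laid out, the code below uses the Na2S2O5 concentrations (2% and 5%) and drying temperatures (70 and 80 °C) mentioned in the text; the NaOH and relative-humidity levels shown are placeholders, since the full level table is not reproduced in this excerpt.

```python
# Laying out a 2**4 full-factorial design for the four pretreatment/drying factors.
from itertools import product
import pandas as pd

levels = {
    "NaOH_pct":      [0.5, 1.0],   # X1 - assumed low/high blanching levels
    "Na2S2O5_pct":   [2.0, 5.0],   # X2 - antibrowning agent concentration
    "temperature_C": [70, 80],     # X3 - drying temperature
    "rel_humidity":  [30, 50],     # X4 - assumed relative-humidity levels
}

design = pd.DataFrame(list(product(*levels.values())), columns=levels.keys())
print(design)                      # 2**4 = 16 experimental runs

# Coded (-1/+1) form, which is what a first-order polynomial model
# (main effects plus interactions) would be fitted to for each response Y1-Y4.
coded = design.apply(lambda col: (col - col.mean()) / (col.max() - col.mean()))
print(coded.head())
```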
The final step entails evaluating the quality of dried grapes under optimal conditions, ensuring compliance with international standards for dried grapes.

Grapes hold a prominent position among globally cultivated fruits, with 84.7 million tons produced worldwide in 2021. Earlier investigations of grape drying have examined pretreatments such as Na2CO3 and NaOH dipping prior to dehydration.

All chemical products used in this work were provided by Sigma-Aldrich, unless otherwise stated in the text. Grapes (Vitis vinifera Thompson Seedless) were collected at commercial maturity from three Moroccan localities: Meknes, El Hajeb, and Nador. The largest nearby market was the source of three sets of fruit samples taken during specific time intervals. The fruits chosen for sampling were scrutinized for firmness, color, and any visible blemishes before being placed in polyethylene bags. Subsequently, the fruit samples were kept in cold storage at 4°C until processing or analysis. In order to select grapes with a high potential for dehydration, the grapes from the three localities were first characterized.

In order to determine optimal conditions for drying grapes, a 2⁴ factorial experimental design was used with four factors: NaOH concentration (X1), antibrowning agent concentration (X2), temperature (X3), and relative humidity (X4), with their respective levels presented in the experimental-design table. The studied responses were drying time (Y1), color intensity change E∗ (Y2), chromaticity C∗ (Y3), and browning rate (Y4). The experiments were performed according to the corresponding design matrix.

The surface color of the dried grape samples was measured with a Minolta CR-5 spectrophotometer to determine lightness (L∗), redness (a∗), and yellowness (b∗). Saturation (chromaticity, C∗ = √(a∗² + b∗²)), total color change (E∗ = √((ΔL∗)² + (Δa∗)² + (Δb∗)²)), and browning rate were calculated from the L∗, a∗, and b∗ color coordinates.

Water content was measured with a moisture determination balance; briefly, the sample is placed in a small stainless-steel tray whose weight is tared by the instrument, heated in the chamber to 100°C, and weighed repeatedly until a stable value is reached. Water activity (aw) was determined at 25°C with an aw meter. Grape samples were homogenized and filtered; °Brix was then estimated using a refractometer at 25°C. For pH, a quantity of 20 g of grape sample was homogenized for five minutes and filtered through a paper filter, and the filtrate was analyzed with a pH meter (Weilheim, Germany).

To rehydrate the samples, they were immersed in distilled water at 25°C for five hours, with a grape-to-water ratio of approximately 1:30, according to the methodology employed previously. The rehydration coefficient (RC) was then calculated.

Total polyphenols were extracted from 2 g of sample according to the method described previously. An aliquot of extract was mixed with 250 μl of the Folin–Ciocalteu reagent and 500 μl of Na2CO3 (20%). The sample was thoroughly mixed and incubated in a water bath for 30 min at 37°C, and the absorbance at 750 nm was measured with a spectrophotometer (V-630 UV-Vis, JASCO, USA). The result was expressed as milligrams of gallic acid equivalent per 100 g of dry weight (mg GAE/100 g DW). For the total flavonoid content (TFC), an aliquot of the extract was added to 1500 μl of methanol (96%), 100 μl of 10% aluminum chloride (AlCl3), 100 μl of sodium acetate (1 M), and 2800 μl of distilled water.
The mixture is stirred and incubated at room temperature in the dark for 30\u2009min. The absorbance is measured at 415\u2009nm using a spectrophotometer (V-630 UV-Vis Spectrophotometer by JASCO (USA)). The TFC are expressed in milligram of quercetin equivalent/100\u2009g of dry weight, referring to calibration curve of quercetin.TFC were determined using the method explained by ; 500\u2009\u03bcl The process outlined by De-Kok and Graham involved\u03bcm HPLC filter, and then injected in HPLC ultimate 3000 equipped with a column Lichrosorb RP-18 . A methanol water solution was employed at a flow rate of 1.0\u2009ml/min, and the detection was done at 285\u2009nm.The 5-hydroxymethylfurfural (HMF) analysis was achieved by using high-performance liquid chromatography (HPLC). Briefly, in the flask (50\u2009ml), 5\u2009ml of 0.3\u2009N oxalic acid was added to 5\u2009g of grape sample. The mixture was incubated for 60\u2009min in boiling water bath. After incubation, 5\u2009ml of 40% trichloroacetic acid was added and completed with distilled water until the mark, then filtered through a 0.45\u20092) content of the dried grapes was measured according to Monnier-William's distillation method [2 per kilogram of dried grapes.The sulfur dioxide and homogenized with a stomacher for 60 seconds at 230\u2009rpm. The resulting samples were processed for the enumeration of aerobic colonies [w/v ratio) and homogenized with a stomacher at 230\u2009rpm for 60 seconds. These samples were processed for the detection of Listeria monocytogenes [The microbiological characterization of dried grapes stored under optimum conditions involved a one-year storage period at room temperature in airtight plastic bags. Two 10\u2009g portions of each sample were placed into a sterile stomacher bag and processed according to the following protocol. One portion was diluted with buffered peptone water .To confirm the identification of a representative number of colonies from each medium, matrix-assisted laser desorption/ionization-time of flight mass spectrometry was employed according to Trabelsi et al. . A portiGrapes from each sample were homogenized, and then, three subsamples were selected for each of the three assays that were done. The ANOVA test was conducted by SPSS software, and results were shown as mean\u2009value \u00b1 standard\u2009deviation. The JMP 17 program was used to carry out the statistical analysis for the experimental design. A first-degree polynomial model was fitted to the independent and dependent variables. The impact of the variables on the various answers was then examined using the regression coefficients.Thompson Seedless variety from distinct regions and their biochemical characteristics and dehydration potential were evaluated (p < 0.05) impacted total soluble solids, chroma C\u2217, chlorophyll a and chlorophyll b, phenolic compounds, and PPO activity , drying tests were done at various drying temperatures with different concentrations in the HunterLab parameters between fresh and dried grapes across all combinations tested. And according to the analysis of variance, the factors drying temperature had a higher impact on color intensity change and chromaticity C\u2217 (C\u2217 and highest intensity of color change E\u2217 which indicated a higher level of color degradation were observed at higher temperature (80\u00b0C). And according to 2S2O5 was more efficient in all concentration than the KHSO3. 
Furthermore, 2% of Na2S2O5 was the lowest concentration that preserve a characteristic color of Thompson Seedless grapes. Based on the results in 2S2O5 concentration (2% and 5%) were selected to establish a factorial experimental design, in order to study their effects on the quality of our dried samples.In order to select the drying temperature and the best antibrowning agent between , color intensity changes E\u2217 (Y2), chromaticity C\u2217 (Y3), and browning (Y4) responses, respectively, in relation to the significant variables and to the interaction between them at p < 0.05. Values of R2 represent the fit between the experimental data and the proposed model. The experimental design and the obtained results for drying treatments are presented in Y1) is crucial for the quality and safety of raisins, as it prevents spoilage and ensures safe consumption. Insufficient drying time can lead to bacterial growth, while excessive time can result in loss of flavor and nutrition. Meticulous monitoring of drying time is necessary to maintain appropriate moisture levels to meet the international standards of dried fruits [X3) and decrease in relative humidity (X4) have a positive effect on the drying time of grapes (Thompson Seedless). The minimum drying duration required to reduce the moisture content to 18% [Drying time (d fruits . Numerict to 18% was 23 ht to 18% whom rept to 18% claimed E\u2217), which refers to the strength or magnitude of color change observed between two different color states. And Equation (C\u2217 can be very bright and saturated. Equations (2S2O5 concentration (X2) and drying temperature (X3) result in less intensity of color change (E\u2217) and better preservation of chromaticity C\u2217, respectively. Increasing the concentration of antibrowning agents (Na2S2O5) reduced grape color degradation (decrease of chromaticity C\u2217). Equation presentsEquation describequations and 7) E\u2217), whiY4), which is a quantitative measure of the color alteration of dried fruits, used to evaluate their quality [2S2O5 concentration and drying temperature (X3) and decrease in relative humidity (X4) result in less surface browning of dried grapes. Color preservation and reducing browning surface of raisins need an increase of Na2S2O5 concentration (X2) and drying temperature (X3) and a decrease in relative humidity (X4) (equations (Y1) and the contact time of the grapes with the oxygen in the air, which is an important element for enzymatic browning reactions. Similarly, [2S2O5)) is often used to release sulfur dioxide (SO2) [vr. Thompson Seedless), the use of these agents can help to maintain the natural golden color during the drying process [2S2O5) reacts with water (H2O) (during soaking step) to form sodium hydrogen sulfite (NaHSO3) in aqueous solution and sulfur dioxide (SO2) in gaseous form. The higher Na2S2O5 concentration (5%) can preserve grape color from degradation. These results can be explained by the fact that a higher concentration of Na2S2O5 solution preserves more golden color through direct enzymatic inactivation of PPO, which occurs due to a modification of the tertiary structure of the PPO enzymes caused by the reduction of disulphide bond [Equation presents quality . This me quality . Equatioquations , 7), an, anY4), quations ). The inmilarly, observedmilarly, to be thde (SO2) , commonl process . Sodium ide bond . 
Furtheride bond .2S2O5) are compared with grapes treated under the same conditions without the use of the antibrowning agent (0%) (control) [aw) and water content are important parameters to consider grapes as they affect the quality, shelf life, and safety of the product [aw values of dried grapes were 17.90 \u00b1 1.80% and 0.52 \u00b1 0.06, respectively. Those results are in accordance with the Codex standard for dried grapes [2S2O5 concentration dry faster than the untreated grapes (p < 0.05) compared to the nontreated samples. The level of TPC (135.79 \u00b1 13.17\u2009mg AGE/100\u2009g DW) and TFC content (57.81 \u00b1 3.08\u2009mg Qeq/100\u2009g DW) was higher by 34.50% and 33.62%, respectively, compared to the control samples. This difference can be explained by the fact that antibrowning agent used can limit the degradation of TPC by deactivation of PPO and POD enzymes responsible for degradation of phenolic compounds [2S2O5. Thus, it can be concluded that browning is primarily caused by nonenzymatic browning (Maillard's reaction) at optimal conditions.Despite the technological effectiveness of sulphites on the preservation of the dried fruit quality, their use is regulated by the organisms such as Codex Alimentarius standards . In ordecontrol) . The res SO2/kg) . Water a product . The resd grapes which spd grapes . Previouompounds . BrowninS. Typhimurium, S. anatum, and coliforms, on various fruits, vegetables, herbs, and spices resulting from different drying techniques. This is why it was crucial to evaluate the microbial quality of our raisins. Based on the results presented in 2S2O5) which was less than the bacterial growth observed in the control sample (1460 UFC/g). No Enterobacter, coliforms, or E. coli were detected in any of the samples, which is in accordance with the maximum tolerated value of 102\u2009UFC/g listed by [4\u2009UFC/g. These results were far more less than those published by [3 to 1.5 \u00d7 104\u2009UFC/g) and fecal coliforms (1.1 \u00d7 103 to 5 \u00d7 103\u2009UFC/g) were found to be high in most varieties of raisins, which could be attributed to poor conditions during preparation, transport, and marketing. The presence of yeasts in the samples of raisins was also found with high counts. For the identification by Maldi-Tof-MS, we managed to identify four species of Bacillus family with an accuracy more than 90%: Bacillus pumilus, Bacillus subtilis, Bacillus amyloliquefaciens, and Cytobacillus horneckiae. Other species were detected but with low precision (less than 70%); we did not mention them in this study.While research has extensively explored the dehydration mechanisms leading to the inactivation and resilience of microorganisms during drying , a growiisted by . Pathogeished by , who stuished by testifieThompson Seedless. These conditions resulted in a shorter drying time and lead to improved preservation of total phenol content, total flavonoid content, and color, while meeting international standards for SO2 content and microbiological safety.Grape's conservation is challenging due to its short shelf life and susceptibility to spoilage. Drying is a common technique used in agrifood industry, but insufficient control in the drying process can lead to damaged grapes. 
This study demonstrated that using optimized condition and an appropriate concentration of antibrowning agent (5% metabisulfite concentration) can lead to better preservation of appearance and quality of grapes vr."} +{"text": "The current environment of volatility, uncertainty, complexity and ambiguity has created a prolonged state of uncertainty for the Jordanian hotel industry. Crisis management leadership is one of the most important attributes for a hotel. The main aim of this study is to evaluate the mediating role of crisis management, the moderating role of a leader\u2019s experience, their relationship to styles of leadership and the resultant performance of Jordanian hotels. Research was based on a self-distributed questionnaire survey of 119 respondents currently holding managerial positions in Jordanian 3 to 5 star hotels. Partial Least Square Structural Equation Modelling was then employed. The findings suggest a transformational leadership style and crisis management experience are the most important attributes for a leader to sustain hotel performance during a crisis. Leaders with a transactional leadership style need crisis management skills to sustain hotel performance rather than experience which is not as important in their case. This paper proves that different leadership styles have a different influence on a hotel\u2019s survivability during a crisis. Therefore, a hotel\u2019s management group must ensure that a leader with an appropriate leadership style takes control during these situations. By combining leadership attributes, experience, and crisis management in a comprehensive framework to ensure sustainable hotel performance in the face of a crisis, this study adds to the body of knowledge on leadership and crisis management practices. Furthermore, despite Jordan\u2019s many attractions, Jordan has been unable to make use of its touristic potential due to the large number of crises, particularly the fallout from terrorist attacks that have adversely affected its tourism industry . AccordiIn Jordan, the hotel sector was the major player in the tourism industry, accounting for around 30% of the workforce and nearly 25% of tourism revenue . As a reAccording to an annual report issued by the Jordan Hotel Association , there wThe tourism industry is very vulnerable to crises. For example, the COVID-19 pandemic is one of the most recent critical problems for Jordan\u2019s tourism sector. The pandemic has been deemed a global public health crisis and has led to many restrictions on movement and travel. This has had a serious effect on the hotel sector in Jordan which is experiencing a drastic fall in business. The pandemic, combined with terrorist attacks and major socio-political disturbances in the Middle East, have had a truly damaging effect on the tourist industry. In Jordan, a significant and steady rise in the number of tourist arrivals was experienced from 2005 to 2010, but the impact of the Middle East crisis, which worsened at the start of 2010, resulted in a significant drop in tourist numbers from 8 million in 2010 to 5 million in 2019 [The decline in tourist numbers has led to the low occupancy of hotel rooms and a significant loss of hotel stock and revenue , resultiBowers, Hall & Srinivasan , claim t22.1Early studies and analysis have shown that organisational performance and transformational leadership are significantly related. 
Spitzbart , found tH1The transformational leadership style has a positive relationship with hotel performance in the Jordanian hotel sector.Howell & Hall-Merenda assert tElenkov studied Chiang & Wang state thThe following hypothesis for research is therefore proposed:H2The transactional leadership style has a positive relationship with hotel performance in the Jordanian hotel sector.The following hypothesis for research is therefore proposed:2.2H3Crisis management has a positive relationship with hotel performance in the Jordanian hotel sector.Crises can adversely affect any organisation . Crisis 2.3H4The transformational leadership style has a positive relationship with crisis management in the Jordanian hotel sector.Transactional leadership employs monitoring, organisation and performance management practices to improve employee performance through the use of both incentives and sanctions. It supports the preservation and maintenance of processes and practices for crisis management created by top management. Consequently, the findings of Hasan & Rjoub and AlkhMoreover, Heuvel concludeH5The transactional leadership style has a positive relationship with crisis management in the Jordanian hotel sector.Previous studies such as those by Burns and Zhan2.4By proposing crisis management as the mediator, it will assist leaders to manage a crisis; this is because crisis management is a practice that takes into account both internal and external determinants that positively strengthen the company during the crisis. As a consequence, it can be argued that, in order for leaders with transformational or transactional leadership styles to improve their firm\u2019s performance during a crisis, they need to have crisis management skills to help them manage situational factors . This stH6Crisis management has a mediating effect on the relationship between transformational leadership style and hotel performance in the Jordanian hotel sector.H7Crisis management has a mediating effect on the relationship between transactional leadership style and hotel performance in the Jordanian hotel sector.From these arguments, the following research hypotheses are put forward:2.5H8eader\u2019s experience moderates the relationship between transformational leadership style and hotel performance in the Jordanian hotel sector.A lH9eader\u2019s experience moderates the relationship between transactional leadership style and hotel performance in the Jordanian hotel sector.A lThe integrated model framework proposed in this research is shown in Leadership experience contributes to better organisational performance . Naqvi reports 33.1The current study was explanatory in research design, utilizing a self-administered survey questionnaire to implement a cross-sectional time horizon. The analysis unit consisted of hotels classified as three-star to five-star in Jordan. The survey was conducted in Jordan by hotel owners and senior management of hotels. The data was collected in the fourth quarter of 2019 when political volatility in a neighbouring country to Jordan reached its climax and the COVID-19 pandemic started to hit the world. According to the Jordan Hotel Association (JHA) list, 158 hotels met the sampling frame criteria; these included 84 three-star hotels, 38 four-star hotels, and 36 five-star hotels. 
The three-star and above hotels were chosen because they have a larger organisational structure relative to the other hotel classifications, as well as being more likely to have formulated strategies to deal with crisis situations.

3.2 The study used a survey questionnaire as the research instrument. Transformational leadership (TFL) was measured using a five-dimension framework with a 20-item scale adopted from Avolio & Bass.

3.3 The respondents differed in their demographic profiles, which was very evident in the results of the sample profile's descriptive analysis. The survey questionnaires were distributed in the mid, southern and northern regions of Jordan to hotels classified as 3-star, 4-star and 5-star. Of the total number of respondents, 79.8% were male and 20.2% were female. 57.1% of the respondents were between the ages of 36–45 years. All of the respondents held managerial positions in administration, with most being in the category of department manager (71.4%). Therefore, for their respective hotels, all respondents were deemed suitable for answering the questions regarding hotel performance as well as crisis management. Furthermore, the respondents were considered to have adequate knowledge of crisis management phases, as some of them (39.5%) had a working experience of not less than 14 years. With respect to the hotel profile, 54.6% of the surveyed hotels were 3-star and most of their affiliations were independent (82.4%). The majority of the respondents (71.4%) came from the mid-region because this region has most of the 3-star to 5-star hotels in Jordan.

4.1 The number of valid questionnaires was 119. Using SPSS 26.0, the descriptive statistics of each variable, including leadership attributes, crisis management, and hotel performance, were examined. To conduct a multiple regression analysis in complicated models like this study, partial least squares structural equation modelling (PLS-SEM) is thought to be particularly effective for processing skewed data, exploratory research, or theoretical extensions. In the current study, the effect size (f2), predictive relevance (Q2), and R2 were calculated to evaluate the structural model. PLS-SEM analysis was conducted using SmartPLS version 3.3.7. A two-sided test with a significance level of 0.05 was used for all tests.

4.2 The descriptive function was calculated by the covariance matrix procedure for all variables. The initial measurement item scores were parcelled so that composite scores for all the variables could be calculated. A parcel is the sum or average of several individual items or indicators, based on the factor loadings they place on the construct.

4.3 Before assessing the structural model, it is crucial to address collinearity issues in the inner structural model (predictor–criterion collinearity) to avoid misleading or biased regression results. A multicollinearity problem arises when two or more variables are not independent of each other. This can be determined through a collinearity assessment in terms of the variance inflation factor (VIF). A common rule of thumb for flagging potential collinearity issues is a VIF value of 5 or higher. Based on the collinearity assessment, the VIF values were 1.995 for TFL, 2.453 for TSL, and 1.987 for CSM, which shows that there are no potential collinearity issues in the model. Harman's single factor test was executed to check for common method bias.
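The VIF check reported above can be illustrated with a short computation. The sketch below uses simulated construct scores (not the study's data) and the conventional VIF < 5 rule of thumb; variable names mirror the constructs only for readability.

```python
# Hypothetical variance-inflation-factor check for correlated construct scores.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 119
tfl = rng.normal(3.0, 0.6, n)
tsl = 0.6 * tfl + rng.normal(0, 0.5, n)               # constructs are allowed to correlate
csm = 0.5 * tfl + 0.3 * tsl + rng.normal(0, 0.5, n)

X = sm.add_constant(pd.DataFrame({"TFL": tfl, "TSL": tsl, "CSM": csm}))
vif = {col: variance_inflation_factor(X.values, i)
       for i, col in enumerate(X.columns) if col != "const"}
print(vif)    # values well below 5 indicate no problematic collinearity
```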
4.4

The ability to verify the validity of measurements is one of the main advantages of SEM. In this context, construct validity refers to the precision of the measurements, and it was assessed in terms of convergent and discriminant validity.

4.4.1

The modified model was further tested in order to confirm the reliability of the factor structure. The results show that, in the second assessment, the standardized factor loadings for the items and first-order constructs are all higher than the cut-off point of 0.6 suggested by Hair et al. (2006), ranging from 0.652 to 0.956.

The reliability of each construct was evaluated after the uni-dimensionality of the constructs had been established. To evaluate reliability, the average variance extracted (AVE), construct reliability (CR) and Cronbach's alpha were used. The AVE reflects the total amount of variance in the indicators that is accounted for by the latent construct, while Cronbach's alpha describes the extent to which a measure is free of error. The Cronbach's alpha values in this study range from 0.710 to 0.900; all values are therefore greater than the recommended value of 0.7 proposed by Nunnally & Bernstein, indicating that the constructs are reliable.

4.4.2

Discriminant validity was evaluated using the heterotrait-monotrait ratio of correlations (HTMT) criterion. The HTMT values for all latent constructs were less than 0.90, indicating adequate discriminant validity.

4.5

4.5.1

The R2 values for the direct-effect part of the model were estimated at 0.497 for crisis management (CSM) and 0.524 for hotel performance (HTP). This means that the error variance of hotel performance is estimated at 47.6% (100% - 52.4%) of its total variance; in other words, the three predictors explain 52.4% of the variation in hotel performance. Overall, the R2 values meet the recommended requirement of Cohen, and the values of 0.356 and 0.348 reported for crisis management (CSM) and hotel performance (HTP) respectively both exceed the 0.33 level that Chin describes as moderate.

4.5.2

In testing for mediation, the SEM technique is preferred over regression techniques because SEM permits both measurement and structural relationships to be modelled, in addition to yielding overall fit indices. The mediation test results are summarised in Table 5.

4.5.3

According to Hair et al., the effect size (f2) indicates how much an exogenous construct contributes to the R2 of an endogenous construct. The R-squared value for hotel performance (HTP) is 0.587, which is higher than the recommended threshold of 0.3. The predictive relevance (Q2) for hotel performance (HTP) was 0.369; this value is far above 0 and hence indicates the model's predictive relevance.
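The mediation test and the effect-size (f2) and predictive-relevance (Q2) measures above were obtained with PLS-SEM in SmartPLS 3.3.7. As a rough illustration of the underlying logic only, the sketch below bootstraps an indirect effect and computes Cohen's f2 on composite scores using ordinary least squares; it is not the authors' procedure, and the `data` frame with columns TFL, TSL, CSM and HTP is hypothetical.

```python
# OLS-based sketch of a bootstrapped indirect effect and Cohen's f2 on
# hypothetical composite scores; an approximation, not the PLS-SEM analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bootstrap_indirect(data: pd.DataFrame, x: str, m: str = "CSM",
                       y: str = "HTP", n_boot: int = 5000, seed: int = 1):
    """Bootstrap the indirect effect x -> m -> y (a*b) with a 95% percentile CI."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(data), len(data))
        boot = data.iloc[idx]
        a = smf.ols(f"{m} ~ {x}", data=boot).fit().params[x]        # path x -> m
        b = smf.ols(f"{y} ~ {m} + {x}", data=boot).fit().params[m]  # path m -> y, controlling for x
        estimates.append(a * b)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return float(np.mean(estimates)), (float(lo), float(hi))

def f_squared(data: pd.DataFrame, predictor: str, outcome: str = "HTP",
              predictors=("TFL", "TSL", "CSM")):
    """Cohen's f2 = (R2_included - R2_excluded) / (1 - R2_included)."""
    reduced = [p for p in predictors if p != predictor]
    r2_full = smf.ols(f"{outcome} ~ {' + '.join(predictors)}", data=data).fit().rsquared
    r2_reduced = smf.ols(f"{outcome} ~ {' + '.join(reduced)}", data=data).fit().rsquared
    return (r2_full - r2_reduced) / (1 - r2_full)

# Example usage with a hypothetical `data` frame of composite scores:
# effect, ci = bootstrap_indirect(data, x="TSL")
# print(f"Indirect effect TSL -> CSM -> HTP: {effect:.3f}, 95% CI {ci}")
# print(f"f2 of CSM on HTP: {f_squared(data, 'CSM'):.3f}")
```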
5

Structural equation modelling (SEM) analysis of the direct-effect hypotheses showed that the transformational leadership style has a significant positive effect on hotel performance. This result is consistent with research by Sobaih et al., among others. Transactional leadership, on the other hand, was found to have an insignificant direct effect on hotel performance; this finding is consistent with Radwan's study.

In terms of the mediating effect, the relationship between transformational leadership and hotel performance was found to be partially mediated by crisis management. The outcome demonstrates that high levels of transformational leadership result in a stronger adoption of crisis management, which in turn improves hotel performance. The relationship between transactional leadership and hotel performance, on the other hand, was found to be fully mediated by crisis management, which shows that crisis management can account for the relationship between transactional leadership and hotel performance. This study's findings are consistent with those of studies by Zhao et al., among others.

Moreover, it was found that a leader's experience had a positive moderating effect on the relationship between transformational leadership style and hotel performance, and a negative moderating effect on the relationship between transactional leadership style and hotel performance. These results are consistent with Contingency Theory, which argues that leadership style is contingent upon external and internal factors, and with the Resource-Based View.

Although previous research, such as that by Majli and Tamimi and by Armoo, has demonstrated that crisis management has a major effect on hotel performance, its mediating role between leadership styles and hotel performance has received far less empirical attention, which this study helps to address.

Based on the descriptive statistics, the means of the variables used in this study, namely transformational leadership (mean = 3.024), transactional leadership (mean = 3.028), crisis management (mean = 3.002), and hotel performance (mean = 3.004), indicate a moderate level. The highest mean belongs to the transactional leadership variable, and the lowest to the crisis management variable. This indicates that there is still room for hotel owners/managers to improve their leadership styles, crisis management, and hotel performance. Hotel owners/managers should receive ongoing training in crisis leadership from experts in the hotel industry to ensure that they are well equipped to face the prolonged crises confronting the sector.

6

The current environment of volatility, uncertainty, complexity and ambiguity (VUCA) has continued to contribute to unpredictability in the hotel industry, and leadership in crisis management is one of the crucial assets for the sector. The conclusions drawn from the findings are as follows. Transformational leadership and crisis management have a direct, significant positive effect on hotel performance; hence, these variables should be directly harnessed to achieve enhanced hotel performance. Transactional leadership does not directly influence hotel performance, which suggests the probable role of crisis management as a mediator. Moreover, transformational leadership as well as transactional leadership have a direct, positive effect on crisis management, which means that these leadership styles, when effectively utilized, are vital in implementing crisis management strategies in the midst of a crisis. Additionally, crisis management, followed by transformational leadership, were found to be the most important factors for hotel performance, while transactional leadership was found to be the most important element for crisis management. In addition, crisis management plays the role of a complementary (partial) mediator in the relationship between transformational leadership and hotel performance, while it fully mediates the relationship between transactional leadership and hotel performance. Furthermore, on the one hand, it is suggested that a leader's experience positively moderates the relationship between transformational leadership and hotel performance.
This indicates that an increase in the level of the leader's experience, as moderator, results in a corresponding enhancement of the effect of transformational leadership on hotel performance. On the other hand, a leader's experience negatively moderates the relationship between transactional leadership and hotel performance.

The findings show that the establishment of crisis management interventions acts as a significant mediator of a hotel leader's ability to make the changes that a situation requires; using the right skills for the right conditions in the midst of a crisis thus mitigates any negative impact on hotel performance. This demonstrates that leaders in Jordan's hotel sector should make crisis management practices a continuous process in order to respond to challenges and prevent future crises. Crisis management has been identified as an effective tool that supplements traditional leadership styles in improving hotel sustainability during a crisis. The findings show that crisis management is compatible with different types of leadership style, although the transactional leader has a stronger effect on crisis management than a leader with a transformational leadership style. The study's conceptual framework contributes to the literature by modelling transformational and transactional leadership as independent variables, crisis management as a mediating variable, a leader's experience as a moderating variable, and hotel performance as the dependent variable in the context of the Jordanian and Middle Eastern hotel sectors. The findings highlight the role of both leadership types and of crisis management in enhancing hotel performance in Jordanian hotels.

7

This study makes a significant contribution to the scientific body of knowledge and contributes theoretically by incorporating transformational and transactional leadership, crisis management and hotel performance in one model. It provides a much-needed foundation for future studies in the domain of leadership and crisis management, as well as answering pertinent questions that future researchers can build upon. Furthermore, crisis management can be used to clarify the association between leadership styles and hotel performance, and this approach could be extended to the entire tourism industry. The literature also reveals that researchers have largely ignored crisis management in empirical research [90].

Leadership effectiveness is considered one of the most important issues for organisations, both during normal times and in times of crisis; this study is therefore a significant addition to the leadership literature because it empirically examines leadership in a crisis situation. The findings also prove beneficial to Contingency Theory and the Resource-Based View Theory by addressing the existing void in the literature concerning the application of Contingency Theory to effective leadership styles in a crisis situation while leveraging organisations' resources to develop a competitive advantage.

Firstly, the crisis management measurement indicators used in this study, consisting of five phases, are a valuable source of knowledge for hotel managers.
These indicators help managers to identify important areas where improvements are needed in their crisis management framework, and provide a pathway for hotel practitioners/owners to invest in the capabilities necessary to enhance their crisis management strategies. Secondly, the findings can benefit other tourism sectors in Jordan by providing a base for further studies into leadership styles and the intermediary role of crisis management in performance; this can facilitate effective and efficient crisis management operations in sectors such as the food and beverage industry and the airline industry. Thirdly, managers are recommended to adopt a leadership style which ensures that existing resources are effectively incorporated and maximized within the internal and external environment in order to meet organisational and environmental challenges. Also, in view of the results pertaining to a leader's experience, it is important for hotel owners and human resource executives to proactively recruit and develop managers with significant experience of handling crisis events. They should also appoint managers for crisis management planning in the organisation, and a significant criterion for such appointments should be the appointee's years of experience, because this study, in line with earlier studies [70,72], shows that a leader's experience shapes how leadership style translates into hotel performance.

Despite its importance, the study has several limitations. Its cross-sectional design may limit the interpretation of how crisis management practices and leadership styles evolve over time; it is therefore strongly advised that future studies employ longitudinal analysis. Furthermore, the current study relied solely on subjective responses when gathering hotel data on crisis management and hotel performance, based on an adapted questionnaire survey instrument; this could be improved by supplementing subjective input, where needed, with objective company data from secondary sources. A future study could also look into contemporary leadership styles, such as autocratic, democratic, directive, supportive, affiliative, coaching, servant and laissez-faire leadership, and how they affect crisis management practices.

Amar Hisham Jaaffar: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data; Wrote the paper.

Raed Hussam Alzoubi: Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.

Alkharabsheh Omar Hamdan Mohammad: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data.

Jegatheesan Rajadurai: Conceived and designed the experiments; Performed the experiments.

The data that has been used is confidential.

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.