diff --git "a/deduped/dedup_0692.jsonl" "b/deduped/dedup_0692.jsonl" new file mode 100644--- /dev/null +++ "b/deduped/dedup_0692.jsonl" @@ -0,0 +1,41 @@ +{"text": "Analyses were performed on brain MRI scans from individuals who were frequent cannabis users in adolescence and age- and sex-matched young adults who never used cannabis. Cerebral atrophy and white matter integrity were assessed using diffusion tensor imaging (DTI) to quantify the apparent diffusion coefficient (ADC) and the fractional anisotropy (FA). Whole brain volumes, lateral ventricular volumes, and gray matter volumes of the amygdala-hippocampal complex, superior temporal gyrus, and entire temporal lobes were also measured. While differences existed between groups, no pattern consistent with cerebral atrophy or loss of white matter integrity was detected. It is concluded that frequent cannabis use is unlikely to be neurotoxic to the normal developing adolescent brain. Cannabis abuse is considered a major public health problem worldwide. In the early 1970s a controversial report published in the Lancet concluded that cannabis caused cerebral atrophy, as evidenced by a pneumoencephalography study of a small group of male users. DTI estimates a diffusion tensor, D, for each voxel, which models the water diffusion. The diffusion tensor can be further processed to compute the fractional anisotropy (FA), a normalized scalar measure of the degree of diffusion anisotropy within a voxel. FA reflects the degree of fiber organization, fiber directional coherence, or fiber integrity. White matter abnormalities with axonal disorganization might therefore be expected to show decreased FA. 
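As a concrete illustration of the definitions above, here is a minimal sketch (not the study's processing pipeline) of how FA and the mean diffusivity underlying the ADC can be computed from the three eigenvalues of a voxel's diffusion tensor:

```python
import math

def fractional_anisotropy(eigenvalues):
    """Compute FA from the three eigenvalues of a diffusion tensor.

    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, ranging from
    0 (perfectly isotropic diffusion) to 1 (diffusion along a single axis).
    """
    l1, l2, l3 = eigenvalues
    mean = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    if den == 0.0:
        return 0.0  # degenerate tensor: no diffusion at all
    return math.sqrt(1.5) * num / den

def mean_diffusivity(eigenvalues):
    """Mean diffusivity is the average of the three eigenvalues."""
    return sum(eigenvalues) / 3.0

# An isotropic tensor gives FA = 0; a strongly elongated one approaches 1.
print(fractional_anisotropy((1.0, 1.0, 1.0)))  # 0.0
print(fractional_anisotropy((1.7, 0.2, 0.2)))  # ~0.87
```

Organized white matter tracts, where water diffuses preferentially along axons, therefore show high FA, which is why axonal disorganization is expected to lower it.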
DTI may also be used to assess regional volume deficits using ADC-based morphometry. Thus, the current report uses DTI to address whether cannabis affects normal brain structures and their white matter integrity during adolescence. Young adults (between the ages of 17 and 30) were recruited for this study either through a research normal volunteer pool at The Nathan S. Kline Institute for Psychiatric Research (NKI) or by direct advertisement. The NKI volunteer pool was established in the research outpatient department to develop a cohort of community individuals who are available for participation in multiple human research studies. They are ascertained through advertisement in the local communities surrounding the institute. Others were recruited directly by advertisements for normal participants for research studies. Individuals were not recruited based on whether they had a history of cannabis use, and advertisements did not mention cannabis. Cannabis users were consecutively admitted into the study if their cannabis use began prior to the age of 18 and consisted of use more than 21 times in any single year. This cut-off was chosen based on the cut-off already established by the structured interview format used. MRI scans were performed on a 1.5 T Siemens Vision system. Image sequences acquired included a 3D magnetization-prepared rapid gradient echo (MPRAGE) image, a T2-weighted spin-echo image, diffusion weighted images, and one image without diffusion sensitizing gradients (b = 0). The 8 DTI volumes and the one volume without diffusion gradients (b = 0) were used to estimate a second order symmetric diffusion tensor D at each voxel. The processing steps for voxelwise analyses of the FA and ADC maps were exactly the same. 
For all volumetric measurements, T1-weighted MPRAGE images were used. The T1 images were resliced into two image sets: one set to be used in automated tissue segmentation for white matter FA tissue masking (these co-registered images were resliced using the pixel dimensions of the FSE images), and a second set used for the evaluation of gray matter volumetric measures, resliced using the original pixel sizes (1.2 mm3 \u00d7 172 slices). The segmented FA masks were used for global white matter FA determinations. Volumetric measurements were obtained with the slice editing software 3D Slicer for the whole brain, temporal lobe, superior temporal gyrus, hippocampus and amygdala as a complex, and cerebral ventricles. These structures were chosen because they are most often related to psychotic experiences and memory. Since cannabis can occasionally lead to psychotic experiences and transient cognitive problems, focusing on these brain regions seemed warranted. There were two regions where the ADC was reduced in cannabis users relative to non-users, and regions where the FA was increased in cannabis users relative to non-users; these regions are shown in the figures. Although differences were observed between subjects who used cannabis during adolescence and those who did not, no finding indicated pathological change. Regions of higher ADC, putative evidence of atrophy, were not present, although regions of significantly lower ADC were. 
While low FA would be indicative of reduced white matter integrity, particularly with respect to fiber direction, all FA differences in this study were higher values in cannabis users than non-users. However, one limitation of the current study is its cross-sectional evaluation of subjects reporting on their own former adolescent cannabis use, rather than a longitudinal design following adolescents into adulthood to observe how the brain changes over time, or alternatively a cross-sectional study of current cannabis-using adolescents. Pathological effects from prior frequent use may be less detectable in adulthood after time has passed and other changes have taken place to compensate for possible earlier effects of cannabis. In addition, although we suggest here that the ADC indicates the amount of CSF in extracellular tissue and ventricular space, we have not yet validated this assumption by direct comparisons, and thus this view, while logical, remains speculative at present. Thus, these data lead to the likely conclusion that cannabis use, in at least moderate amounts, during adolescence does not appear to be neurotoxic, although we cannot exclude adverse effects of heavier use than that reported by the current subjects. These data are preliminary and need replication with larger numbers of subjects, although they do have implications for refuting the hypothesis that cannabis alone can cause a psychiatric disturbance such as schizophrenia by directly producing brain pathology."} +{"text": "Taxometric methods were used to discern the latent structure of cannabis dependence. Such methods help determine if a construct is categorical or dimensional. Taxometric analyses (MAXEIG and MAMBAC) were conducted on data from 1,474 cannabis-using respondents to the 2001\u20132002 National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). 
Respondents answered questions assessing DSM-IV criteria for cannabis dependence. Both taxometric methods provided support for a dimensional structure of cannabis dependence. Although the MAMBAC results were not entirely unequivocal, the majority of evidence favored a dimensional structure of cannabis dependence. No publication has been more influential in the study and treatment of psychopathology than the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). The DSM-IV implies that mental disorders form discrete categories, and many authors have criticized this assumption. Criteria in the DSM present mental disorders as qualitatively distinct categories within which there is a certain amount of variation by degree. For example, in the current research context, a categorical view would posit that cannabis users who meet dependence criteria are qualitatively different from users who do not become dependent. A dimensional view would posit a continuous degree of cannabis dependence, with users falling at various points along this continuum. Recent advances in statistical methods have provided a powerful tool for determining whether psychological constructs are categorical or dimensional (i.e., their latent structure). Meehl and colleagues have developed a number of these statistical methods, known as taxometrics, which have been applied to constructs such as anxiety, psychopathy, posttraumatic stress, schizotypy, worry, and sexual orientation. Although the DSM suggests that the latent structure of cannabis dependence is categorical, this has not been empirically examined. To our knowledge, moreover, no study has investigated the latent structure of any of the substance dependence disorders. Thus, the current study sought to obtain empirical evidence as to whether there are qualitative differences between cannabis users who are dependent on the substance and those who are not, or whether cannabis dependence is better conceptualized as a dimension. 
Why is the latent structure of cannabis dependence important? This distinction has important implications for policy, treatment, and assessment of cannabis dependence. Regarding policy, if cannabis dependence truly forms a latent category, prevention and treatment resources should be directed primarily at those users who already meet criteria for dependence or are most likely to develop dependence. On the other hand, if cannabis dependence is dimensional, these same resources should target all users, including users with a single symptom. Rather than waiting for users to meet DSM criteria to initiate treatment, a dimensional view would suggest that treatment could be most helpful if initiated as soon as one develops one or two dependence symptoms and before the onset of additional symptoms. Two recent cross-sectional population-based surveys of cannabis users found rates of dependence of 7% for young users and 21% for adult users. The latent structure of cannabis dependence also has implications for assessment in research and clinical settings. If the distinction between dependent and non-dependent is a true category, then two-group comparisons between these categories might prove the most powerful and appropriate; any variation within the dependent and non-dependent groups would likely be less relevant. In contrast, a dimensional model would suggest that continuous variables like symptom counts or severity of problems might be a more appropriate approach. Second, our indicators were items assessing DSM-IV criteria. By including such items as our indicators, we can be fairly confident that our conceptualization of cannabis dependence is highly similar, if not identical, to the conceptualization of cannabis dependence in the DSM-IV. Third, we included simulated comparison data sets in our analyses. Simulated data provide improved accuracy by taking into account the skew of the actual data. 
In the absence of simulated data, skewed data may lead one to erroneously infer the existence of a low frequency 'condition' believed to be a disease category. In the current study, we present evidence for the dimensional latent structure of cannabis dependence. The current research offers a number of strengths. First, our sample was obtained from the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). This fairly large sample is representative of the United States population. Cannabis users in our sample ranged from everyday users to those who had used only once in the last twelve months. By including a wide variety of cannabis users we enhanced our ability to detect a latent categorical structure, if such a structure truly exists. We analyzed data from the 1,474 participants with complete data (31 missing cases). Participants who endorsed three or more of the seven symptoms of cannabis dependence (per DSM-IV criteria) were included in the taxon group. Those who endorsed zero to two symptoms were considered members of the complement group. Using this criterion resulted in a taxon group of 291 individuals and a complement group of 1,183. Thus, 20% of the sample was in the putative taxon group, twice as large as the 10% deemed sufficient in Monte Carlo studies to detect a categorical distinction, should one exist. Because taxometric methods do not rely on standard hypothesis testing, we used three criteria to judge the latent structure of cannabis dependence. First, we visually inspected the individual and averaged taxometric curves and compared them with the averaged curves for the simulated dimensional and taxonic data sets. Second, we evaluated the base rate estimates for each of the procedures. Taxonic results generally produce a narrow range of base rate estimates, whereas dimensional results generally produce a wide range of base rate estimates. 
Third, we examined a goodness-of-fit index to determine whether the research data more closely resemble the simulated taxonic or dimensional data. This statistic is the root mean square residual (FitRMSR); smaller values indicate better fit. None of the 21 MAXEIG curves revealed a definitive inverted U shape, suggesting that the latent structure of cannabis dependence may be dimensional. The average curve failed to reveal an inverted U shape and actually revealed a somewhat concave shape. The average MAXEIG curve and the average curve for the dimensional and taxonic simulated data sets are presented in Figure. Base rate estimates varied widely, also suggesting a dimensional structure. Furthermore, the FitRMSR value for the 10 simulated taxonic data sets was 3.13 times as large as that for the 10 simulated dimensional data sets (.058 and .019, respectively). Taken together, these results offer support for a dimensional structure of cannabis dependence. Only one of the MAMBAC curves had a taxonic appearance; the remaining curves were concave in appearance. The average curve also had a concave appearance, consistent with a dimensional latent structure. The average MAMBAC curve and the average curve for the 10 dimensional and 10 taxonic simulated data sets are presented in Figure. Base rate estimates varied widely (M = .12, SD = .19), suggesting a dimensional structure. Also in support of a dimensional structure, the FitRMSR value for the taxonic data sets was 1.53 times as large as the value for the dimensional data sets (.12 and .08, respectively). In conclusion, these data provide further support for a dimensional latent structure of cannabis dependence. Contrary to the prevailing trend to consider those who meet DSM criteria for cannabis dependence as qualitatively distinct from other cannabis users, the current research provided support for the conceptualization of cannabis dependence as a dimension. Two nonredundant taxometric procedures (MAXEIG and MAMBAC) revealed an identical pattern of results in support of the dimensional conceptualization. Base rate estimates also varied widely, suggesting a dimensional structure. 
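To make the MAMBAC logic concrete, here is a minimal sketch (illustrative only, not the authors' implementation) of the Mean Above Minus Below A Cut procedure for a single input/output indicator pair. A peaked (inverted U) curve is the taxonic signature; a concave, dish-shaped curve suggests a dimension:

```python
def mambac_curve(input_scores, output_scores, n_cuts=50):
    """Mean Above Minus Below A Cut (MAMBAC) for one indicator pair.

    Cases are sorted along the input indicator; for each candidate cut,
    the mean of the output indicator above the cut minus the mean below
    the cut is recorded. Taxonic data tend to produce a peaked curve,
    whereas dimensional data tend to produce a concave curve.
    """
    pairs = sorted(zip(input_scores, output_scores))
    outputs = [o for _, o in pairs]
    n = len(outputs)
    curve = []
    for k in range(1, n_cuts + 1):
        cut = round(k * n / (n_cuts + 1))
        if cut <= 0 or cut >= n:
            continue  # need cases on both sides of the cut
        below = outputs[:cut]
        above = outputs[cut:]
        curve.append(sum(above) / len(above) - sum(below) / len(below))
    return curve
```

In practice the procedure is repeated over all indicator pairings, and the resulting curves are compared with curves generated from simulated taxonic and dimensional data sets, as in the study.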
Moreover, the representativeness of our sample lends confidence to the generalizability of these results. This was the first study, to our knowledge, to use taxometric methods to assess the latent structure of a substance dependence disorder. A dimensional structure has implications for treatment and prevention of cannabis dependence. Dimensional rather than categorical progress in treatment has considerable intuitive appeal: gradual improvements and incremental decreases in problems might prove more likely than categorical shifts between problem use and complete abstinence. The dimensional model might also alter our approaches to prevention of cannabis dependence. Given the quantitative rather than qualitative differences between problematic and harmless use, strategies that target regular cannabis users who are uninterested in abstinence might have the potential to create considerable benefit. Finally, given this dimensional model, continuous measures of symptom severity might have more statistical power than dichotomous measures that simply assess group membership. Such measures may prove useful in research and treatment settings. Future studies could address other common substance dependence disorders such as alcohol, opiate, cocaine, and methamphetamine dependence. Improved understanding of the latent structure of substance dependence has important implications for the treatment of substance users. If treatment is initiated at the early stages of symptom appearance, substance abusers might be prevented from developing additional and potentially more incapacitating symptoms. For example, public service announcements could persuade cannabis users to seek treatment as soon as they find themselves using more than desired or developing a tolerance. 
It is likely that the prognosis would be favorable under these circumstances. Participants were 1,505 individuals (M age = 31.22 years, SD = 11.03) who completed the 2001\u20132002 NESARC survey and reported using cannabis both during the past 12 months and earlier. In order to provide for an adequate proportion of dependent individuals, participants who had only used cannabis in the past 12 months (and not earlier) were not included. The NESARC is sponsored by the National Institute on Alcohol Abuse and Alcoholism (NIAAA) and contains a representative sample of the United States population. We thus had an adequate sample size (> 300 cases) to conduct taxometric analyses. Respondents answered questions assessing the DSM-IV criteria for cannabis dependence. Since replication across more than one taxometric procedure lends confidence to our conclusions, our analytic strategy consisted of two taxometric methods developed by Meehl and colleagues: MAXEIG and MAMBAC. MAXEIG is a multivariate extension of the maximum covariance method (MAXCOV). We also used the Mean Above Minus Below a Cut (MAMBAC) method as our second nonredundant taxometric procedure. Abbreviations: DSM: Diagnostic and Statistical Manual of Mental Disorders; MAMBAC: Mean Above Minus Below a Cut; MAXEIG: Maximum Eigenvalue; NESARC: National Epidemiologic Survey on Alcohol and Related Conditions. The author(s) declare that they have no competing interests. TFD contributed to the study conceptualization and writing, and performed the data analysis. ME contributed to the study conceptualization and writing. Both authors read and approved the final manuscript."} +{"text": "Melasma is a symmetric progressive hyperpigmentation of the facial skin that occurs in all races but has a predilection for darker skin phenotypes. 
Depigmenting agents, laser, and chemical peeling agents such as classic Jessner's solution, modified Jessner's solution, and trichloroacetic acid have been used alone and in combination in the treatment of melasma. The aim of the study was to compare the therapeutic effect of combined 15% trichloroacetic acid (TCA) and modified Jessner's solution with 15% TCA alone on melasma. Twenty married females with melasma, with a mean age of 38.25 years, were included in this study. All were of skin type III or IV. Fifteen percent TCA was applied to the whole face, with the exception of the left malar area, to which combined 15% TCA and modified Jessner's solution was applied. Our results revealed a statistically highly significant difference in MASI score (Melasma Area and Severity Index) between the right malar area and the left malar area. Modified Jessner's solution proved to be useful as an adjuvant treatment with TCA in the treatment of melasma, improving the results and minimizing postinflammatory hyperpigmentation. Melasma is a symmetric progressive hyperpigmentation of the facial skin that occurs in all races but has a predilection for darker skin phenotypes. Melasma has been associated with hormonal imbalance, sun damage, and genetic predisposition. Clinically, melasma can be divided into centrofacial, malar, and mandibular types, according to the pigment distribution on the skin. By Wood's light examination, melasma can be classified into epidermal, dermal, or mixed type. Many depigmenting agents and other therapies such as chemical peeling are used for treating melasma, in the form of monotherapy or combined therapy. The Jessner's-trichloroacetic acid peel is a procedure developed by Dr. 
Gary Monheit (USA) to produce a safe, effective medium-depth chemical peel for the treatment of photoaged skin, actinic keratoses, and superficial acne scars. The aim of this work was to compare the efficacy of 15% TCA peeling against combined modified Jessner's solution and 15% TCA peeling in the treatment of melasma. Twenty married females with melasma, with a mean age of 38.25 years, were included in this study. All were of skin type III or IV. The duration of the melasma ranged from six months to 15 years, with a mean of 7.6 years. All the participants were examined under Wood's light to determine the type of melasma. Only patients with the epidermal type were included in this study. The Melasma Area and Severity Index (MASI) score of the right and left cheeks was calculated for each patient at baseline, at the beginning of each peeling session, and at the end of follow up, along with photography. The patients did not know which peeling agents were used (as written in the consent). Although the peeling was done by one doctor and the MASI score was assessed by another doctor, it was impossible to blind the two peeling agents because of the very characteristic odor and the absence of frost with the modified Jessner's solution. 
The MASI score was calculated by the following formula: the sum of the severity ratings for darkness and homogeneity, multiplied by the numerical value of the area involved and by the percentage of the malar area. A numerical value is assigned for the corresponding percentage area (A) involved as follows: (0) no involvement; (1) <10% involvement; (2) 10-29% involvement; (3) 30-49% involvement; (4) 50-69% involvement; (5) 70-89% involvement; and (6) 90-100% involvement. The darkness of the melasma (D) is compared to the normal skin and graded on a scale of 0 to 4 as follows: (0) normal skin color without evidence of hyperpigmentation; (1) barely visible hyperpigmentation; (2) mild hyperpigmentation; (3) moderate hyperpigmentation; and (4) severe hyperpigmentation. The homogeneity of the hyperpigmentation (H) was also graded on a scale of 0 to 4 as follows: (0) normal skin color without evidence of hyperpigmentation; (1) specks of involvement; (2) small patchy areas of involvement <1.5 cm diameter; (3) patches of involvement >2 cm diameter; and (4) uniform skin involvement without any clear areas. Inclusion criteria: adults >18 years old; clinical diagnosis of melasma; mental capacity to give informed consent. Exclusion criteria: pregnant females and females on oral contraceptive pills; participants with a history of hypertrophic scars or keloids; participants with dermal or mixed melasma; participants with recurrent herpes infection; presence of cutaneous infection. The participants were primed two weeks before starting the peel with adapalene 0.1% gel, once daily. The participants were strictly instructed to apply 10% zinc oxide as a sun block during and after therapy. Ten percent zinc oxide protects against UVA, UVB, and visible light. Fifteen percent TCA (Delasco) was prepared by adding 85 ml of distilled water to 15 g of TCA (weight to volume preparation). 
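The MASI grading scheme described above can be sketched in code. Note that the 0.3/0.3/0.3/0.1 regional weights for forehead, right malar, left malar, and chin are the standard MASI weights, assumed here since the text quantifies only the malar areas:

```python
# Standard MASI regional weights (assumed; the study scores the malar areas)
REGION_WEIGHTS = {"forehead": 0.3, "right_malar": 0.3, "left_malar": 0.3, "chin": 0.1}

def masi(scores):
    """Compute the MASI score.

    scores maps region -> (A, D, H): area involvement A on 0-6,
    darkness D on 0-4, homogeneity H on 0-4, per the scales above.
    MASI = sum over regions of (D + H) * A * regional weight.
    """
    total = 0.0
    for region, (a, d, h) in scores.items():
        if not (0 <= a <= 6 and 0 <= d <= 4 and 0 <= h <= 4):
            raise ValueError(f"score out of range for {region}")
        total += (d + h) * a * REGION_WEIGHTS[region]
    return total

# Maximal involvement in every region yields the maximum score of 48.
print(round(masi({"forehead": (6, 4, 4), "right_malar": (6, 4, 4),
                  "left_malar": (6, 4, 4), "chin": (6, 4, 4)}), 2))  # 48.0
```

A single malar area with 10-29% involvement (A = 2), moderate darkness (D = 3), and patchy homogeneity (H = 3) would contribute (3 + 3) * 2 * 0.3 = 3.6 points, comparable to the baseline per-cheek scores reported below.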
Modified Jessner's solution (Delasco) was formulated with 8% citric acid (weight to volume), 17% lactic acid (weight to volume), 17% salicylic acid (weight to volume), and anhydrous ethanol. After cleaning and degreasing with alcohol, modified Jessner's solution was applied to the left malar area only, until the appearance of erythema, before the 15% TCA application. Fifteen percent TCA was then applied in one uniform coat to the whole face, for the sake of the participants, until frosting. The participant was then allowed to wash her face with alkaline soap. The participants then applied a mild corticosteroid cream for two days only and sun block continuously. Peeling sessions were performed every 10 days. Peeling was stopped when clearance of the melasma occurred, or after a maximum of eight sessions if the melasma did not completely disappear. All the participants were evaluated for recurrence of melasma, guided by the MASI score. Data analysis and calculations to produce a graphic presentation of important results were done using the statistical software package SPSS 12. Data were tabulated and statistically analyzed to evaluate the difference between the groups under study with regard to the various parameters, and correlations between the essential studied parameters were examined. The statistical analysis included the arithmetic mean, standard deviation, standard error, and Student's \u201ct\u201d test. The probability (P) for the \u201ct\u201d value with degrees of freedom was supplied directly by the computer. Results were considered non-significant (NS) if P > 0.05, significant if P < 0.05, and highly significant (HS) if P < 0.01. The average (mean) MASI score in the right malar area before treatment was 4.460 \u00b1 1.571, whereas after treatment (at the end of eight peeling sessions), the average MASI score changed to 2.040 \u00b1 1.326. So, the average decrease in MASI score was 2.420. 
This represents a 54.3% decrease and was statistically highly significant (P = 0.000). At the end of the follow up period (eight weeks), the average MASI score rose to 2.270 \u00b1 1.400. The final average decrease in MASI score was 2.190. This represents a 49.1% decrease and was statistically highly significant (P = 0.000). The average (mean) MASI score in the left malar area before treatment was 4.350 \u00b1 1.468, whereas after treatment (at the end of eight peeling sessions), the average MASI score changed to 1.230 \u00b1 0.808. So, the average decrease in MASI score was 3.120, which was statistically highly significant (P = 0.000). At the end of the follow up period (eight weeks), the average MASI score rose to 1.670 \u00b1 1.175; so the final average decrease in MASI score was 2.680. This represents a 61.6% decrease and was statistically highly significant (P = 0.000). Comparison between the mean MASI scores after peeling in both cheeks showed a highly significant difference (P = 0.000), and the comparison at the end of follow up was also highly significant (P = 0.002). Discomfort: Chi square = 14.40, P < 0.01, highly significant. Melasma is an acquired hyperpigmentation caused by an increase in the amount of melanin within melanocytes. Most often, it affects the forehead, malar eminences, upper lip, and chin. The hyperpigmented patches are usually symmetrical and have a sharp irregular border. Histologically, three forms exist; the epidermal type is the most responsive to treatment. The therapy for melasma has always been challenging and discouraging. The current treatments include hypopigmenting agents, chemical peels, and laser. Chemical peeling has a low rate of complications and is popular due to the low costs involved and a technique which is easy to learn. The gold standard for chemical peeling agents is TCA. It has been well studied and is versatile in its ability to create superficial, medium-depth, and deep peels. It is stable, inexpensive, and causes no systemic toxicity. 
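The percentage decreases reported in the results above follow directly from the before/after mean MASI scores. A quick check using the values quoted in the text:

```python
def percent_decrease(before, after):
    """Percentage decrease of a mean MASI score relative to baseline."""
    return 100.0 * (before - after) / before

# Right malar area: 4.460 at baseline -> 2.270 at end of follow up
print(round(percent_decrease(4.460, 2.270), 1))  # 49.1
# Left malar area: 4.350 at baseline -> 1.670 at end of follow up
print(round(percent_decrease(4.350, 1.670), 1))  # 61.6
```

Both values reproduce the 49.1% and 61.6% final decreases reported for the right and left malar areas, respectively.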
It is easy to perform, as the peel depth correlates with the intensity of skin frost, and there is no need to neutralize a TCA peel. In the treatment of melasma, a few studies were done using TCA as a chemical peeling agent. Some studies used TCA alone (up to 30%) in the classic full face peeling technique, or focally. When dermatologists initially began using TCA, the usual strength was 20% to 25%. Trichloroacetic acid (TCA) turned out to be not as predictable as phenol. With phenol peels, there is uniform penetration to a certain level, then the action stops. This is not the case with TCA: there are \u201chot spots\u201d where the TCA will penetrate deeper for no apparent reason. These hot spots are less troublesome as the concentration is decreased. The results with TCA are coat dependent, whereas the results with alpha-hydroxy acids are time dependent. The more coats that are used, the deeper the peel; therefore, multiple coats of a 15% TCA can mimic the results of one or two coats of 35% TCA. Gary Monheit has popularized the combination peel using the classic Jessner's solution combined with TCA, to achieve a more uniform penetration and an excellent peel with a low, safe concentration of TCA. First, the skin is prepared with an acetone/alcohol solvent and a cleanser (Septisol), before the application of Jessner's solution. This is followed with 35% TCA. In our study, based on the above data, we tried to avoid hyperpigmentation, which is a commonly observed side effect with 35% TCA, by using a lower concentration of TCA (15% only). We also used modified Jessner's solution without resorcinol, instead of the classic Jessner's solution, to avoid possible allergic reactions and hyperpigmentation problems, which may be created by resorcinol, especially in skin types V and VI. Adapalene gel and TCA (15%) were applied on both sides of the face. 
Therefore, the difference in the MASI score can only be explained by the addition of modified Jessner's solution in the left malar area. With modified Jessner's solution as a pretreatment keratolytic, the epidermal barrier is altered prior to TCA peeling, helping a more rapid and uniform uptake. During the peeling sessions, we noticed that frosting developed earlier on the left side. Also, peeling started earlier, by several hours, on the left side than on the right side (peeling started on the third day). Again, these differences can only be explained by the addition of modified Jessner's solution in the left malar area. As regards the side effects, they were the same on both sides, in the form of erythema, swelling, acne, and folliculitis. Discomfort due to a burning sensation was more obvious with modified Jessner's solution, but subsided as frosting was completed, while postinflammatory hyperpigmentation (PIH) developed only with 15% TCA alone, in two (10%) patients. On reviewing the literature, we found one study which tried to evaluate the efficacy of chemical peeling performed with Jessner's solution and 35% TCA in melasma. Twenty four participants were included in that study. After a follow-up period of six months, the pigmentation degree and the extent of the melasma lesions were found to be reduced in 83.2% and 70.8% of the participants, respectively. The overall success rate was determined to be about 100% in epidermal and mixed types of melasma, whereas it only reached 42.8% in the dermal type. By using modified Jessner's solution combined with TCA (15%), we decrease the risk of PIH that occurs commonly in dark races following TCA peels. From the historical point of view, in 2000, the Cook total body peel\u2122 was introduced as the first combined peel. It was developed for reshaping the contour of the chin and neck and tightening of the skin. It was designed to be safe for nonfacial skin. 
It is made of a combination of 40% TCA and 70% glycolic acid gel. In our study, modified Jessner's solution proved to be useful as an adjuvant treatment with TCA in the treatment of melasma, improving the results and minimizing PIH. We recommend more studies on combined peels, using modified Jessner's solution with different concentrations of TCA (15% or others), using larger samples, and in other pigmentary disorders."} +{"text": "The maintenance of intact genetic information, as well as the deployment of transcription for specific sets of genes, critically relies on a family of proteins interacting with DNA and recognizing specific sequences or features. The mechanisms by which these proteins search for target DNA are the subject of intense investigations employing a variety of methods in biology. A large interest in these processes stems from the faster-than-diffusion association rates, explained in current models by a combination of 3D and 1D diffusion. Here, we present a review of the single-molecule approaches at the forefront of the study of protein-DNA interaction dynamics and target search. Flow stretch, optical and magnetic manipulation, single fluorophore detection and localization, as well as combinations of different methods, are described, and the results obtained with these techniques are discussed in the framework of the current facilitated diffusion model. At the most elemental level, all DNA biological functions are carried out by individual proteins that must interact with DNA to trigger molecular processes indispensable to the cell. Some examples include DNA replication, gene expression and its regulation, DNA repair, genome rearrangement by DNA recombination and transposition, as well as DNA restriction and modification by endonucleases and methyltransferases, respectively. A myriad of reasons make the study of protein-DNA interactions underlying these processes very captivating from both the biological and biophysical points of view. 
First, DNA is confined in a cellular compartment (the nucleus in eukaryotic cells or the nucleoid region in eubacteria), and the accessibility of DNA sequences to proteins is further restricted by the supercoiled structure of native DNA in eubacteria or by nucleosomes in eukaryotic chromatin. Moreover, to find specific binding target sequences and perform their activities, proteins must deal with the crowded environment of the cell and with the presence of roadblocks along the DNA chain. In fact, the mechanism by which proteins are able to find relatively small cognate sequences among a vast excess of non-specific DNA has been debated since the 1970s, when association rates measured for LacI were found to exceed the upper limit expected for a purely 3D diffusion-limited reaction. In a general description of all the potential processes occurring during facilitated diffusion, the protein combines several modes of translocation to reach its target. Some of these processes, for example intersegmental transfer, require specific structural features such as the ability of binding simultaneously to multiple sites, but all of the most basic properties of the search process can be described in terms of association and dissociation rate constants (as distinguished between those relative to specific and non-specific DNA) and diffusion constants (both 3D and 1D). Once a protein undergoing free 3D diffusion, characterized by its D3D diffusion constant, collides with DNA and interacts with it, it will undergo two competing processes: sliding along the DNA, characterized by a D1D diffusion constant, or dissociation, characterized by an off rate. The equilibrium between these two competing processes determines the average distance scanned by the protein along the DNA before dissociating and resuming free 3D diffusion. The measurement of the fundamental kinetic properties of the protein is the basis for understanding its target search mechanism. 
It can be easily understood that some of these parameters crucially depend on the DNA sequence, which determines the interaction energy landscape the protein explores in its interactions and during sliding along the DNA, and on DNA conformation and occupancy by other proteins. Thus, a complete picture describing these processes in the cell requires measurements in which each of these variables is independently controllable. The complex interplay of all these processes determines the rate at which a protein can scan through the excess non-specific DNA and find the needle in the haystack, i.e., its target. The enhanced efficiency of target location by this ensemble of mechanisms becomes more intuitive if one considers for example the native structure of bacterial DNA. Considering the DNA persistence length (~150 bp), any DNA molecule in the cell behaves as a random coil, and native DNA is typically supercoiled or compacted into chromatin, which can facilitate the juxtaposition of distal sites along the DNA chain. As mentioned above, studies of these processes, including theoretical modeling of the underlying mechanisms, started as early as the 1970s and were developed mostly through conventional bulk biochemical measurements. In the last decade, the biophysical characterization of protein-DNA interactions and target search mechanisms has gained further momentum with the use of single-molecule methods. Among several advantages with respect to traditional bulk experiments, in which the behavior of individual proteins is obscured by ensemble averaging, single-molecule techniques permit probing of the dynamics of single biomolecules in real time. 
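The competition between 1D sliding and 3D excursions described above can be illustrated with a toy lattice simulation. This is a generic sketch, not a model of any of the reviewed assays: the genome is a ring of sites, a 3D excursion is modeled as a random landing with an assumed time cost much larger than a single sliding step, and the search ends at first passage through the target. All parameter values are illustrative assumptions.

```python
import random

def first_passage_time(genome=2000, slide_steps=50, t3d=100.0, t1d=1.0, rng=None):
    """Time to locate the target (site 0) on a ring of `genome` sites.

    Each round: one 3D excursion (cost t3d) lands the protein at a random
    site, then it slides as a 1D random walk for `slide_steps` steps
    (cost t1d each). slide_steps=0 recovers a pure 3D search.
    Time units and costs are illustrative, not fitted to any experiment."""
    rng = rng or random.Random(0)
    t = 0.0
    while True:
        t += t3d
        pos = rng.randrange(genome)          # random landing site
        if pos == 0:
            return t
        for _ in range(slide_steps):         # 1D sliding phase
            t += t1d
            pos = (pos + rng.choice((-1, 1))) % genome
            if pos == 0:
                return t

def mean_search_time(slide_steps, trials=100, seed=42):
    """Average first-passage time over independent searches."""
    rng = random.Random(seed)
    return sum(first_passage_time(slide_steps=slide_steps, rng=rng)
               for _ in range(trials)) / trials
```

With these assumed costs, alternating sliding with 3D relocation finds the target several-fold faster than pure 3D search; the optimal sliding duration depends on the ratio of the 3D excursion cost to the sliding-step cost, echoing the trade-off between scanning and dissociation discussed in the text.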
This review describes the main contributions of single-molecule optical techniques to our current understanding of protein-DNA interactions, including a discussion of the state-of-the-art methodologies developed for both in vitro and in vivo experiments. The tethered particle motion (TPM) assay, introduced by Jeff Gelles and coworkers at the beginning of the 90s, is widely used for probing protein-induced changes in DNA conformation. The TPM assay consists of tethering a microsphere to the microscope coverslip through a single DNA molecule. The application of flow to the DNA-bead complex creates a different geometry, resulting in an improved spatial/temporal resolution due to the suppression of the bead Brownian motion. Technically, TPM is a very simple method to implement, requiring only a microscope equipped with a camera for video recording. The data analysis methods are also simple and can be based on centroid tracking of the bead. Compared to the methods illustrated below, TPM has the limitation of being applicable only to proteins that cause detectable length changes in the DNA molecule. In the case of sliding, the protein must be immobilized on the surface in order for its mechanical activity to be detected through bead mobility, but surface immobilization is often detrimental to enzyme activity. In this regard, other more complex techniques illustrated below offer a better option. Another disadvantage of TPM is the low time resolution, which is limited by the time required by the probe to explore the hemispherical region allowed by the DNA tether. The TPM experimental design allows a straightforward combination with optical and magnetic tweezers. The latter techniques enable the application of controlled forces or torques on the biological molecules under study. For example, optical tweezers assays on E. coli RNA polymerase (RNAp) allowed Wang and co-workers to measure transcription velocity as a function of force at the single-molecule level. 
Another recent study reports the combination of single-molecule fluorescence and 2OTs to directly visualize the sliding of the restriction enzyme EcoRV labeled with a single quantum dot (QD) along a DNA molecule held by 2OTs. The measured diffusion coefficient was on the order of 10\u22123 \u03bcm2 s\u22121, considerably reduced compared to dye-labeled EcoRV. This effect may be due to the larger hydrodynamic radius of QDs with respect to the enzyme and to conventional organic fluorophores. Importantly, QD labeling did not affect the biochemical activity of the enzyme, as confirmed by direct observation of DNA cleavage on an elongated DNA strand tethered to the surface. Moreover, the authors had strong indications that a slight overstretching of the DNA, at a 5% increase over its contour length, led to a significant decrease of the measured diffusion coefficient D1D, suggesting a corresponding change in the energetic landscape of sliding. One of the biggest challenges in combining OTs and fluorescence microscopy is the dramatic reduction in fluorescence longevity due to the coincident irradiation with OT and excitation beams. Localization Accuracy of DNA-Bound Fluorescent Proteins. The ability to resolve the position of an individual fluorescence emitter with high accuracy is well established. Imaging a fluorophore reveals a point source with a finite Airy disk point spread function (PSF). The center of mass of such a distribution represents the position of the fluorophore and can be determined with much higher precision than its width by performing a fit of an appropriate function, usually a two-dimensional Gaussian. The uncertainty depends on the number of collected photons (N), the size of the detector pixel (a), the width of the PSF (s), and the background noise (b). Thompson et al. derived a relationship to determine the uncertainty (\u03c3\u03bc) in the localization of the fluorophore: \u03c3\u03bc2 = s2/N + a2/(12N) + 8\u03c0s4b2/(a2N2). The first term in the equation represents the optical resolution of the microscope, the second term reflects the increase in the error due to the finite pixel size of the detector, whereas the last term takes into account the effect of background noise. One of the first successful implementations of single-molecule imaging with high localization accuracy was obtained in Selvin\u2019s lab to directly visualize the stepping mechanism of myosin V: the authors tracked the movement of myosin V labeled with an organic dye along surface-immobilized actin filaments with ~1.5 nm position accuracy and 500 ms integration time. In the dual trapping and fluorescence hybrid single-molecule assays described above, the localization precision of single, fluorescently labeled, DNA-bound proteins is further affected by thermal fluctuations. DNA is a semiflexible polymer with a persistence length of ~150 bp at physiological conditions. The long molecules employed in the dumbbell experiments, therefore, exhibit thermal fluctuations that depend on the tension applied to the molecule and are not usually negligible on the scale of nanometric protein localization. Candelli et al. have recently addressed the capability of resolving the position of fluorescently labeled proteins as a function of DNA mechanical fluctuations at different end-to-end distances. The width of the transverse fluctuations (sx) decreased from sx = 405 nm at tension << 1 pN to sx ~ 150 nm at tensions above a few pN, thus yielding localization accuracies strongly dependent on applied tension (\u03c3\u03bcx ~ 200 nm for F < 0.1 pN and \u03c3\u03bcx \u2264 10 nm at tensions above 1 pN). These measurements, performed with 1 s integration time, are in agreement with theoretical predictions of DNA thermal fluctuations using the equipartition theorem, showing localization accuracies limited only by the diffraction limit at tensions greater than 1 pN. 
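The localization-precision relationship discussed above is easy to evaluate numerically. A minimal sketch; the parameter values in the usage example are illustrative assumptions, not data from the reviewed experiments:

```python
import math

def localization_uncertainty(N, s, a, b):
    """One-dimensional localization error (sigma_mu) of a Gaussian-fitted
    fluorophore image, following the Thompson et al. relationship:
    sigma_mu^2 = s^2/N + a^2/(12 N) + 8*pi*s^4*b^2 / (a^2 * N^2)
    N: detected photons, s: PSF width, a: pixel size, b: background noise
    (s, a in the same length units, e.g. nm)."""
    variance = (s**2 / N                                    # photon shot-noise term
                + a**2 / (12 * N)                           # pixelation term
                + 8 * math.pi * s**4 * b**2 / (a**2 * N**2))  # background term
    return math.sqrt(variance)
```

For example, with an assumed PSF width s = 125 nm, pixel size a = 80 nm, background b = 2 and N = 10,000 photons, the error comes out on the order of a nanometre, the regime of the ~1.5 nm myosin V tracking mentioned above; with negligible background it degrades roughly as 1/sqrt(N).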
The localization of single molecules by fluorescence imaging involves an intrinsic compromise between localization accuracy and temporal resolution. As discussed above, high localization accuracy requires a high number of collected photons, and thus long integration times. We recently developed a purely mechanical approach called ultrafast force-clamp spectroscopy to probe interactions between proteins and DNA. Most of the in vitro single-molecule strategies reported here and in the literature characterize protein-DNA interactions and target search mechanisms under conditions that are often quite far from physiological. This is understandably due to the required decrease in the level of complexity for in vitro assays and to the limitations of current single-molecule techniques. For example, elongated DNA molecules allow for easier localization and visualization of fluorescently labeled proteins but suppress the coiled DNA conformations naturally occurring in vivo. Often low-salt buffers are used to promote DNA binding at the nM protein concentration regime required for single-molecule detection, which can also lead to longer trajectories and residence times. Additionally, diffusion coefficients are expected to depend on salt concentration when a hopping component is involved in the translocation mechanism, because high salt concentrations promote dissociation events from the DNA. Likewise, the crowding exerted by E. coli proteins involved in architectural organization and transcriptional regulation is not reproduced in most in vitro assays. Therefore, despite the outstanding technical improvements obtained in the last decades and the enormous contributions of the assays described above, it is still being debated to what extent the measured parameters, including sliding lengths, interaction kinetics, and the effects of DNA occupancy, reflect the equivalent properties in vivo. The main challenge of probing protein-DNA interactions in vivo lies in overcoming the strong cellular autofluorescence background and the dispersion of the fluorescence signal throughout the whole cell arising from freely diffusing fluorescent molecules. Thus, common illumination strategies aim to reduce the detection volume to decrease noise from out-of-focus fluorescence and/or to perform 3D sectioning with a reasonable temporal resolution to image the whole cell body. The patterns of protein mobility in vivo at the level of the single cell have been widely assessed through Fluorescence Recovery after Photobleaching (FRAP) and Fluorescence Correlation Spectroscopy (FCS). In FRAP, a finite volume of the cell is photobleached and the redistribution of fluorescent molecules into the bleached region is monitored over time, reporting on the diffusion and binding of the labeled proteins. FRAP has been used to quantify the interactions of an array of nearly 20 nuclear proteins with native chromatin in intact cells, including structural proteins, remodeling factors, transcriptional coactivators, and transcription factors. The observed dynamic exchange of chromatin-associated proteins, combined with the large population of bound molecules, suggested a continuous scan of the genome for appropriate binding sites by 1D sliding and 3D diffusional hopping between chromatin fibers. Such approaches have also been applied in vivo to measure binding kinetics and residence times; for the transcription factor Mbp1 in yeast, values of \u03c43D = 1.1 \u00b1 0.2 s and \u03c41D = 0.8 \u00b1 0.1 s and an effective diffusion coefficient of 0.6 \u03bcm2 s\u22121 were reported. 
The estimated mean target search time was 52 s, while the search time of a single Mbp1 for a single target in the yeast nucleus was about 5 h, in good agreement with their endogenous conditions of approximately 350 copies of labeled Mbp1 proteins in the nucleus. FCS has also been applied to quantify the mobility and local concentration of proteins in living cells, by measuring the fluctuations in fluorescence intensity in a diffraction-limited spot. Sunney Xie\u2019s group made a breakthrough step towards the in vivo probing of facilitated diffusion mechanisms and gene expression at the single-molecule level, demonstrating with the lac operon an example of stochastic gene expression and correlating stochastic fluctuations with the binding behavior of LacI to DNA. The authors engineered an E. coli strain expressing LacI dimers fused at the C-terminal with a fast-maturing (~7 min) yellow fluorescent protein (YFP), Venus. When bound to the lac operator, LacI-Venus could be imaged as a diffraction-limited spot because the localized fluorescence could be detected above the autofluorescence background. Upon addition of the inducer IPTG, LacI dissociates and thus the localized fluorescent foci disappear. These observations allowed Elf et al. to measure the kinetics of binding and dissociation of LacI to the operator in response to environmental signals. The search time of a single LacI to reach a vacant operator was about 270 s, with an estimated mean duration of non-specific interactions of about 5 ms. Moreover, the 1D diffusion coefficient of LacI in vivo was measured as 0.4 \u03bcm2 s\u22121, considerably higher than the D1D ~0.05 \u03bcm2 s\u22121 obtained in their in vitro assay performed on flow-extended DNA. The authors interpreted the higher diffusion coefficient obtained in vivo as a consequence of the facilitated diffusion of the protein to locate the operator sequences. The authors implemented FCS to determine the 3D diffusion of a LacI mutant lacking the DNA-binding domain. The evaluated 3D diffusion coefficient was D3D ~3 \u03bcm2 s\u22121, which led to the estimation that LacI spends 87% of the time sliding along non-specific DNA and only 13% freely diffusing in the cytoplasm. Elf and colleagues thus obtained the first direct experimental observation of LacI facilitated diffusion to locate the operator sequences in living bacterial cells with single-molecule sensitivity. Hammar et al. engineered a strong autorepressor system to limit the number of LacI per cell and to maintain a low and even expression of LacI-Venus at 3 to 5 dimers per cell. The association rate of LacI to the individual operator site was thus determined after removing the inducer IPTG. The authors found an average search time of about 56 s, corresponding to an approximate time of 3\u20135 min required for a single repressor dimer to bind the operator. To directly determine the sliding length of LacI along non-specific DNA sequences in vivo, Hammar et al. made several E. coli strains containing two identical operator sequences at different interoperator distances, similar to the bulk in vitro assays. The mechanisms of protein-DNA interactions have been investigated for many years using a variety of microscopy techniques, but questions remain. The importance of these processes in the normal and pathological workings of the cell warrants the widespread interest in all methods suitable for probing protein-DNA interactions and their dynamics. In this review, we have provided a description of the single-molecule methods recently developed in this research field. The extension of these methods from the in vitro to the in vivo realm has already demonstrated great potential and will undoubtedly represent one of the most exciting areas of technological development for the study of protein-DNA interactions. 
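The partitioning between sliding and 3D diffusion reported for LacI suggests a simple back-of-the-envelope estimate of the mean search time. The sketch below is not the authors' calculation; the parameter values are assumptions loosely based on the numbers quoted above.

```python
import math

def mean_search_time_s(genome_bp, d1d_um2_s, tau_1d_s, tau_3d_s):
    """Two-state estimate of the target search time.

    Per round the protein slides for tau_1d_s, scanning roughly
    sqrt(4*D1D*tau_1d) of contour length, then spends tau_3d_s in 3D.
    Rounds needed ~ genome / scanned length (0.34 nm per bp of B-DNA)."""
    bp_per_um = 1000.0 / 0.34            # ~2940 bp per micron of contour
    scanned_bp = math.sqrt(4.0 * d1d_um2_s * tau_1d_s) * bp_per_um
    rounds = genome_bp / scanned_bp
    return rounds * (tau_1d_s + tau_3d_s)

# Assumed LacI-like numbers: E. coli genome ~4.6 Mbp, D1D ~0.4 um^2/s,
# non-specific residence ~5 ms, 3D excursions ~1 ms (so most time is on DNA)
t_search = mean_search_time_s(4.6e6, 0.4, 5e-3, 1e-3)
```

With these assumed values the estimate comes out on the order of 10^2 s, the same order of magnitude as the in vivo search times quoted above; shorter sliding excursions (smaller D1D or residence time) lengthen the search accordingly.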
"} +{"text": "Patients with metastatic melanoma have a very unfavorable prognosis with few therapeutic options. Based on previous promising experiences within a clinical trial involving carboplatin and paclitaxel, a series of advanced metastatic melanoma patients were treated with this combination. Data of all patients with cutaneous metastatic melanoma treated with carboplatin and paclitaxel (CP) at our institution between October 2005 and December 2007 were retrospectively evaluated. For all patients a once-every-3-weeks dose-intensified regimen was used. Overall and progression-free survival were calculated using the method of Kaplan and Meier. Tumour response was evaluated according to RECIST criteria. 61 patients with cutaneous metastatic melanoma were treated with CP. 20 patients (85% M1c) received CP as first-line treatment, 41 patients (90.2% M1c) had received at least one prior systemic therapy for metastatic disease. Main toxicities were myelosuppression, fatigue and peripheral neuropathy. Partial responses were noted in 4.9% of patients, stable disease in 23% of patients. No complete response was observed. Median progression-free survival was 10 weeks. Median overall survival was 31 weeks. Response, progression-free and overall survival were equivalent in first- and second-line patients. 60 of the 61 patients died after a median follow-up of 7 months. Median overall survival differed for patients with controlled disease (PR+SD) (49 weeks) compared to patients with progressive disease (18 weeks). Among patients with metastatic melanoma, a subgroup achieved disease control under CP therapy, which may be associated with a survival benefit. This potential advantage has to be weighed against considerable toxicity. Since response rates and survival were not improved in previously untreated patients compared to pretreated patients, CP should thus not be applied as first-line treatment. 
Melanoma is an increasingly common disease, and its incidence is still rising in industrialized countries with white populations. Although primary cutaneous melanomas are frequently curable by surgical excision, metastatic melanoma carries a poor prognosis, with a median survival ranging from 6 to 12 months that has not improved during the last three decades. In the US, 8700 patients are expected to die of metastatic melanoma in the year 2010. Metastatic melanoma is a solid tumour that is relatively resistant to systemic treatment. Combined chemotherapy with carboplatin and paclitaxel (CP) is a well-established treatment regimen in advanced non-small-cell lung cancer and in advanced ovarian cancer. Our patients received CP in case of tumour progression after one or more prior systemic treatments or in case of primarily rapidly progressive disease. The aim of this retrospective analysis was to investigate the effectiveness of CP in advanced melanoma patients in terms of overall survival and response and to compare the results between first- and second-line treatment. All patients with advanced metastatic melanoma of cutaneous origin receiving CP at our institution between October 2005 and December 2007 were included. Patients with melanoma of ocular origin were excluded. Approval for this retrospective analysis was obtained from the Ethics Committee Tuebingen, Germany. Patient data of our own institution were analyzed anonymously; therefore we did not obtain informed consent. This approach was in accordance with the advice of our ethics committee. Approval for this study was gained retrospectively. Based on the treatment schedule of the second-line CP plus sorafenib trial, all patients received intravenous paclitaxel 225 mg/m2 plus intravenous carboplatin at area under the curve 6 (AUC 6) on day 1 of a 21-day cycle, with a dose reduction after the fourth cycle to carboplatin AUC 5 and paclitaxel 175 mg/m2. Some patients in poor general condition or with insufficient myelofunction received a reduced dose from the start of treatment. All patients who received at least one cycle were included in the analysis. Tumour evaluation was based on CT or PET-CT scans, which were obtained after every 3rd cycle (every 9 weeks). Tumour response was evaluated using Response Evaluation Criteria in Solid Tumours (RECIST). Statistical analyses were performed with the statistical software SPSS 15.0. In order to check comparability, the first-line and second-line groups of patients were compared for the characteristics gender, age, disease classification, brain metastases, liver metastases, number of organs involved, ECOG performance status and LDH level prior to therapy. Bivariate statistical testing was performed using two-sided Chi-square tests. P-values of less than 0.05 were considered statistically significant. Follow-up was measured from start of treatment until death or last date of observation. Progression-free survival (PFS) was defined from start of treatment to first documented disease progression. Overall survival (OS) was defined from the start of treatment to the date of death. Non-melanoma-related deaths were included as censored events. Survival probabilities were calculated according to Kaplan-Meier and compared with log-rank test statistics. A total of 61 patients were identified for evaluation. Patient characteristics are shown in the table. The majority of patients (82%) received the full dosage at the start of CP treatment; 18% received the already reduced level (carboplatin AUC 5 and paclitaxel 175 mg/m2). The median number of cycles of therapy delivered was four (range 1 to 20). 13 patients (21.3%) received only one cycle of therapy due to clinical disease progression, intolerability or death. Dose-limiting toxicities (grade III and IV) were myelosuppression and peripheral neuropathy. Other frequent toxicities included alopecia and fatigue. 
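The survival quantities used above (OS, PFS, censoring of non-melanoma deaths) follow the standard Kaplan-Meier product-limit construction. A minimal sketch with made-up follow-up times; the actual analysis was performed in SPSS:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate.

    times: follow-up durations (e.g. weeks); events: 1 = death observed,
    0 = censored (alive at last observation, or a non-related death).
    Returns [(t, S(t))] at each time where at least one death occurred."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        ties = sum(1 for tt, _ in data if tt == t)
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= ties
        i += ties
    return curve

def median_survival(curve):
    """First time point at which the survival estimate falls to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached

# illustrative data: weeks of follow-up, with two censored patients
curve = kaplan_meier([5, 10, 10, 20, 30], [1, 1, 0, 1, 0])
```

Censored patients contribute to the at-risk count until their last observation but never trigger a step in the curve, which is why median survival is read off the curve rather than taken as the median of the raw follow-up times.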
In four patients (6.6%), hypersensitivity reactions to paclitaxel occurred. In all four of these patients, a rapid desensitization protocol according to a scheme proposed by Lee et al. was used to continue therapy. All 61 patients had measurable disease by RECIST criteria and were assessable for response. Response rates are shown in the table. There were no significant differences between patients receiving CP as first-line therapy and second-line therapy regarding S100 levels, LDH levels, ECOG performance status, number of organs involved and presence of brain metastases. However, the chemo-naive group of patients included significantly more male (p\u200a=\u200a0.008) and younger patients (p\u200a=\u200a0.011). Among the 22 patients with brain metastases, none showed an objective response upon treatment with CP; however, stabilisation of disease was observed in 5 patients. After a median follow-up of 7 months, 60 of 61 patients had died. Median overall survival was 49 weeks for patients with controlled disease compared to 18 weeks (IQR\u200a=\u200a10\u201335) for patients with progressive disease (p\u200a=\u200a0.001). There were no significant differences between patients with controlled and progressive disease regarding number of organs involved, presence of brain metastases, presence of liver metastases, age, gender and S100 levels. In contrast, an LDH value over two-fold the upper normal limit at the start of CP treatment was significantly associated with progressive disease during therapy (p\u200a=\u200a0.009). Decreasing or constant LDH levels under therapy were associated with prolonged overall survival (p\u200a=\u200a0.002). The present patient collective consisted of patients with clearly progressive metastatic melanoma presenting with widespread metastatic disease. Two thirds of patients had already received first-line chemotherapy, mainly consisting of dacarbazine-based treatments. 
First-line patients with extensive metastatic disease were primarily treated with CP because the treating physicians felt that they would be unlikely to respond to dacarbazine. Response to CP treatment was low, with five percent partial responses in both first-line and second-line treatment situations. However, 23% of patients achieved stable disease, which apparently contributed similarly to the prolongation of survival. Thus, temporary disease control was attained in 28% of patients and seemed to be associated with a prolongation of survival to 49 weeks as compared to 18 weeks in patients with progressive disease. The CP regimen showed transient disease stabilisations, but neither complete responses nor long-term durable responses were accomplished. In one patient the treatment with CP enabled a complete resection of remaining metastases, but the disease recurred afterwards. The only patient still alive achieved stable disease under CP treatment and was included in a clinical trial with an anti-CTLA-4 antibody, then achieved a CR and has no evidence of disease to date. Toxicities were manageable in all cases, but dose-limiting toxicities like myelosuppression and peripheral neuropathy occurred, as already described in other tumour entities. Several studies have investigated the combination of CP in metastatic melanoma with different treatment schedules, results and conclusions. In the current study, first-line patients were additionally included. Survival curves for OS and PFS were remarkably identical for the cohorts of patients receiving CP as first- and second-line treatment. It is noteworthy that only patients with rapidly progressive disease in the first-line situation were included in this protocol. The only observed significant difference between patients with controlled and progressive disease was the level of LDH at treatment start. LDH may therefore be considered a predictive factor for response. 
It seems more likely that factors like tumour load (associated with LDH level) and the number of organs involved, rather than the aggressiveness and sequence of the applied chemotherapeutic schedules, predict treatment responses. CP therapy in metastatic melanoma has cytostatic effects, achieving disease control for limited time periods in about one third of treated patients. Complete remissions or durable responses have not been accomplished. It does not appear to be a better alternative to dacarbazine treatment in the first-line setting, and should preferentially be applied as second-line treatment. A response to therapy may be associated with prolonged overall survival. The indication for CP therapy has to be considered on an individual basis and weighed against considerable toxicity."} +{"text": "We propose a model that explains the reliable emergence of power laws during the development of different human languages. The model incorporates the principle of least effort in communications, minimizing a combination of the information-theoretic communication inefficiency and direct signal cost. We prove a general relationship, for all optimal languages, between the signal cost distribution and the resulting distribution of signals. Zipf\u2019s law then emerges for logarithmic signal cost distributions, which is the cost distribution expected for words constructed from letters or phonemes. Zipf\u2019s law postulates a power-law distribution for languages with a specific power-law exponent \u03b2, so that if st is the t-th most common word, then its frequency is proportional to t\u2212\u03b2, with \u03b2 \u2248 1. Empirical data suggest that the power law holds across a variety of natural languages, but the exponent can vary, depending on the language and the context; when Zipf\u2019s law is instead written as a power-law distribution over word frequencies, the usual value is \u03b2 \u2248 2. 
Several papers suggest that Zipfian distributions can arise even from processes that randomly produce words. If we reject this idea, then the next logical step is to ask which models can produce such distributions while agreeing with our basic assumptions about language. Mandelbrot proposed an early model along these lines. An alternative model by Ferrer i Cancho and Sol\u00e9 follows the principle of least effort in communication. Thus, to our knowledge, the question of how to achieve power laws in human language from the least effort principle is still not satisfactorily solved. Nevertheless, the idea of deriving Zipf\u2019s law from a trade-off between the efforts of speaker and listener remains attractive. We should also point out that a power law is often not the best fit to real data. Another important consideration is that there may in general be multiple mechanisms generating power laws, and one cannot necessarily reconstruct post hoc which mechanism resulted in the observed power law. We believe, however, that it is nevertheless useful to develop a mathematically rigorous version of such a mechanism applicable to languages in particular, as it would provide additional explanatory capacity in analyzing structures and patterns observed in languages. The resulting insights may be of interest beyond the confines of power-law structures and offer an opportunity to study optimality conditions in other types of self-organizing coding systems, for instance in the case of the genetic code. We will use a model, similar to that used by Ferrer i Cancho and Sol\u00e9, which considers a set of n signals S and a set of m objects R. Signals are used to reference objects, and a language is defined by how the speaker assigns signals to objects, i.e. by the relation between signals and objects. The relation between S and R in this model can be expressed by a binary matrix A, where an element ai,j = 1 if and only if signal si refers to object rj. The model allows one to represent both polysemy, where one signal refers to multiple objects, and synonymy, where multiple signals refer to the same object. 
The relevant probabilities are then defined as follows: the joint probability is p(si, rj) = ai,jp(rj)/\u03c9j, where \u03c9j is the number of synonyms for object rj, that is \u03c9j = \u2211iai,j. Thus, the probability of using a synonym is equally distributed over all synonyms referring to a particular object. Importantly, it is also assumed that si leaves little ambiguity as to what object rj is referenced, so there is little chance that the listener misunderstands what the speaker wanted to say. In the model of Ferrer i Cancho and Sol\u00e9, the cost for the listener of interpreting signal si is expressed by the conditional entropy HR\u2223S(si), while the effort for the speaker is expressed by the entropy HS. The two efforts are combined, via a parameter 0 \u2264 \u03bb \u2264 1, into an energy function that a communication system must minimize: \u03a9(\u03bb) = \u2212\u03bbI + (1 \u2212 \u03bb)HS, where the mutual information I = HR \u2212 HR\u2223S captures the communication efficiency, i.e. how much information the signals contain about the objects. This energy function accounts for subtle communication efforts, since HS is arguably both a source of effort for the speaker and the listener, because word frequency affects not only word production but also recognition of spoken and written words. The mutual information I also implicitly accounts for both HS\u2223R (a measure of the speaker\u2019s effort of coding objects) and HR\u2223S (a measure of the listener\u2019s effort of decoding signals). It is easy to see that minimizing \u2212I is equivalent to minimizing HR\u2223S whenever HR is constant, e.g. under the uniformity condition. We propose instead another cost function that not only produces optimal languages exhibiting power laws, but also retains the clear intuition of generic energy functions, which typically reflect the global quality of a solution. Firstly, we represent the communication inefficiency by the information distance given by the Rokhlin metric, HS\u2223R + HR\u2223S. This metric has advantages over I in measuring the \u201cdisagreements\u201d between variables, especially in the case when one information source is contained within another. Secondly, we define the signal usage effort by introducing an explicit cost function c(si), which assigns each signal a specific cost. The signal usage cost for a language is then the weighted average of this signal-specific cost, \u2211ip(si)c(si). The overall cost function for a language combines the two efforts via a parameter 0 < \u03bb \u2264 1 as follows: \u03a9\u03bb(p) = \u03bb(HS\u2223R + HR\u2223S) + (1 \u2212 \u03bb)\u2211ip(si)c(si), where p = p(si, rj) is the joint probability. A language can be optimized for different values of \u03bb, weighting the respective costs. The extreme case (\u03bb = 0), with only the signal usage cost defining the energy function, is excluded, while the opposite extreme (\u03bb = 1), focusing on the communication inefficiency, is considered. Following the principle of least effort, we aim to determine the properties of those languages that have minimal cost according to this function. First of all, we establish that all local minimizers, and hence all global minimizers, of the cost function have a particular form. Theorem 1. Each local minimizer of the function \u03a9\u03bb, 0 < \u03bb \u2264 1, can be represented as a function f : R \u2192 S, with the joint probability p given in terms of f: p(si, rj) = p(rj) if si = f(rj) and 0 otherwise. The representation by a function f precludes multiple signals s referring to the same object r; that is, each column in the minimizer matrix has precisely one non-zero element. Polysemy is allowed within the solutions. We need the following lemma as an intermediate step towards deriving the analytical relationship between the specific word cost c(s) and the resulting distribution p(s). Lemma 2. For each solution p minimizing the function \u03a9\u03bb, HS\u2223R = 0, while HR = 1 under the uniformity constraint. Corollary 3. If n = m, then HR\u2223S + HS\u2223R = 0 and HR = 1. Using this lemma, and noting that each such solution represented as a function f : R \u2192 S has the property HS\u2223R = 0, we reduce the problem to a variation over the marginal probabilities: varying \u03a9\u03bb with respect to p(si), under the constraint \u2211ip(si) = 1, yields the extremality condition. 
The minimum is achieved whenmi such that \u2211mi = m. The last condition ensures that the minimal solutions p(si) correspond to functions p . In other words, the marginal probability p that represents a minimizer matrix under the uniformity constraint Varying with respect to p(si) = \u03bae\u03b2c(si)\u2212 which would then allow for arbitrary cost functions c(s).Under the condition c(si), while the parameter \u03b2 is, thermodynamically, the inverse temperature. It is well-known that the Gibbs measure is the unique measure maximizing the entropy for a given expected energy, and appears in many solutions outside of thermodynamics [Interestingly, the optimal marginal probability distribution dynamics \u201325.\u03bb = 0.5, and n = m, the solution simplifies to \u03b2 = 1 and Let us now consider some special cases. For the case of equal effort, i.e. c(si) = ln \u03c1i/N, where \u03c1i is the rank of symbol si, and N is a normalization constant equal to \u03c1i/N = m). In this case, the optimal solution is attained when\u03b2, specified by \u03b2 depends on the system\u2019s size (n and m) and the efforts\u2019 trade-off \u03bb. Importantly, this derivation shows a connection between scaling in languages and thermodynamics: if the signal usage cost increases logarithmically, then the scaling exponent of the resulting power law is given by the corresponding inverse temperature.Another important special case is given by the cost function \u03b2 = 1) is then nothing but a special case for systems that satisfy \u03bb = 0.5. 
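The claimed link between a logarithmic signal cost and a power law can be checked directly. In this sketch (our own, with an arbitrary β and 1000 ranked signals), a Gibbs distribution over costs c(ρ) = ln ρ is exactly the power law ρ^(−β):

```python
import numpy as np

beta = 1.0                      # inverse temperature / power-law exponent
rho = np.arange(1, 1001)        # cost ranks of the signals
c = np.log(rho)                 # logarithmic signal usage cost
p = np.exp(-beta * c)
p /= p.sum()                    # normalized Gibbs distribution

# p(rho) is proportional to rho**(-beta): a power (Zipf) law in rank.
ratio = p * rho**beta
assert np.allclose(ratio, ratio[0])
```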
The importance of equal cost was emphasized in earlier works [n or m), and so the resulting power law “adapts” to linguistic dynamics and language evolution in general. In summary, the derived relationship expresses the optimal probability in terms of the usage cost c(s), yielding Zipf’s law when this cost is logarithmically distributed over the symbols. To explain the emergence of power laws for signal selection, we need to explain why the cost of the signals would increase logarithmically when the signals are ordered by their cost rank. This can be motivated, across a number of languages, by assuming that signals are in fact words, which are made up of letters from a finite alphabet or, in regard to spoken language, of phonemes from a finite set. To illustrate, let us assume that each letter (or phoneme) has an inherent cost approximately equal to a unit letter cost. Furthermore, assume that the cost of a word roughly equals the sum of its letter costs. A language with an alphabet of size a then has a one-letter words with an approximate cost of one, a2 two-letter words with an approximate cost of two, a3 three-letter words with a cost of three, et cetera. If we rank these words by their cost, their cost will increase approximately logarithmically with their cost rank. Varying λ yields power laws, where β varies with changes in the weighting factor. This is in contrast to the model in [β values that deviate from the β value of their base language, which could indicate that the effort of language production or communication efficiency is weighted differently in these cases, resulting in different optimal solutions, which are power laws with other values for β. This signal usage cost can be interpreted in different ways. In spoken language it might simply be the time needed to utter a word, which makes it a cost both for the listener and the speaker.
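The word-enumeration argument above can be checked concretely, assuming (as in the text) a unit cost per letter and taking an alphabet of a = 26 as an example:

```python
a = 26                                # alphabet size (letters or phonemes)
costs = []
for length in range(1, 4):            # enumerate words by increasing length
    costs += [length] * a**length     # a**L words of length L, each costing ~L

# The rank-k word costs roughly log_a(k): cost steps up by one each time
# the rank passes a + a**2 + ... + a**L, the number of words of cost <= L.
assert costs[25] == 1                 # rank 26: last one-letter word
assert costs[26] == 2                 # rank 27: first two-letter word
assert costs[702] == 3                # rank 703: first three-letter word
```

The cost of the rank-k word therefore grows like log_a(k), which is exactly the logarithmic cost profile needed for Zipf's law.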
In written language it might be the effort to write a word, or the bandwidth needed to transmit it, in which case it is a speaker cost. On the other hand, if one is reading a written text, then the length of the words might translate into \u201clistener\u201d cost again. In general, the average signal usage cost corresponds to the effort of using a specific language to communicate for all involved parties. This differs from the original least effort idea, which balances listener and speaker effort . In our model in , where pmodel in Cancho dIcost = \u2212HS + \u2329log s\u232a + log N, where \u2329log s\u232a = \u2211p(si)log(si), and log(si) is interpreted as the logarithm of the index of si . Their argument that this cost function follows from a more general cost function HR\u2223S = \u2212I + HR, where HR is constant, is undermined by their unconventional definition of conditional probability (cf. Appendix A [N(s) is the number of objects to which signal s refers. This definition not only requires some additional assumptions in order to make p(r\u2223s) a conditional probability, but also implicitly embeds the \u201ccost\u201d of symbol s within the conditional probability p(r\u2223s), by dividing it by s. Thus, we are left with the cost function Icostper se, not rigorously derived from a generic principle, and this cost function ignores joint probabilities and the communication efficiency in particular.We noted earlier that there are other options to produce power laws, which are insensitive to the relationship between objects and signals. Baek et al. obtain apendix A ). SpecifHS subject to a constraint \u2329log s\u232a = \u03c7, for some constant \u03c7. 
Again, this maximization produces a power law, and again we may note that the cost function and the constraint used in the derivation do not capture communication efficiency or trade-offs between speaker and listener, omitting joint probabilities as well.A very similar cost function was offered by Visser , who sugHS + \u2329log s\u232a is equivalent to the cost function HR\u2223S \u2212 HS\u2223R + \u2329log s\u232a, under constant HR. This expression reveals another important drawback of minimizing \u2212HS + \u2329log s\u232a directly: while minimizing HR\u2223S reduces the ambiguity of polysemy, minimizing \u2212HS\u2223R explicitly \u201crewards\u201d the ambiguity of synonyms. In other words, languages obtained by minimizing such a cost directly do exhibit a power law, but mostly at the expense of potentially unnecessary synonyms.Finally, we would like to point out that the cost function \u2212conventionality and contrast (\u201cspeakers take every difference in form to mark a difference in meaning\u201d) combine in providing some precedence to semantic overlaps, leading children to eventually accept the parents\u2019 word for a semantically overlapping concept. The principle of transparency explains how a preference to use a more transparent word helps to reduce ambiguity in the lexicon. It has also been recently shown that the exponent of Zipf\u2019s law (when rank is the random variable) tends to decrease over time in children [There may be a number of reasons for the avoidance of synonyms in real languages. While an analysis of synonymy dynamics in child languages or aphasiacs is outside of scope of this paper, it is worth pointing out that some studies have suggested that the learning of new words by children is driven by synonymy avoidance . As the children . The stuRegarding synonyms it should also be noted, that while they exist, their number is usually comparatively low. If we are looking at a natural language, which might have ca. 
100,000 words, we will not find a concept that has 95,000 synonyms. Most concepts have synonyms in the single digits, if they have any. Models that look only at the output distribution could produce languages with such an excessive number of synonyms. In our model the ideal solution has no synonyms, but existing languages, which are constantly adapting, could be seen as close approximations in which, out of 100,000 possible synonyms, most concepts have only very few, if any. As noted earlier, while precise logarithmic cost functions would produce perfect power-law distributions, natural languages do not fit Zipf’s law exactly but only approximately. These observations support our conjecture that, as languages mature, the communicative efficiency and the balance between the speaker’s and listener’s efforts become more significant drivers, and so the simplistic cost function −HS + ⟨log s⟩ can no longer be justified. The cost function HR∣S + HS∣R + ⟨log s⟩ reduces to −HS + ⟨log s⟩ only after minimizing over the joint probabilities p. Importantly, it captures communication (in)efficiency and average signal usage explicitly, balancing out different aspects of the communication trade-offs and representing the concept of least effort in a principled way. The resulting solutions do not contain synonyms, which disappear at the step of minimizing over p, and so correspond to “perfect”, maximally efficient and balanced, languages.
The fact that even these languages exhibit power (Zipf\u2019s) laws is a manifestation of the continuity of scale-freedom in structuring of languages, along the refinement of cost functions representing the least effort principle: as long as the language develops closely to the optima of the prevailing cost function, power laws will be adaptively maintained.In contrast, the cost function proposed in this paper In conclusion, our paper addresses the long-held conjecture that the principle of least effort provides a plausible mechanism for generating power laws. In deriving such a formalization, we interpret the effort in suitable information-theoretic terms and prove that its global minimum produces Zipf\u2019s law. Our formalization enables a derivation of languages which are optimal with respect to both the communication inefficiency and direct signal cost. The proposed combination of these two factors within a generic cost function is an intuitive and powerful method to capture the trade-offs intrinsic to least-effort communication.Theorem 1.Each local minimizer of the functionwhereandis specified by the\u03bb \u2264 1, can be represented as a functionf : R \u2192 Ssuch thatIn order to prove this theorem, we establish a few preliminary propositions (these results are obtained by Nihat Ay).The extreme points of Proposition 2.The set wherefis a functionR \u2192 S.Proof. Consider the convex setf : j \u21a6 i. More precisely, each extreme point has the structure\u03c6 : A = (ai\u2223j)i,j to the probability vector\u03c6((1 \u2212 t) A + tB) = (1 \u2212 t) \u03c6(A) + t\u03c6(B). Therefore, the extreme points of S = {s1, \u2026, sn} of signals with n elements and the set R = {r1, \u2026, rm} of m objects, and denote with S \u00d7 R) the set of all probability vectors p, 1 \u2264 i \u2264 n, 1 \u2264 j \u2264 m. 
We define the following functions on P(S × R). Proposition 3. All three functions HR∣S, HS∣R, and ⟨c⟩ that are involved in the definition of the cost are concave in p. Furthermore, the restriction of HS∣R to the set is strictly concave. Proof. The statements follow from well-known convexity properties of the entropy and the relative entropy. (1) Concavity of HR∣S: we rewrite the function HR∣S as a relative-entropy expression; concavity of HR∣S then follows from the joint convexity of the relative entropy. (2) Concavity of HS∣R: the concavity of HS∣R follows by the same arguments as in (1). We now prove the strict concavity of its restriction; this follows from the strict concavity of the Shannon entropy. (3) Concavity of ⟨c⟩: this simply follows from the fact that ⟨c⟩ is an affine function and therefore concave and convex at the same time. With a number 0 < λ ≤ 1, we now consider the function Corollary 4. For 0 ≤ λ ≤ 1, the function is concave in p, and, if λ > 0, its restriction to the convex set is strictly concave. We have the following direct implication of Corollary 4. Corollary 5. Let 0 < λ ≤ 1 and let p be a local minimizer of the map. Then p is an extreme point. Proof. This directly follows from the strict concavity of this function. Together with Proposition 2, this implies Theorem 1, our main result on minimizers of the restricted cost function. From Proposition 2 it follows that the extreme points lie in the image of φ ∘ ı. Furthermore, Corollary 5 implies that all local, and therefore also all global, minimizers lie in the image of φ ∘ ı.
The previous work of Ferrer i Cancho and Sole [We finish this analysis by addressing the problem of minimizing and Sole refers tCorollary 6.A pointp \u2208 is a global minimizer ofif and only if it is in the image of\u03c6 \u2218 \u0131and (\u03c6 \u2218 \u0131)\u22121(p) globally minimizes"} +{"text": "This was prevented by co-incubation of STBEV-incubated arteries with LOX-1 blocking antibodies . Pre-incubation of the vessels with a nitric oxide synthase inhibitor (L-NAME) demonstrated that the STBEV-induced impairment in vasodilation was due to decreased nitric oxide contribution , which was abolished by LOX-1 blocking antibody . In STBEV-incubated vessels, LOX-1 inhibition resulted in an increased endothelial nitric oxide synthase expression (p<0.05), to a level similar to control vessels. The oxidant scavenger, superoxide dismutase, did not improve this impairment, nor were vascular superoxide levels altered. Our data support an important role for STBEVs in impairment of vascular function via activation of LOX-1 and reduced nitric oxide mediated vasodilation. Moreover, we postulate that the LOX-1 pathway could be a potential therapeutic target in pathologies associated with vascular dysfunction during pregnancy.Syncytiotrophoblast extracellular vesicles (STBEVs) are placenta derived particles that are released into the maternal circulation during pregnancy. Abnormal levels of STBEVs have been proposed to affect maternal vascular function. The lectin-like oxidized low-density lipoprotein receptor-1 (LOX-1) is a multi-ligand scavenger receptor. Increased LOX-1 expression and activation has been proposed to contribute to endothelial dysfunction. As LOX-1 has various ligands, we hypothesized that, being essentially packages of lipoproteins, STBEVs are able to activate the LOX-1 receptor thereby impairing vascular function via the production of superoxide and decreased nitric oxide bioavailability. 
Uterine arteries were obtained in late gestation from Sprague-Dawley rats and incubated for 24h with or without human STBEVs in the absence or presence of a LOX-1 blocking antibody. Vascular function was assessed using wire myography. Endothelium-dependent maximal vasodilation to methylcholine was impaired by STBEVs (MCh E They are variable in size, ranging from smaller exosomes and ectosomes 50\u2013150 nm) to larger extracellular vesicles (100 nm\u20141 \u03bcm) 50 nm to , 3. Whilfunction while otfunction . Althougin vitro, which was LOX-1 dependent [The lectin-like oxidized low-density lipoprotein receptor-1 (LOX-1) is the main receptor involved in the uptake of oxidized low-density lipoprotein (oxLDL) and it has been well-studied in cardiovascular diseases such as atherosclerosis and has ependent , 18. Furependent . Moreoveependent .In addition to oxLDL, many other factors have been shown to be ligands for LOX-1 such as: other modified lipoproteins, activated platelets, apoptotic cells and even bacteria , 20. As et al. [-1 and frozen until their use in experiments. Written informed consent was obtained.All animal experiments were conducted at the University of Alberta, Canada, and were approved by the University of Alberta Health Sciences Animal Policy and Welfare Committee in accordance with the Canadian Council on Animal Care Guidelines (AUP #242). The study protocol for human placentae was approved by the Oxfordshire Research Ethics Committee C and STBEV isolations were conducted in Prof. Ian Sargent\u2019s laboratory at Oxford University, U.K. STBEVs were derived from the placenta according to their standard methods described in detail in the manuscript by Dragovic et al. . In shoret al. . Pelletsad libitum access to food and water. The presence of sperm in a vaginal smear following overnight mating with a male rat was designated as gestational day 0 of pregnancy. 
On gestational day 20, rats were sacrificed by exsanguination under inhaled isoflurane anesthesia. Main branch uterine arteries were isolated and cut into 2 mm pieces without side branches. Multiple 2 mm uterine artery segments were incubated for 24 hours at 4\u00b0C (as adapted from similar experiments published by others [-1), 3) STBEVs (200 \u03bcg ml-1 in PSS), or 4) STBEVs (200 \u03bcg ml-1 in PSS) together with LOX-1 blocking antibodies . The STBEV concentration was based on previous studies [Three-month-old female Sprague Dawley rats were housed under a standard day:night cycle (10:14 hours) with y others ) in each studies . There w-1, Sigma-Aldrich; with washout between doses) and once to methylcholine (MCh) following the second phenylephrine dose, to ensure intact endothelial and smooth muscle function. To assess the NO contribution to vasodilation, arteries from each experimental group were pre-incubated for 30 minutes with or without N-nitro-l-arginine methyl ester hydrochloride . To assess the influence of superoxide production on vascular function, control and STBEV incubated arteries were pre-incubated for 30 minutes with or without superoxide dismutase . Following incubation, arteries were pre-constricted with phenylephrine (3 \u03bcmol L-1) and vasodilator responses to MCh (0.1 nmol L-1 to 100 \u03bcmol L-1) were measured. Finally, to investigate endothelium-independent vasodilator function, arteries were pre-constricted with phenylephrine (3 \u03bcmol L-1) and responses to the exogenous NO donor sodium nitroprusside were assessed.After 24 hours of incubation, segments of uterine artery were mounted on a wire myograph . Arteries were twice exposed to phenylephrine (10 \u03bcmol L-1) was added for 30 minutes at 37\u00b0C. 
Afterwards, sections were washed thrice with HBSS (2 minutes each), covered with a coverslip, and fluorescent images were taken immediately.Frozen sections of uterine artery (n = 8) were cut into 9 \u03bcm sections and stained for the presence of superoxide using dihydroethidium staining. DHE reacts with superoxide to produce ethidium, which generates a red fluorescence that can be quantified. In short, arterial sections were thawed to room temperature for one minute and washed three times with Hanks\u2019 Balanced Salt Solution for 2 minutes each. Sections were then incubated with HBSS for 10 minutes at 37\u00b0C; which was then removed and diluted DHE solution expression, nitrotyrosine levels and LOX-1 expression in frozen uterine artery sections (9 \u03bcm) were measured using immunofluorescent staining. In short, sections were fixed in ice-cold acetone (-20\u00b0C) for 10 minutes and allowed to dry for another 10 minutes. Sections were washed 3 times for 5 minutes with phosphate buffered salt solution and incubated with blocking solution (2% BSA in PBS) for 60 minutes at room temperature. Subsequently, the blocking solution was aspirated and sections were incubated with anti-eNOS antibodies , anti-nitrotyrosine antibodies or anti-LOX-1 antibodies in 2% BSA in PBS overnight at 4\u00b0C. The next day, sections were washed with PBS 3 times for 5 minutes and incubated with secondary goat-anti-rabbit Alexa Fluor 546 (Cy3 wavelength) labeled antibodies in 2% BSA in PBS for 60 minutes at room temperature in the dark. Sections were then washed with PBS 3 times for 5 minutes, mounting medium with DAPI was added and sections were covered and allowed to dry. Images were taken on the following day.Images of DHE, eNOS, nitrotyrosine and LOX-1 stained sections of uterine artery were taken using an Olympus IX81 fluorescence microscope with cellSens Dimensions software (Olympus). 
DHE, eNOS, nitrotyrosine and LOX-1 mean staining intensities of the whole vessel were analyzed using ImageJ software. When two arterial segments were present in a sample, the average of the two mean intensities was taken. Statistical analyses were performed using GraphPad Prism software 6.0f . All data were tested for normality using the Shapiro-Wilk normality test. Myography data were summarized as percent maximal vasodilation or area under the curve (AUC) and presented as mean ± standard error of the mean. Statistical analysis was performed for comparisons between control arteries and arteries exposed to STBEVs in the absence or presence of the LOX-1 blocking antibody or pegSOD using a two-way ANOVA with Bonferroni multiple comparisons post hoc test. The contribution of NO to endothelial vasodilation was quantified by calculating the delta change in AUC between arteries exposed to L-NAME and the controls, and compared between the groups using a one-way ANOVA and Dunnett’s post hoc test. Comparisons of DHE staining between control arteries and arteries exposed to STBEVs or STBEVs + LOX-1 blocking antibody were analyzed using a nonparametric Kruskal-Wallis test with Dunn’s post hoc analysis. Comparisons of vascular responses to SNP and eNOS expression between control arteries and arteries exposed to STBEVs or STBEVs + LOX-1 blocking antibody were analyzed using a one-way ANOVA with Dunnett’s post hoc test. For all statistical tests, differences were considered significant if p<0.05. All drugs used for myography protocols were purchased from Sigma-Aldrich . The STBEVs were collected in Prof. Ian Sargent’s laboratory and were derived according to their standard methods. The LOX-1 blocking antibodies (TS20) were developed and supplied by Prof. Sawamura’s laboratory. DHE was purchased from Biotum, Inc., Hayward . HBSS was purchased from Life Technologies .
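As a rough illustration of the statistical pipeline described above, the sketch below runs a one-way ANOVA and a Kruskal-Wallis test with scipy on invented vasodilation values; none of the numbers, group sizes or effect sizes come from the study, and Prism's post hoc procedures are not reproduced here.

```python
import numpy as np
from scipy import stats

# Invented % maximal vasodilation values, one per artery, in three groups.
control      = np.array([92.0, 88.0, 95.0, 90.0, 91.0])
stbev        = np.array([70.0, 65.0, 72.0, 68.0, 74.0])
stbev_lox1ab = np.array([89.0, 85.0, 93.0, 88.0, 90.0])

# One-way ANOVA across the groups (a post hoc test such as Dunnett's
# would then localize which group differs from control).
f, p_anova = stats.f_oneway(control, stbev, stbev_lox1ab)

# Nonparametric Kruskal-Wallis test, as used for the DHE staining data.
h, p_kw = stats.kruskal(control, stbev, stbev_lox1ab)
print(f"ANOVA p = {p_anova:.2g}, Kruskal-Wallis p = {p_kw:.2g}")
```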
The eNOS antibodies were obtained from Santa Cruz Biotechnologies, and the secondary goat-anti-rabbit Alexa Fluor 546-labeled antibodies from Molecular Probes/Thermo Fisher Scientific .Maximal MCh-induced vasodilation was reduced in STBEV-incubated uterine arteries . The addInhibition of nitric oxide synthase (NOS) by L-NAME reduced maximal vasodilation to MCh in both control and STBEV-exposed (in the absence or presence of LOX-1 blocking antibodies) vessels . The conTo distinguish whether the effects of STBEVs on NO-mediated vasodilation was the result of effects on altered endothelial or vascular smooth muscle function, we analyzed vascular responses to SNP, an exogenous NO donor. We found that vascular (smooth muscle) responses to SNP were not significantly different between control arteries and those exposed to STBEVs, or STBEVs in the presence of LOX-1 inhibition .LOX-1 receptor activation has been shown to increase superoxide production in diseased vasculature , 14. ConLOX-1 staining was performed on uterine artery sections to check whether STBEV stimulation had any effect on LOX-1 expression. No changes were observed in LOX-1 expression between the experimental groups [-1 that we utilized in our current experiments. This higher concentration of STBEVs for a shorter (24 hours) duration of exposure was used to enable us to assess the possible role of LOX-1 in an ex vivo bioassay and is in-line with other previous studies [et al. [During pregnancy, the maternal vasculature is constantly exposed to circulating STBEVs over several months of gestation. The concentrations of STBEVs measured in plasma from pregnant (and preeclamptic) women (20\u2013100 ng ml-1) \u201338 are l studies . We used [et al. 
) to ensu. As STBEVs are heterogeneous , it may. From a clinical perspective, endothelial dysfunction is a key point of convergence underlying many pathologies; however, the exact mechanism by which placental circulating factors affect the maternal vasculature is still under investigation. In this study, we have provided evidence that STBEVs play a role in this vascular dysfunction. STBEVs, comprising vesicles and exosomes derived from placental syncytiotrophoblasts, impaired endothelial vasodilation and were associated with reduced NO bioavailability via the LOX-1 receptor. Not only does this increase our collective understanding of the vascular pathophysiology, but it also provides insight into potential therapeutic strategies targeting the LOX-1 pathway. S1 Fig: No differences in nitrotyrosine levels were observed between any of the experimental groups. Bars represent means ± SEM; two-way ANOVA. ns = not significant. n = 6–7/group. (PDF) S2 Fig: No differences in uterine artery LOX-1 expression were found between the experimental groups. Bars represent means ± SEM; two-way ANOVA. ns = not significant. n = 6–7/group. (PDF) S1 File (ZIP)"} +{"text": "The ability to engineer the thermal conductivity of materials allows us to control the flow of heat and derive novel functionalities such as thermal rectification, thermal switching and thermal cloaking. While this could be achieved by making use of composites and metamaterials at bulk length-scales, engineering the thermal conductivity at micro- and nano-scale dimensions is considerably more challenging. In this work, we show that the local thermal conductivity along a single Si nanowire can be tuned to a desired value with high spatial resolution through selective helium ion irradiation with a well-controlled dose.
The underlying mechanism is understood through molecular dynamics simulations and quantitative phonon-defect scattering rate analysis, where the behaviour of thermal conductivity with dose is attributed to the accumulation and agglomeration of scattering centres at lower doses. Beyond a threshold dose, a crystalline-amorphous transition was observed. Manipulating the flow of heat at the nanoscale is difficult because it requires the ability to tune the thermal properties of tiny structures. Here, the authors locally change the thermal conductivity of an individual silicon nanowire by irradiating it with helium ions. The thermal conductivity of bulk materials can be engineered by various means, such as the use of composite materials. Another approach to engineering the thermal conductivity of a Si nanowire without affecting its surface morphology is ion implantation. By introducing impurity atoms into crystalline silicon nanostructures, the thermal conductivity can be tuned through enhanced phonon-defect scattering, and the damage can be annealed away as well. In this paper, we show that the thermal conductivity of an individual Si nanowire can be locally changed by irradiating it with helium ions with well-defined doses at different positions along its length (see Methods section for detailed sample preparation). Because of the small nanowire diameter (∼160 nm) and moderately high helium ion energy (30–36 keV), the helium ions pass through the nanowire with minimal forward scattering and energy loss, and their effect is reasonably uniformly distributed over the nanowire cross-section, unlike the case of a much thicker Si substrate. The thermal conductivity of each irradiated nanowire segment was then measured by an electron beam heating technique that is capable of spatially resolving the thermal conductivity along the nanowire’s length.
From this figure, we can see that as the electron beam scans from the left to right across the damaged portion, the temperature rise of the left (right) sensor, \u0394TL (\u0394TR), undergoes obvious decrease (increase), indicating that the thermal resistivity of the damaged portion is much larger than that of the intrinsic portion. This is further confirmed by the increase in slope of the Ri(x) curve within the damaged portion, where Ri(x) is the cumulative thermal resistance from the left sensor to the heating spot. It is observed that the power absorption of the nanowire from the electron beam, (\u0394TL+\u0394TR)/Rb, where Rb is the equivalent thermal resistance of the suspension beams, does not vary significantly at the damaged portion, indicating negligible material removal from inside the nanowire. The Ri(x) curves for all the portions with different doses are plotted in A is the cross-sectional area of silicon nanowire with diameter d=160\u2009nm as measured in the TEM. To minimize the error, Ri(x) curve of the damaged portion and the intrinsic portion, with the beginning and ending 50\u2009nm of these portions stripped off to avoid the non-uniformity in the dose near the boundaries arising from ion forward scattering. As the thermal conductivity is derived from the gradient (The silicon nanowire is suspended between two temperature sensors comprising platinum (Pt) loops on silicon nitride membranes, each of which is suspended by six nitride beams. \u22121\u2009K\u22121, obtained by linearly fitting several undamaged portions of the measured samples and taking the average. The amorphous limit of \u223c1.7\u2009W\u2009m\u22121\u2009K\u22121 (at 300\u2009K) is taken from the literature31Plotted in 16\u2009cm\u22122), the thermal conductivity drops rapidly initially and then tapers off as the dose increases, whereas in the regime of dose \u22653 \u00d7 1016\u2009cm\u22122, the thermal conductivity approaches an asymptotic value. 
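The extraction of a portion's thermal conductivity from the gradient of the Ri(x) curve can be sketched as follows. Only the 160 nm diameter is taken from the text; the Ri(x) profile is synthetic, generated with an assumed conductivity of 50 W m^-1 K^-1 so that the fit can be checked.

```python
import numpy as np

d = 160e-9                        # nanowire diameter (m), measured in TEM
A = np.pi * (d / 2) ** 2          # cross-sectional area (m^2)

# Synthetic cumulative thermal resistance profile Ri(x) along one portion,
# generated with an assumed kappa of 50 W/m/K plus a constant offset.
x = np.linspace(0.0, 1e-6, 20)    # heating-spot positions (m)
Ri = 2.0e6 + x / (50.0 * A)       # K/W; slope dRi/dx = 1/(kappa * A)

slope, _ = np.polyfit(x, Ri, 1)   # linear fit recovers the gradient
kappa = 1.0 / (slope * A)         # thermal conductivity (W/m/K)
print(f"kappa = {kappa:.1f} W/m/K")   # prints "kappa = 50.0 W/m/K"
```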
Of particular note is a steep jump in thermal conductivity between these two regimes, as shown in the inset of . Clearly two regimes can be identified in . The as-implanted damage was calculated by Monte Carlo simulations based on a binary collision approach (TRIM/SRIM). From the simulation results, 98% of the helium ions penetrate the sample (d = 160 nm), and one helium ion can create 33 defects on average, while the number of residual (embedded) helium atoms is much smaller than the number of damaged lattice sites. However, at room temperature, the as-produced point defects are not stable. An upper bound for the concentration of defects can be roughly estimated by assuming that 10% of the Si vacancies created by helium ions survive. For calculation purposes, we take it that the Si nanowire is circular with a diameter of 160 nm, based on our experimentally imaged and measured cross-section. The number of vacancies remaining is a function of dose, and this relation is finally described as 2.0 × 10^5 (cm^-1) × dose (cm^-2), with details discussed in . As there are about 5 × 10^22 Si atoms per cm^3, for a dose of 1 × 10^16 cm^-2 around 4% of the Si atoms are displaced, acting as scattering centres for phonons. This value provides an upper bound of the phonon scattering centres because we use the upper bound of the survival rate (10%).
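The arithmetic of this upper-bound estimate is easy to verify. The sketch below uses the constants as we read them from the text (a surviving-vacancy relation of 2.0 × 10^5 cm^-1 per unit dose and a Si atomic density of 5 × 10^22 cm^-3):

```python
# Upper-bound estimate assuming 10% of the created Si vacancies survive.
dose = 1e16                  # helium ions per cm^2
n_vac = 2.0e5 * dose         # surviving vacancies per cm^3 (text's relation)
n_si = 5.0e22                # Si atomic density, atoms per cm^3
frac = n_vac / n_si
print(f"{frac:.0%} of Si atoms displaced")   # -> 4% of Si atoms displaced
```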
Moreover, there is a consensus view that the stable defects left in room-temperature silicon are predominantly divacancies. The inverse phonon lifetime (phonon relaxation rate) contributed by substitutional point defects for a simple cubic lattice is τ^-1 = (c a^3 ω^4 / 4π v^3)(ΔM/M)^2, where c is the point defect concentration per atom, ΔM the mass difference between the substitutional atom and the host atom, M the mass of the host atom, a the lattice constant and v the phonon group velocity. For example, point defects introduced into the Bi2Te3 lattice by alloying with InSb can substantially reduce the lattice thermal conductivity (by ∼80%). The thermal conductivity can be estimated from kinetic theory as κ = C v^2 τ / 3, where C is the specific heat, v is the average group velocity assuming a linearized phonon dispersion model and τ is the phonon relaxation time. We further assume that τ^-1 can be calculated from Matthiessen’s rule, which implies that different phonon scattering mechanisms are independent of each other. As a result, the thermal resistivity (1/κ) is additive from various scattering mechanisms, and 1/κ should be linearly related to the concentration of point defects; judging from the measured κ curve as a function of dose , the point defects agglomerate due to room-temperature annealing. Last, the parameter D is about 1–3 orders of magnitude larger than the corresponding parameter for isotope scattering. It is possible to circumvent the linearized phonon dispersion assumption and extract the value of the parameter D directly . To further examine the role of point defect scattering, and to incorporate the effect of phonon-boundary scattering, NEMD simulation was carried out for qualitative comparisons with the experimental results. In the simulation, Si nanowires with various cross-sections of 3 × 3, 6 × 6 and 9 × 9 unit cells (0.543 nm per unit cell) and fixed length of 20 unit cells were considered .
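Matthiessen's rule and the kinetic-theory expression above combine as in the following sketch. The specific heat, group velocity and point-defect lifetime are illustrative assumptions (only the 160 nm boundary length scale comes from the text), not values fitted to the measurements.

```python
# Matthiessen's rule: independent scattering channels add as rates.
C = 1.66e6            # volumetric specific heat of Si (J m^-3 K^-1), assumed
v = 6.4e3             # average phonon group velocity (m s^-1), assumed

tau_boundary = 160e-9 / v      # boundary-limited lifetime ~ d / v
tau_defect = 5e-12             # hypothetical point-defect-limited lifetime (s)
tau = 1.0 / (1.0 / tau_boundary + 1.0 / tau_defect)

kappa = C * v ** 2 * tau / 3.0   # kinetic theory: kappa = C v^2 tau / 3
print(f"kappa = {kappa:.1f} W/m/K")   # prints "kappa = 94.4 W/m/K"
```

Because rates add, the combined lifetime is always shorter than either channel alone, which is why adding defects can only lower the conductivity in this picture.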
Point-defect scattering, as seen from the \u03ba curve as a function of vacancy concentration, is capable of decreasing the nanowire thermal conductivity by nearly 70%. It should be noted that under our experimental conditions of high helium ion energy, thin sample, and the same dose rate and temperature, the thermal conductivity for a particular dose is repeatable for all the samples measured. From the experimental and simulation results, we can see that in the low-dose regime, point defects are effective in changing the thermal conductivity of the helium-irradiated Si nanowire. First, the thermal conductivity initially decreases drastically as the dose (and the corresponding density of point defects) increases; however, further increasing the dose yields diminishing returns in reducing thermal conductivity, that is, to obtain the same marginal decrease of thermal conductivity, one has to at least double the dose. Second, with increasing annealing, the total thermal resistance (RT) of the nanowire became progressively smaller. The results from spatially resolved thermal conductance measurements show that this decrease of RT is due to the increase of the irradiated portion\u2019s thermal conductivity, while the thermal conductivity of the non-irradiated portion remained the same; after annealing, the thermal conductivity had almost recovered to that of the non-irradiated portions. This is further illustrated by the Ri(x) curve, which becomes a straight line, so that the irradiated and non-irradiated portions are indistinguishable from each other. The before- and after-annealing results show that at low doses (of order 10^14\u2009cm\u22122), nearly all the divacancies migrate to sinks and disappear, so that the thermal conductivity has practically fully recovered, while at higher doses divacancies agglomerate, forming clusters which continue to act as scattering centres, so that the thermal conductivity of these portions could not be recovered. 
It has been shown that the predominant point defects, divacancies, become mobile only at 200\u2013300\u2009\u00b0C, but at lower temperatures divacancies can be annihilated by interstitials that have dissociated from clusters and diffused through the lattice. It was previously found that by annealing a proton-irradiated Si substrate at 150\u2009\u00b0C for \u223c60\u2009h, the divacancy concentration can be reduced by 54%. Diffraction patterns were acquired at positions irradiated with doses up to 7.5 \u00d7 10^16\u2009cm\u22122, using a focused electron beam with a spot size of 200\u2009nm. The non-irradiated portion of the nanowire shows diffraction spots of single-crystal Si (inset (c)); for the portion with irradiation dose of 7.5 \u00d7 10^16\u2009cm\u22122, the nanowire is fully amorphous, leaving only short-range order among Si atoms, as indicated by the Debye-Scherrer rings. At an intermediate dose, both Debye-Scherrer rings and diffraction spots can be seen (inset (d)), indicating the coexistence of amorphous and crystalline phases. Similar diffraction patterns were observed for Sample #2. The crystalline fraction at lower doses is much higher than that at larger doses: at the dose of 2.5 \u00d7 10^16\u2009cm\u22122, both crystalline and amorphous phases coexist, but the amorphous region rapidly takes over the entire volume for higher doses, such as 3.5 \u00d7 10^16\u2009cm\u22122 and 4 \u00d7 10^16\u2009cm\u22122; at these doses, the thermal conductivity is close to the amorphous limit. Our experiment also demonstrates that helium ion irradiation can amorphize the Si nanowire without changing its morphology. Moreover, the interface between the irradiated area (for a dose of 7.5 \u00d7 10^16\u2009cm\u22122) and the non-irradiated area is particularly sharp and distinguishable, which corresponds well to the thermal resistance measurement by the e-beam technique. This clear boundary confirms the well-controlled dose irradiation with a definition of a few nanometres. 
The interface thermal resistance of these abrupt boundaries can be measured, and can be exploited to further reduce the thermal conductivity of an intrinsic nanowire by incorporating multiple such boundaries, but this is beyond the scope of the present study. This amorphization process can be visualized from the diffraction pattern of an irradiated Si nanowire prepared separately on a TEM grid. In this paper, we have demonstrated that the thermal conductivity of an individual Si nanowire can be changed by selective helium ion irradiation. A single Si nanowire was irradiated at different positions with well-controlled helium ion doses, and an electron beam heating technique was used to measure the local thermal conductivity along the nanowire, which was then related to helium ion dose. We observed a clear transition from crystalline Si to the amorphous phase at a dose between 1.5 \u00d7 10^16 and 2.5 \u00d7 10^16\u2009cm\u22122 from the thermal conductivity versus dose curve. This result suggests a novel method to amorphize a Si nanowire without affecting its morphology. Moreover, within the dose regime in which only point defects are created, the Si nanowire thermal conductivity decreases drastically as the dose increases, and merely \u223c4% defects can reduce the thermal conductivity by \u223c70%, indicating a strong phonon scattering effect by the point defects. Within this regime, the density of effective scattering centres, inferred from the parameter D, initially increases linearly with dose, and then saturates at larger doses. Finally, we observed that annealing could improve the thermal conductivity of the damaged nanowire: after annealing at 300\u2009\u00b0C for 2\u2009h, the thermal conductivity of a damaged portion irradiated with a dose of 1 \u00d7 10^15\u2009cm\u22122 recovered to the value of the non-irradiated case. 
Silicon nanowires with diameter \u223c160\u2009nm (Sigma-Aldrich 730866), grown along the [111] direction and dispersed in 2-propanol solution, were drop-cast onto a SiO2/Si substrate. A single Si nanowire was picked up from the substrate by a nano-manipulator (Kleindiek MM3A-EM) with a sharp tungsten probe and placed on a prefabricated suspended METS device. The METS device comprises two silicon nitride membranes, each of which is suspended by six long suspension beams for thermal isolation. Integrated on top of the silicon nitride platforms are Pt loops acting as resistance temperature sensors. The nanowire was positioned so as to bridge the two sensors, with the two ends fixed onto the sensors by electron beam-induced deposition (EBID) of Pt-C composite. The EBID process was carried out using a focused electron beam with an acceleration voltage of 30\u2009kV and a current of 1.3\u20135\u2009nA. The on-device Si nanowire was then cleaned using an Evactron RF plasma cleaner attached to the SEM chamber, operated at a power of 14\u2009W in 0.4\u2009Torr of air for 2\u2009h, after which it was put into a helium ion microscope (Zeiss Orion Plus) chamber and pumped overnight. It was then irradiated by helium ions with different doses at different positions. The highest dose (7.5 \u00d7 10^16\u2009cm\u22122) chosen is such that there is no significant sputtering of Si atoms, and a change in the Si nanowire diameter is not observable in a transmission electron microscope (TEM). The dose was then reduced for each irradiated position. The irradiation length and irradiation distance are different for all the measured silicon nanowires, the details of which are tabulated for each sample. Eight samples were irradiated under similar helium ion energy and current (or dose rate) at room temperature, and SEM images of samples #2 to #8 were recorded. Sample #3 was annealed in air at 120\u2009\u00b0C for cumulative periods of 20, 40, 80 and 160\u2009h, with measurement by an electron beam heating technique carried out for each interval. 
After this, it was put into a tube furnace and annealed in forming gas at 300\u2009\u00b0C for 2\u2009h. To check for possible oxidation during annealing, HRTEM and diffraction pattern measurements of the post-annealed nanowire were carried out. In the electron beam heating technique, a focused electron beam is used as a localized heating source. By moving the electron beam along the nanowire and measuring the temperature rises (\u0394TL and \u0394TR) of the left and right sensors corresponding to the position x of the electron beam, measured from the left sensor, the cumulative thermal resistance Ri(x) between the electron beam location and the left sensor can be obtained, where \u03b1i(x) = \u0394TL(x)/\u0394TR(x), and Rb is the equivalent thermal resistance of the six beams that link each suspended membrane to the environment, as measured by the conventional thermal bridge method. In that calibration, a 10\u2009\u03bcA DC current was passed through the left Pt loop, raising its temperature by \u223c8.5\u2009K, and the temperature rises of both sensors (\u0394TL0 and \u0394TR0) were measured from the resistance changes of the Pt loops; finally, \u03b10 = \u0394TL0/\u0394TR0. From the Ri(x) curve, the spatially resolved thermal conductivity of the nanowire can be calculated as \u03ba(x) = [A dRi(x)/dx]^\u22121, where A = \u03c0d^2/4 and d is the diameter of the nanowire. The data that support the findings of this study are available from the corresponding authors upon request. How to cite this article: Zhao, Y. et al. Engineering the thermal conductivity along an individual silicon nanowire by selective helium ion irradiation. Nat. Commun. 8, 15919 doi: 10.1038/ncomms15919 (2017). Publisher\u2019s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations."} +{"text": "Evaluating the similarity of different measured variables is a fundamental task of statistics, and a key part of many bioinformatics algorithms. 
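Returning to the electron beam heating technique described above: the final step, recovering a spatially resolved conductivity from the cumulative resistance profile, amounts to differentiating Ri(x) and inverting, \u03ba(x) = 1/(A dRi/dx) with A = \u03c0d^2/4. Below is a minimal numerical sketch with made-up profile data; a finite difference stands in for the curve fitting one would do with real, noisy measurements.

```python
import math

def local_kappa(x, R, diameter):
    """Spatially resolved thermal conductivity kappa(x) = 1/(A * dRi/dx).

    x: positions along the wire (m); R: cumulative thermal resistance Ri(x) (K/W);
    diameter: wire diameter (m). Returns one kappa value per segment."""
    A = math.pi * diameter ** 2 / 4.0
    kappas = []
    for i in range(len(x) - 1):
        dRdx = (R[i + 1] - R[i]) / (x[i + 1] - x[i])   # finite-difference slope
        kappas.append(1.0 / (A * dRdx))
    return kappas

# Hypothetical profile for a 160 nm wire: the middle segments (irradiated)
# have a steeper Ri(x) slope, i.e. lower local thermal conductivity.
x = [0.0, 1e-6, 2e-6, 3e-6, 4e-6]
R = [0.0, 1e6, 3e6, 5e6, 6e6]   # K/W
print(local_kappa(x, R, 160e-9))
```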
Here we propose a Bayesian scheme for estimating the correlation between different entities\u2019 measurements based on high-throughput sequencing data. These entities could be different genes or miRNAs whose expression is measured by RNA-seq, different transcription factors or histone marks whose expression is measured by ChIP-seq, or even combinations of different types of entities. Our Bayesian formulation accounts for both measured signal levels and uncertainty in those levels, due to varying sequencing depth in different experiments and to varying absolute levels of individual entities, both of which affect the precision of the measurements. In comparison with a traditional Pearson correlation analysis, we show that our Bayesian correlation analysis retains high correlations when measurement confidence is high, but suppresses correlations when measurement confidence is low\u2014especially for entities with low signal levels. In addition, we consider the influence of priors on the Bayesian correlation estimate. Perhaps surprisingly, we show that naive, uniform priors on entities\u2019 signal levels can lead to highly biased correlation estimates, particularly when different experiments have widely varying sequencing depths. However, we propose two alternative priors that provably mitigate this problem. We also prove that, like traditional Pearson correlation, our Bayesian correlation calculation constitutes a kernel in the machine learning sense, and thus can be used as a similarity measure in any kernel-based machine learning algorithm. We demonstrate our approach on two RNA-seq datasets and one miRNA-seq dataset. A fundamental task in data analysis is to assess the relatedness of different measured variables. In bioinformatics, for example, we may want to know which genes have similar patterns of expression across conditions , 2, whicHigh-throughput sequencing (HTS), of course, has become a central technology driving molecular biology research. 
For example, it was the basis for the high-profile ENCODE and modENCODE projects aimed at understanding gene regulatory networks on a large scale. It is important to realize, however, that the precision of sequencing-based measurements varies for different experiments and for different things being measured in those experiments. To understand this, consider that in many cases the outcome of a set of sequencing experiments can be summarized in a matrix of count data. The rows of the matrix correspond to different biological entities being measured. For instance, in an RNA-seq dataset, each row may correspond to a different gene whose expression is being measured, or perhaps different transcripts. In a miRNA-seq dataset, each row corresponds to a different microRNA. In a ChIP-seq dataset, each row may represent a different region of the genome. The columns of the matrix typically correspond to different conditions or factors being measured. For example, they might represent different tissues or cell types in which expression is being measured, different drug treatments, or different factors that are being assayed by ChIP-seq. Some or all of the columns might be replicate measurements of the same condition. In this paper we will use R to denote the read count matrix, with Ric denoting the counts attributed to entity i under condition c, and Rc denoting the total counts for condition c. How counts are attributed to entities depends on the nature of the data, and is not something we address. Often the counts are whole numbers, 0, 1, 2, \u2026, although sometimes fractional counts are used when attribution is uncertain. We will require only that the counts are non-negative, and we will use the term \u201ccount\u201d even if some matrix entries are fractional. From the matrix R, we can identify two important influences on precision. One is that the total reads sequenced and attributed, Rc, can vary by condition c. 
The greater Rc is, that is, the deeper the sequencing, the greater the precision with which every entity is being measured under that condition. To make an analogy, if we think of measuring a physical object with a ruler, and we can choose between a ruler with fewer marks on it and one with more (finer) gradations, then we can make more precise measurements using the ruler with more gradations. Another influence involves the absolute level of the entity being measured. For instance, imagine i is a high-count entity, say receiving Ric = 10^5 reads out of Rc = 10^6. If we were to replicate the experiment, we would not expect to see as few as 10^4 reads or as many as 10^6. In fact, we might expect only about a percentage point of variability in the measured level. In essence, because the marks on the ruler (individual reads) are of a fixed size regardless of the size of the object being measured, large objects are measured with much greater relative precision. Or, to put it yet another way, the signal-to-noise ratio is much better for high-count entities than for low-count entities. Let us consider a simple example. Suppose we have measured gene expression across three conditions, each with exactly 10^6 sequencing depth, so that normalization is not an issue. Suppose that a gene x has counts \u2026 across the three conditions, gene y has counts \u2026 and gene z has counts \u2026 . A naive Pearson correlation analysis would show that all three genes are perfectly (r = 1) correlated with each other. Yet, we know that z\u2019s measured counts are more likely to be non-representative than those of x or y. Its count of 1 under the third condition could be a fluke, as could its zero counts in the first two conditions. Thus, we should have more confidence in the xy correlation than in the xz or the yz correlation. 
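The relative-precision argument above can be made concrete with a binomial read-attribution model, in which each of the Rc reads is independently attributed to entity i with probability pic; this model is an assumption of the sketch, used only to illustrate the signal-to-noise contrast between high- and low-count entities.

```python
import math

def relative_sd(reads, total):
    """Relative standard deviation of a count under a binomial attribution model."""
    p = reads / total
    sd = math.sqrt(total * p * (1.0 - p))   # binomial standard deviation of the count
    return sd / reads

# High-count entity: 10^5 reads out of 10^6 -> well under 1% relative variability.
print(relative_sd(1e5, 1e6))
# Low-count entity: 10 reads out of 10^6 -> over 30% relative variability.
print(relative_sd(10, 1e6))
```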
More subtly, we should have more confidence in the xz correlation than in the yz correlation, because of our greater relative precision in measuring x than y, even though we should not have high confidence in either of these correlations. To solve the \u201cproblem\u201d of apparent correlations with gene z, we could adopt some heuristic rule of discarding genes with low counts. But exactly what constitutes a low count? Is it the average count across conditions, or the total read count, or the maximum count, or the difference between maximum and minimum counts? And what is the threshold for discarding or keeping a gene? Whatever threshold is chosen, there will always be some genes that only just beat the threshold, and yet those will be treated identically to genes whose measurements are of high precision. What is needed is a more principled and more graded way of discounting evidence based on the precision of the individual measurements. How do these considerations on precision influence our ability to estimate similarity between different entities across conditions? Imagine that we have measured gene expression across three different conditions with sequencing depths of exactly 10^6, 10^7 and 10^6. Further, suppose two genes, w and x, have observed counts \u2026, while two other genes, y and z, have observed counts \u2026 . The Pearson correlation of w and x is r = 1, as is the correlation of y and z. This is true with or without normalization to library size, and in fact all four genes are expressed at the same level relative to library size. Yet, intuitively, we ought to be more confident in the yz correlation than in the wx correlation, because the higher sequencing depth in condition two means its measurements have greater precision. In the analysis of gene expression in particular, there is a history of noise modeling both for individual conditions and across replicates. 
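The pathology just described is easy to reproduce. The counts below are hypothetical stand-ins (the specific values from the text are not preserved in this excerpt); gene z has zero counts in two conditions and a single read in the third, yet a naive Pearson analysis scores it identically to high-count genes with the same shape:

```python
import math

def pearson(u, v):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

# Hypothetical counts across three equal-depth conditions.
x = [1000, 1000, 2000]   # high-count gene
y = [10, 10, 20]         # medium-count gene, same shape
z = [0, 0, 1]            # near-zero gene, same shape

# All three pairs are "perfectly" correlated, despite wildly different evidence.
print(pearson(x, y), pearson(x, z), pearson(y, z))
```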
There are even gene pairs where the Pearson correlation is negative (say around \u22120.3) and the Bayesian correlation is positive (say around +0.3). For each gene i, sorted in order of increasing total read count, we computed the ratio of the mean absolute Bayesian correlation with all the other genes to the mean absolute Pearson correlation with all the other genes. Consider, as a special case, an entity with strictly zero counts in every condition: Ric = 0 for all c. As stated above, we have removed such genes from consideration, but the analysis of their Bayesian correlations is particularly clear. With a uniform prior, the posterior mean estimate of pic, given zero counts out of Rc total reads, is 1/(Rc + 2) \u2248 1/Rc. Thus, despite the lack of any evidence regarding entity i, the posterior mean estimate can go up or down depending on the sequencing depth Rc of the experiment. When two different entities, i and j \u2260 i, both have zero counts, their posterior means will be correlated whenever some of the Rc\u2019s are very different from the mean. In order to avoid the problem presented by the uniform prior, we present two possible alternatives. Clearly the process of attributing reads to entities is not one of flipping a coin: in any practical problem the number of entities m is bigger than two, so for any read, the prior probability that it comes from gene i should be much smaller than one half. Finally, we present a third prior specifically designed to avoid the problem of spurious correlations between zero-count entities: under it, if Ric = 0 for all c, or Rjc = 0 for all c, then the Bayesian correlation of i and j is suppressed. While the behavior of Bayesian correlation with the second or third priors on the Wang dataset is satisfying, it is important to know whether there is any generality to the results. To further evaluate performance, we tested the method on two additional datasets. One is an RNA-seq gene expression time series taken at four days spanning an erythropoiesis differentiation protocol, with each day sampled in duplicate. 
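The zero-count pathology under the uniform prior can be checked directly: with a Beta(1, 1) prior and a binomial likelihood, observing 0 of Rc reads gives a posterior Beta(1, Rc + 1) whose mean, 1/(Rc + 2), tracks sequencing depth rather than any property of the gene.

```python
def posterior_mean(successes, trials, alpha0=1.0, beta0=1.0):
    """Posterior mean of a binomial proportion under a Beta(alpha0, beta0) prior."""
    alpha = alpha0 + successes
    beta = beta0 + trials - successes
    return alpha / (alpha + beta)

# A strictly zero-count gene gets different posterior means in experiments of
# different depth, purely because of the uniform prior's mass.
shallow = posterior_mean(0, 1e5)   # about 1e-5
deep = posterior_mean(0, 1e7)      # about 1e-7
print(shallow, deep)
```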
The third dataset we tried is a miRNA-seq study of the expression of 1143 precursor and mature microRNAs in 172 normal and cancerous human tissues, available in Table S5 of Landgraf et al. In order to establish the generality of these observations, we analyzed Bayesian correlations with different priors theoretically. The following theorem, proved in Appendix A, shows that the second and third priors correctly suppress correlations between low-count entities for any dataset. Theorem 1: Let R be the count matrix for m entities under k conditions. Let i and j \u2260 i be two entities. Let ni = max_c Ric, and similarly for nj. Let k1, k2, k3 and k4 be, respectively, the number of conditions for which: Ric = 0 and Rjc > 0; Ric > 0 and Rjc = 0; Ric = 0 and Rjc = 0; and Ric > 0 and Rjc > 0. Then, if ni = 0 and nj = 0, the Bayesian correlation of i and j is bounded above, and if ni > 0 or nj > 0, a corresponding bound holds. Note that these bounds are small when m \u226b 1. Therefore, priors 2 and 3 both suppress the artificial correlations from strictly zero-count entities, and for low-count entities the Bayesian correlation with the third prior is provably small. In machine learning, a kernel is any way of measuring similarity between objects that can be expressed as an inner product in some feature space. Theorem 2: Given any count matrix R and entity- and condition-specific priors \u03b1ic, \u03b2ic > 0, the Bayesian correlation as defined above constitutes a kernel. Although this is true for any choice of priors, in this section we focus on the third notion of prior defined above, and compare the behaviour of Pearson and Bayesian correlations in clustering the Wang data. To further assess how the Bayesian and Pearson correlation schemes compare, we plotted the correlation coefficients between the genes in the Wang dataset (omitting genes with zero counts) and arranged them into square matrices. 
A very useful application of the kernel property of the Bayesian correlation function is its use as a similarity measure for clustering. As a demonstration, we carried out hierarchical clustering of the rows of the Wang data matrix (resulting in clustering of genes) using one minus the pairwise Bayesian correlation as the distance metric. (Our software takes distance rather than similarity matrices; the third prior was employed.) For comparison, in parallel, we also clustered the genes using one minus the Pearson correlation as the distance metric. For easy visualization, we then applied hierarchical row and column clustering to the Bayesian matrix using the Euclidean distance metric and average linkage. The Pearson matrix was then reordered to reflect the same ordering scheme as the clustered Bayesian matrix. Out of a total of 10574 genes (after pre-processing the Wang dataset as described earlier), PANTHER (http://pantherdb.org/citePanther.jsp) found matches for 10506 genes, thus ignoring 68 genes from the list. We reasoned that genes with strong tissue-specific expression patterns would likely be similar to at least some other tissue-specific genes. Thus, a Bayesian and a Pearson correlation score was assigned to each gene by finding the highest value at which the gene correlates with any other gene in the dataset. Using this score as a predictor and a moving threshold value, we built receiver operating characteristic (ROC) curves for the two correlation metrics. 
To further assess the biological relevance of the results of the Bayesian and the Pearson correlation methods, we used the Wang dataset and the corresponding correlation coefficients as scores to predict the role of a gene in a tissue-specific biological process. We downloaded the Biological Process (BP) Complete Gene Ontology (GO) terms for all the genes in the dataset from the PANTHER database. The ground truths were taken to be the genes that have at least one of the following tissue-specific word stems corresponding to the tissues in the Wang dataset: \u201cadipose\u201d, \u201cbrain\u201d, \u201cbreast\u201d, \u201cneur\u201d, \u201ccereb\u201d, \u201cheart\u201d, \u201cliver\u201d, \u201clymph\u201d, \u201cmusc\u201d, \u201ctestes\u201d, \u201chepat\u201d, \u201ccardi\u201d, \u201carter\u201d, \u201cmamm\u201d, \u201csperm\u201d, \u201cepidid\u201d, \u201cintes\u201d. From the ROC curves, it is clear that the Bayesian correlation metric serves as a better predictor for tissue-specific co-expression of genes, or in other words, for participation of a given gene in a tissue-specific biological process. Specifically, it has superior performance on the highest-scoring genes, where it has eliminated the low-expression genes to which the Pearson correlation falls prey. Define gic to be the difference between our posterior mean estimate of the true read fraction for entity i in condition c, pic, and the average of those estimates across conditions. When paired with the corresponding term for entity j, this term contributes to the covariance computation. gi is the vector of those terms across all k conditions. Vi is the variance of pic, with respect to our beliefs and across conditions. \u03d5i is the vector of deviations of our condition-specific mean estimates of pic from the overall mean estimate, normalized by the standard deviation of those estimates. 
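The ROC construction described above (a per-gene max-correlation score against a moving threshold) can be sketched with a rank-based AUC computation; the scores and labels below are hypothetical, and ties are ignored for simplicity.

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation.

    scores: per-gene predictor (e.g. max correlation with any other gene);
    labels: 1 if the gene's GO terms match a tissue-specific word stem, else 0.
    Assumes no tied scores."""
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum = 0.0
    for rank, (_, label) in enumerate(pairs, start=1):
        if label == 1:
            rank_sum += rank
    return (rank_sum - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

# Hypothetical example: tissue-specific genes (label 1) tend to score higher.
scores = [0.95, 0.90, 0.80, 0.60, 0.40, 0.30]
labels = [1, 1, 0, 1, 0, 0]
print(roc_auc(scores, labels))
```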
With this notation, the reader may verify that the Bayesian correlation in i \u2260 j, the Bayesian correlation can be written as an inner product in a suitable feature space.Our proof of the kernel property is divided in two steps. The first step concerns the correlation of two different entities as an inner product in a certain feature space. This concludes the proof that the Bayesian correlation is a kernel.If m entities and k conditions, and we start with an m \u00d7 k matrix of read counts R. The equations in the main text describe how to compute the Pearson and Bayesian correlations between a single pair of entities, A and B. However, in the experiments in this paper, and often in practice, we want to compute all pairwise correlations. We will analyze the complexity of both computations.Pearson and Bayesian correlation computations have the same order of computational complexity. Recall that we have R by its own column sum. This takes O(mk) operations. Even if we are only interested in correlating one pair of entities, the complexity is the same, because of the need of computing the row sums. Then, for each entity, we need to compute its mean read fraction across conditions, taking O(k) per entity or O(mk) for all. Similarly, we need to compute the variance of the empirical read fractions across conditions, taking O(k) per entity or O(mk) for all. For a pair of entities, computing the covariance across conditions takes O(k) time, thus O(m2k) for all pairs. Computing the correlation based on the covariance(s) and the variances takes O(1) time per pair, thus O(m2) for all pairs. 
All together, the complexity for correlating one pair of entities is O(mk) and the complexity for correlating all pairs of entities is O(m^2 k). For Bayesian correlation, the story is similar. We need to know the total read counts per condition, which takes O(mk) to obtain, so that we can compute the posterior \u03b1 and \u03b2 values, whether we are interested in a pair of entities or in all entities. Then, computing posterior means and variances takes O(k) time per entity, or O(mk) for all entities. The covariance of the posterior means takes O(k) per entity pair, and thus O(m^2 k) for all pairs. Finally, the correlation takes O(1) per entity pair or O(m^2) for all. Thus, identically to the case of Pearson correlation, we require O(mk) to correlate a single pair or O(m^2 k) to correlate them all. In matrix form, we begin with the m \u00d7 k matrices of the priors \u03b10 and \u03b20, along with a matrix Rc.s. in which each element in column c equals the sum of column c of the read count matrix R. From these, the posteriors are obtained as \u03b1 = \u03b10 + R and \u03b2 = \u03b20 + Rc.s. \u2212 R. The posterior mean matrix is M = \u03b1/(\u03b1+\u03b2), where the division is element-wise. The cross-conditional posterior means are given by the row means of M, which we then place into an m \u00d7 k matrix Mr.s. with k identical values on each row. Letting Z = M \u2212 Mr.s., the covariances are obtained as C = ZZT, where the superscript T denotes the matrix transpose. The variances can be obtained similarly. As an example, we computed all pairwise Bayesian correlations for the Wang dataset of over 10,000 genes using our R code, and did the same for the three datasets studied above. 
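The matrix recipe just described can be written out in a few lines. The sketch below follows that outline (posterior Beta parameters, element-wise posterior mean matrix M, row-centred Z, correlations from inner products of rows of Z), but it simplifies in two respects: the entity- and condition-specific prior matrices are replaced by scalar placeholders, and the posterior-variance terms of the full Bayesian correlation are omitted, so this computes correlations of posterior means only.

```python
def bayesian_mean_correlations(R, alpha0=0.01, beta0=1.0):
    """Correlations of posterior-mean read fractions for an m x k count matrix R.

    alpha = alpha0 + R, beta = beta0 + (column sums) - R, M = alpha/(alpha+beta),
    Z = M - row means, correlations from inner products of rows of Z.
    alpha0 and beta0 are scalar placeholders for the prior matrices."""
    m, k = len(R), len(R[0])
    col_sums = [sum(R[i][c] for i in range(m)) for c in range(k)]
    M = [[(alpha0 + R[i][c]) / (alpha0 + beta0 + col_sums[c]) for c in range(k)]
         for i in range(m)]
    Z = [[M[i][c] - sum(M[i]) / k for c in range(k)] for i in range(m)]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    # Guard against zero variance (a perfectly constant row of M).
    corr = [[dot(Z[i], Z[j]) / (((dot(Z[i], Z[i]) * dot(Z[j], Z[j])) ** 0.5) or 1.0)
             for j in range(m)] for i in range(m)]
    return corr

# Toy matrix: genes 0 and 1 are proportional (same shape, different depth),
# gene 2 is near zero, and gene 3 is a flat high-count background that keeps
# all read fractions small.
R = [[100, 200, 300],
     [1, 2, 3],
     [0, 0, 1],
     [10000, 10000, 10000]]
C = bayesian_mean_correlations(R)
print(C[0][1], C[0][2])
```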
S1 Code(R)Click here for additional data file."} +{"text": "We aimed to determine the association of iris color with lens thickness (LT) in a school-based sample of Chinese teenagers.In total, 2346 grade 7 students, from 10 middle schools, aged 13 to 14 years in Mojiang located in Southwestern China were included in the analysis. A grading system was developed to assess iris color based on standardized slit-lamp photographs. LT was measured by the LenStar LS900. Refractive error was measured after cycloplegia using an autorefractor and ocular biometric parameters, including axial length (AL), were measured using an IOL Master.There was a significant trend of decreasing LTs with darker iris color. On average, eyes with \u201cgrade 1\u201d (the lightest) iris color, when compared with those with \u201cgrade 5\u201d (the darkest), had greater LTs . After adjusting for other potential confounders including sex, height, and ALs in generalized estimating equation models, the trend was similar and did not change significantly. Compared with individuals with iris color of grade 1, those with grade 5 had a thinner lens of 0.1 mm in sex-adjusted model and a 0.09 mm in multivariate-adjusted model.Lighter iris color might be associated with greater LTs in Chinese teenagers. The biological mechanisms underlying the association warrant further clarification.As LT is an important refractive component, knowledge on the effect of iris color on LTs may assist in the design of novel technologies, which could control refractive development. 
There is substantial evidence linking iris color with a series of ocular conditions including age-related cataract,24 age-related macular degeneration,6 and uveal melanoma.7 In our previous work, we found that adolescents with darker iris color tended to have more myopic refractive errors and longer axial lengths (ALs).8 However, the biologic mechanisms underlying these observed associations between iris color and ocular conditions remain unclear. Iris color, one of the most obvious physical characteristics of human beings, fully develops during infancy and does not change significantly in adulthood.9 The crystalline lens is an important component of the optical system of the eye. The amounts of UV and visible light that pass through the iris and enter the nonpupillary area of the lens are determined by iris color. We hypothesize that the growth of crystalline lens thickness (LT) might be influenced by the amounts of UV and visible light passing through the iris; therefore, variations in LT may be associated with irides of different colors. In this study, we examined the relationship between iris color and variations in LT in a school-based sample of Chinese students. The results may have clinical implications regarding the potential impact of light on ocular characteristics. Understanding the inter-relationship between iris color and ocular biometric components may help to shed some light on the biologic mechanisms of ocular conditions associated with iris color. There are limited data assessing the relationship between iris color and ocular biometric components. Irides of different colors are supposed to have different filtering rates for ultraviolet (UV) radiation and for visible light of different wavelengths.11 In brief, the original study cohorts included elementary school grade 1 students and middle school grade 7 students in Mojiang, which is located in Yunnan Province in the Southwestern part of China. 
The analysis in this paper focused on grade 7 students as data on iris color were only measured in this cohort. A total of 2346 grade 7 students participated in the baseline survey with complete data obtained and the response rate of this cohort was 93.5%. There were no sex differences between participants and nonparticipants (P = 0.25).The data of this study were obtained from the baseline examinations of the Mojiang Myopia Progression Study, which is a school-based cohort study on the prevalence, incidence, and predictors of myopia in school students in rural China. The baseline examinations of this study were conducted in 2016. Detailed study protocols and some other major findings have been reported elsewhere.The Mojiang Myopia Progression Study was performed in accordance with the tenets of the Declaration of Helsinki. The study protocol was approved by the institutional review board of Kunming Medical University. Written informed consent was obtained from at least one parent or legal guardian of each participant.In this study, LT was measured by the LenStar LS900 . We followed the standardized protocols as recommended by the manufacturer. Participants were seated and their heads were stabilized using a chin rest and brow bar when measurements were performed. Study optometrists aligned the instrument using the image of the eye on the computer monitor. Participants were asked to blink just before measurements being taken and fixate on the internal fixation light during the measurements. The instrument automatically detected blinking or loss of fixation and measurements were repeated in this case. Five readings were taken for each eye and the mean of them was used in subsequent analyses.8 In brief, we obtained standardized slit-lamp photographs of anterior segment of the eye and developed a grading system assessing iris color. 
A panel of reference photographs was selected which best represented the variations in iris color observed in the sample (shown in a previous report).8 Two graders with reasonable intergrader agreement independently graded the color of all the iris photographs by comparing each photograph with the reference panel. The kappa index of the two graders was 0.74 in a pilot test. In addition, one grader repeated the grading of 50 photographs after 2 weeks to assess the intrarater agreement, which was 0.88 in this study. \u201cGrade 1\u201d denoted the lightest color while \u201cgrade 5\u201d denoted the darkest. The higher grade was assigned if a photo was considered to be between two consecutive grades. Inconsistencies between the two graders were resolved by a third grader. The detailed grading protocol of iris color has been described in our previous report on the association between iris color and refractive error.12 To measure blood pressure in a controlled environment, participants sat in a chair for 5 minutes of rest, with the right arm supported at heart level. Blood pressure was measured with a mercury column sphygmomanometer, and the first and fifth Korotkoff sounds were used to determine the systolic and diastolic blood pressure. Height was measured to the nearest 0.1 cm by a wall-mounted measuring tape. Participants stood straight, barefoot, with relaxed shoulders and their arms hanging freely. Weight was measured to the nearest 0.1 kg by a scale, with minimal clothing and without shoes. Waist circumference was measured to the nearest 0.1 cm by an inelastic measuring tape midway between the lowest rib and the superior border of the iliac crest at the end of a normal exhalation. All the anthropometric indices were measured twice and the mean value was recorded. 
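The inter- and intrarater agreement values reported above are Cohen's kappa statistics, which compare observed agreement between two sets of ratings with the agreement expected by chance. A minimal sketch of the computation is shown below; the grade lists are hypothetical illustrations, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning categorical grades."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal grade frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[g] * c2[g] for g in c1) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical iris-color grades (1 = lightest ... 5 = darkest) from two graders
g1 = [1, 2, 2, 3, 4, 5, 3, 2]
g2 = [1, 2, 3, 3, 4, 5, 3, 1]
kappa = cohens_kappa(g1, g2)
```

Kappa of 1 indicates perfect agreement; values around 0.7-0.9, as in the study, are conventionally read as substantial to almost-perfect agreement.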
Body mass index (BMI) was calculated as the weight in kilograms divided by the square of the height in meters. Refractive error was measured after cycloplegia using an autorefractor. Ocular biometric parameters such as AL, anterior chamber depth (ACD), and corneal power (CP) were measured using an IOL Master. Blood pressure was measured according to the protocol recommended by the National High Blood Pressure Education Program Working Group on children and adolescents. Data analysis was performed using SPSS version 18.0. The distribution of LTs was compared across different grades of iris color in univariate analysis. Generalized estimating equation (GEE) models with the right and left eye data combined were fitted to estimate the associations between iris color and LTs. We only adjusted for sex in the first model. In the second model, we additionally adjusted for covariates that were significantly different in univariate comparison (P < 0.10). Subgroup analyses were performed to examine whether the findings were consistent across categories of possible confounders. LTs were significantly different across grades of iris color in univariate comparison (P < 0.001). The mean LT for the overall population was 3.48 (standard deviation [SD]: 0.19) mm, with no significant sex differences observed (P = 0.57). The distribution of LTs deviated from normality in the overall sample (P for K-S test = 0.02); when stratified by sex, LTs were normally distributed (P > 0.10). LT was negatively correlated with AL, ACD, and CP. In addition, more myopic refractive error was associated with decreased LT (P < 0.001). The correlation of LTs between the left and right eyes was high. After additionally adjusting for other potential confounders including height and ALs, the trend was similar and did not change significantly. For example, compared with students with iris color of grade 1 (the lightest), those with grade 5 (the darkest) had a thinner lens by 0.1 mm in the sex-adjusted model and by 0.09 mm in the multivariate-adjusted model. 
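The BMI definition used above (weight in kilograms divided by the square of height in meters) can be written directly; the example values are illustrative, not taken from the study.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

# Illustrative example: a 45 kg student who is 1.50 m tall
print(round(bmi(45.0, 1.50), 1))  # 20.0
```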
ELL-associated factor 2 (EAF2), a component of the ELL-mediated RNA polymerase II elongation factor complex, is a target gene and downstream factor of Wnt-4 signaling. Knockdown of Wnt-4 causes a failure of eye development, whereas EAF2 can rescue the phenotype of loss of Wnt-4 function.17 The results of these studies suggested that EAF2 plays an important role in the development of the lens. Deficiency of EAF2 causes an inhibition of eye development, particularly lens development.17 UV radiation can stimulate the expression of EAF2.15 Melanin in the iris blocks the passage of UV radiation through the iris. Therefore, dark pigmentation of the iris may slightly decrease the UV radiation passing through the iris and slightly downregulate the expression of EAF2, which may result in a slight inhibition of lens development and a slight decrease in lens thickness. Third, the factors that modulate iris color and LT may share some common gene polymorphisms, so that LT correlates with iris color. These three hypotheses on the relationship between the color of the iris and the LT require further validation. There are three different hypotheses on the relationship between lens thickness and the color of the iris. First, an increase in lens thickness causes a slight forward bulge of the anterior surface of the lens, which results in a slight forward bulge of the iris attached to the anterior surface of the lens. The posterior end of the iris (the iris root) is fixed at the anterior surface of the ciliary body; therefore, the forward movement of the iris may cause a slight elongation and increase of the iris area. The color of any pigmented tissue (including the iris) is mainly determined by melanin content per area. Some limitations should be acknowledged. First, the subjective grading protocol for iris color was subject to measurement bias. 
A more objective method for quantifying iris color may help to achieve more precise and reliable measurements. Nevertheless, the intergrader agreement of the two graders was relatively good, and the graders were masked to the subject's clinical characteristics while performing the grading. Thus, we believe the measurement bias might be minimal. Furthermore, our analysis was based on cross-sectional data, and a causal relationship cannot be determined. It is also possible that a thicker lens may result in a lighter iris color, as discussed previously. Finally, Chinese individuals show small variation in iris color, ranging from light brown to dark brown. Whether the findings observed in this study can be directly extrapolated to other ethnic groups with a larger variation in iris color, such as whites, remains unclear. In conclusion, our study suggested a possible connection between iris color and LT in Chinese teenagers. The association needs to be confirmed and replicated in other populations, and the mechanisms underlying the association warrant further clarification."} +{"text": "The purpose of this study was to measure the expression of ghrelin (GHRL) and its receptor growth hormone secretagogue receptor 1A (GHS-R1A) mRNA, and to determine cumulus oocyte complex (COC) viability after IVM with 0, 20, 40 and 60 pM of ghrelin. Also, pronuclear formation was recorded after in vitro fertilization (IVF). GHRL and GHS-R1A mRNA expression in oocytes and cumulus cells (CCs) was assessed using reverse transcription-polymerase chain reaction (PCR). Oocyte and CC viability were analyzed with the fluorescein diacetate fluorochrome-trypan blue technique. Pronuclear formation was determined 18 hours after IVF with Hoechst 33342. The results demonstrated that ghrelin mRNA is present in oocytes and CCs before and after 24 hours of IVM with all treatments. 
Ghrelin receptor GHS-R1A, however, was only detected in oocytes and CCs after 24 hours of IVM with 20, 40 and 60 pM of ghrelin. Oocyte viability was not significantly different (P=0.77) among treatments. However, CC viability was significantly lower (P=0.04) when COCs were matured with ghrelin. The chance of two pronuclei forming was higher (P=0.03) when ghrelin was not added to the IVM medium. We found that ghrelin negatively impacts CC viability and pronuclear formation. Energy balance is regulated by ghrelin, which is a neuroendocrine modulator. Ghrelin is expressed in reproductive organs; however, the role of ghrelin during in vitro maturation remains unclear. Nutrition has a strong influence on female bovine reproductive performance. In recent years, there has been a growing interest in investigating the relationship between nutrition and reproduction. In dairy cows, high milk yield leads to negative energy balance (NEB), which has adverse effects on fertility (1, 2). GHS-R1A mRNA and protein expression has been reported in most reproductive tissues of dairy cattle. Previous studies have indicated that ghrelin regulates several reproductive functions (3, 4). The aim of this study was to assess ghrelin (GHRL) and GHS-R1A mRNA expression in bovine oocytes and cumulus cells (CCs) after in vitro maturation (IVM) with different ghrelin concentrations, and to evaluate the effect of ghrelin on oocyte and CC viability and pronuclear formation. COCs were matured for 24 hours at 39\u00b0C in an atmosphere of 5% CO2 in air with saturated humidity, in IVM medium supplemented with 0, 20, 40, and 60 pM acylated ghrelin. The total number of matured COCs was 1152. This total was divided into 200 COCs for polymerase chain reaction (PCR) analysis, 480 for the viability assay and 472 for pronuclear formation rates after in vitro fertilization (IVF). To perform this experimental research, bovine ovaries were obtained from an abattoir and transported to the laboratory in sterile NaCl solution (9 g/L) including the antibiotics streptomycin (100 mg/L) and penicillin (59 mg/L) at 37\u00b0C within 3 hours after slaughter. 
Ovaries were pooled, regardless of the estrous cycle stage of the donor. The COCs were aspirated from 3 to 8 mm follicles using an 18-G needle connected to a sterile syringe. COCs with evenly granulated cytoplasms were selected under a low power (20-30 X) stereomicroscope and washed twice in TCM-199 buffered with 15 mM HEPES and in IVM medium. Groups of 10 COCs were transferred into 50 \u03bcL of IVM medium under mineral oil. Incubation was performed at 39\u00b0C in an atmosphere of 5% CO2. Oocytes were first washed in PBS containing 1 mg/mL polyvinylpyrrolidone (PVP). Total RNA was isolated from CCs and oocytes with TRIzol according to the manufacturer\u2019s instructions. Samples were then treated with a RNase-Free DNase kit. The RNA content of each sample was calculated from the 260 nm absorbance, and RNA quality was evaluated by the ratio of absorbance at 260 and 280 nm with a NanoVue spectrophotometer. Complementary DNA (cDNA) was synthesized using a reaction mixture containing 1.5 \u03bcg of total RNA, random hexamers and M-MLV reverse transcriptase, following the procedure suggested by the manufacturer. PCR was subsequently performed on the cDNA from oocytes and CCs using 1.5 units of Taq DNA polymerase. The cDNA amplification reactions for GHRL and GHS-R1A were carried out with an initial denaturing step of 92\u00b0C for 3 minutes, followed by 35 cycles of 30 seconds at 92\u00b0C, 40 seconds at 60\u00b0C, and 40 seconds at 72\u00b0C, with a final elongation step of 72\u00b0C for 5 minutes. PCR products were verified on 2% agarose gel, stained with ethidium bromide, and visualized using a transilluminator with a UV filter. For the negative control, reverse transcription polymerase chain reaction (RT-PCR) procedures were carried out in the same manner, except that M-MLV reverse transcriptase was omitted during reverse transcription. The PCR reactions were performed in duplicate. Primers for each gene of interest were designed using Primer Premier Software. 
The reactions were performed in a final volume of 25 \u03bcL containing 4 \u03bcL cDNA, 0.85 pmol/mL of each primer, 0.2 mmol/L of each deoxynucleoside triphosphate, 1X PCR buffer with 0.1% Triton X-100, and 1.2 mmol/L MgCl2. At the end of IVM, oocyte and CC viability were evaluated as follows. Oocytes were stripped of surrounding CCs by repeated pipetting in PBS containing 1 mg/mL PVP. Oocytes and CCs were incubated separately in the dark in 2.5 \u03bcg/L fluorescein diacetate fluorochrome and 2.5 g/L trypan blue in PBS medium for 10 minutes at 37\u00b0C. Then, they were washed three times in PBS. The CCs were centrifuged at 200 x g for 5 minutes and the pellet was resuspended in 50 \u03bcL of PBS. Oocyte and CC samples were transferred onto slides, which were immediately covered with cover slips and observed under a fluorescent Olympus BX40 microscope equipped with a 330-490 nm excitation filter and 420-520 nm emission filter. Live cells were visible with green fluorescence, whereas dead ones showed a characteristic blue staining under white light. The effect of different concentrations of ghrelin in the IVM medium on pronuclear formation was assessed after IVF, which was carried out at 39\u00b0C in an atmosphere of 5% CO2 in air with saturated humidity for 18 hours. After IVF, presumptive zygotes were incubated in 0.1% (w/v) hyaluronidase in PBS solution for 5 minutes at 37\u00b0C and then oocytes were denuded by gentle pipetting. The presumptive zygotes were incubated in 5 mg/L Hoechst 33342 in PBS for 30 minutes at 37\u00b0C. Thereafter, they were examined under a fluorescent Olympus BX40 microscope (with a 365 nm excitation filter and a 400 nm emission filter) at \u00d7200 and \u00d7400 magnification to reveal the presence of pronuclei. A total of 472 COCs were matured in three replicates for this purpose. We used completely randomized block designs. Statistical models included the fixed effect of treatment (0 vs. 20 vs. 40 vs. 60 pM ghrelin) and the random effect of block. 
Oocyte and CC viability and the rate of pronuclei presence were analyzed with logistic regression using the GENMOD procedure. Data for oocyte and CC viability and the rate of pronuclei presence were expressed as percentages. The level of significance was P\u22640.05. For GHRL, RT-PCR showed a band of the expected size (107 bp) in agarose gel electrophoresis for all treatments among COCs treated with 0, 20, 40, or 60 pM of ghrelin during IVM. However, CC viability was significantly lower (P=0.04) in COCs matured with ghrelin than in COCs matured with 0 pM of ghrelin (77.65%). No differences were found between 20 and 40 pM of ghrelin. The lowest CC viability rate was observed with 60 pM of ghrelin (P=0.04). The incidence of polyspermy (>2 pronuclei) and the percentage of mature oocytes penetrated by spermatozoa did not differ among treatments (P=0.96). However, the chance of two pronuclei forming was higher when ghrelin was not added to the IVM medium (P=0.03). This study examined the expression of GHRL and GHS-R1A in bovine oocytes and CCs. Our results indicate that ghrelin mRNA expression can be detected in oocytes and CCs both before and after IVM, regardless of ghrelin presence during the IVM process. These findings support the idea that ghrelin may have an autocrine and/or paracrine effect within the follicular microenvironment. On the other hand, GHS-R1A mRNA expression was only detected when ghrelin was added to the IVM media, suggesting that the presence of ghrelin in the environment surrounding COCs may stimulate the expression of its functional receptor in both bovine oocytes and CCs. It has been demonstrated that ghrelin increases GHS-R mRNA levels in rat neurons (12, 13). Information about the effect of ghrelin on oocyte maturation and early embryo development is scarce and contradictory. 
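The "chance of two pronuclei forming" comparison reported above can be summarized as an odds ratio between the control (0 pM) and ghrelin-treated groups. A minimal sketch follows; the 2x2 counts are hypothetical placeholders, since the report gives only percentages and P-values.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
       a = group 1 with outcome,  b = group 1 without outcome
       c = group 2 with outcome,  d = group 2 without outcome"""
    return (a * d) / (b * c)

# Hypothetical counts: zygotes with two pronuclei vs. others,
# comparing 0 pM (control) against ghrelin-treated COCs
or_control_vs_ghrelin = odds_ratio(60, 40, 45, 55)
```

An odds ratio above 1 here would correspond to the study's finding that two-pronuclei formation was more likely without ghrelin; the GENMOD logistic regression the authors used estimates the same quantity with covariate adjustment.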
However, our findings on the negative effect of ghrelin are in agreement with several publications (7-9). Cumulus cells play a key role in the acquisition of nuclear and cytoplasmic oocyte maturation."} +{"text": "Noble metal aerogels offer a wide range of catalytic applications due to their high surface area and tunable porosity. Control over monolith shape, pore size, and nanofiber diameter is desired in order to optimize electronic conductivity and mechanical integrity for device applications. However, common aerogel synthesis techniques such as solvent mediated aggregation, linker molecules, sol\u2013gel, hydrothermal, and carbothermal reduction are limited when using noble metal salts. Here, we present the synthesis of palladium aerogels using carboxymethyl cellulose nanofiber (CNF) biotemplates that provide control over aerogel shape, pore size, and conductivity. Biotemplate hydrogels were formed via covalent cross linking using 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide hydrochloride (EDC) with a diamine linker between carboxymethylated cellulose nanofibers. Biotemplate CNF hydrogels were equilibrated in precursor palladium salt solutions, reduced with sodium borohydride, rinsed with water, dehydrated with ethanol, and supercritically dried to produce freestanding aerogels. Scanning electron microscopy indicated three-dimensional nanowire structures, and X-ray diffractometry confirmed palladium and palladium hydride phases. Gas adsorption, impedance spectroscopy, and cyclic voltammetry were correlated to determine aerogel surface area. These self-supporting CNF-palladium aerogels demonstrate a simple synthesis scheme to control porosity, electrical conductivity, and mechanical robustness for catalytic, sensing, and energy applications. Natural biomaterials display several examples of forming controlled structures with dimensions observed at the submicron and nanometer scales. 
Composed of d-glucose molecules, cellulose functions as an energy storage and structural molecule found primarily in the cell wall of many plants and bacteria. Both [Pd(NH3)4]2+ and [PdCl4]2\u2212 were used with the synthesis scheme. The [PdCl4]2\u2212 ion exhibits a brown color correlating with the equilibrated ion concentration, whereas the pale yellow color of [Pd(NH3)4]2+ exhibits a less pronounced color correlation with concentration. Despite the osmotic pressures experienced by the CNF hydrogels solvated in water and exposed to increasing concentrations of palladium salt solutions up to 1000 mM, no gel swelling or de-swelling was observed for gels covalently cross-linked with EDC. In the absence of EDC mediated covalent cross-linking, physically entangled CNF hydrogels swelled and disaggregated in the presence of salt solutions, indicating increased gel stability with covalent cross-linking. Hydrogels equilibrated in Pd(NH3)4Cl2 and Na2PdCl4 palladium solutions were then reduced in 2.0 M NaBH4. For aerogels prepared with 1 mM palladium solutions, [Pd(NH3)4]2+ electrostatically bound to deprotonated carboxyl (COO\u2212) groups reduced to form initial nanoparticles along the surface of the cellulose nanofibers, allowing for fusion of nanoparticles formed within the hydrogel pores through surface free energy minimization. In the absence of crosslinking, ionic gels formed by centrifuging CNF solutions in the presence of palladium salt solutions and reduced with NaBH4 resulted in nanofoams that did not maintain their macroscopic shape. For the lower synthesis concentrations and consequent lower metal to organic mass ratio, the XRD signal-to-noise ratio is low, but increases for aerogels prepared with 500 mM and 1000 mM palladium salt solutions. Spectral peak positions for all synthesis concentrations are broad, indicating small crystallite sizes, and did not index to a single phase of palladium. 
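The link between XRD peak broadening and small crystallite size noted above is conventionally quantified with the Scherrer equation, t = K*lambda/(beta*cos(theta)). The equation is standard background rather than something stated in the text, and the peak width below is illustrative, not a measured value from the study.

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size estimate t = K*lambda / (beta * cos(theta)).
    two_theta_deg: peak position in degrees 2-theta
    fwhm_deg: full width at half maximum of the peak, in degrees
    wavelength_nm: X-ray wavelength (Cu K-alpha by default)
    k: dimensionless shape factor (~0.9 for roughly spherical crystallites)"""
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative: a reflection near 38.7 deg 2-theta (the PdH (111) position)
# with an assumed 0.5 deg FWHM gives a crystallite size of roughly 17 nm
size_nm = scherrer_size_nm(38.7, 0.5)
```

Broader peaks (larger FWHM) give smaller size estimates, matching the trend described: broadening decreases, and crystallite/fiber size increases, at higher palladium synthesis concentrations.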
For aerogels prepared with palladium solutions 100 mM and below, PdH0.706 palladium hydride peaks, indexed to Joint Committee on Powder Diffraction Standards (JCPDS) reference number 00-018-0951, were distinctly seen at 38.7\u00b0, 45.0\u00b0, 65.6\u00b0, and 78.9\u00b0 for the (111), (200), (220), and (311) Miller indices, respectively. At room temperature, palladium hydride may exist in an \u03b1 and \u03b2 phase, where the hydrogen to palladium ratio for the \u03b1 phase is 0.03, and approximately 0.6 for the \u03b2 phase. During NaBH4 reduction, hydrogen gas is entrained within the palladium nanoparticles; hydrogen gas evolution within the CNF hydrogel pores is thought to generate sufficient gas pressure to drive the formation of the palladium hydride phase. The palladium phase in the aerogels was indexed to JCPDS reference 01-087-0643. Like palladium hydride, the palladium phase has a cubic crystal system and a Fm-3m space group. As the palladium synthesis concentrations increase from 1 mM to 1000 mM, the distinct palladium hydride peaks become convoluted with the palladium phase peaks, such that they are not distinguishable at 1000 mM. Peak broadening decreases as the palladium synthesis concentrations increase, corresponding to the average fiber diameters determined from SEM images. To determine the palladium content of aerogels prepared with Na2PdCl4 and Pd(NH3)4Cl2, thermogravimetric analysis (TGA) was performed. The BET specific surface area of the 0 mM, 100 mM, and 1000 mM samples was 582, 456, and 171 m2/g, respectively, indicating that the specific surface area decreases in aerogels equilibrated with increasing concentration of palladium salt. At a relative pressure of P/P0 = 0.995, the maximum volume adsorbed for the 0 mM, 100 mM, and 1000 mM samples was 4512, 3653, and 1372 cm3/g, respectively. H3 type hysteresis is observed in all samples, characteristic of capillary condensation in the mesopores. 
The hysteresis closes at higher relative pressures for increasing Pd concentrations as compared to the 0 mM sample, indicating that the smaller mesopores (< 30 nm) are eliminated by the increasing addition of the Pd phase; this result is consistent with the Barrett\u2013Joyner\u2013Halenda (BJH) pore size analysis, which shows a decreasing frequency of mesopores, and a correspondingly lower cumulative pore volume (cm3/g), with the increasing addition of Pd. For CNF hydrogels with pore sizes fixed via covalent crosslinking, the reduction of higher Pd ion concentrations results in increased metal content within the pores, and consequently the decreased cumulative pore volume observed with BJH analysis. Nitrogen gas adsorption isotherms were generated for CNF-Pd aerogels prepared with 0 mM, 100 mM, and 1000 mM Pd salt solutions. Electrochemical analysis was performed using CV and EIS techniques, with CV scans at 10, 25, 50, and 75 mV/s in 0.5 M H2SO4 electrolyte. Surface area was estimated from EIS data, with f as the frequency, Z\" as the impedance imaginary component, and m as the sample palladium mass; a transmission line equivalent circuit model developed for porous noble metal aerogels was fit to the Nyquist plot. We have shown here that covalently cross-linked cellulose nanofiber hydrogels serve as a robust biotemplate to synthesize porous, high surface area, electrochemically active CNF-palladium composite aerogels. The CNF hydrogel biotemplate maintains its monolithic shape during all synthesis steps, and the carboxymethyl groups provide electrostatic binding sites for palladium cations and subsequent reduction of nanoparticles. This work demonstrates the ability to control the palladium metal content within a CNF hydrogel, and consequently control the performance characteristics of the material. 
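Given f, Z\", and m as defined above, a mass-specific capacitance is commonly estimated from a single low-frequency EIS point as C = -1/(2*pi*f*Z\"*m), from which an electrochemically active surface area can be inferred. This single-frequency estimate is a standard EIS convention rather than the paper's exact fitting procedure (the authors fit a transmission line model), and the numbers below are illustrative.

```python
import math

def specific_capacitance(freq_hz, z_imag_ohm, mass_g):
    """Mass-specific capacitance (F/g) from one EIS point:
    C = -1 / (2*pi*f * Z'' * m).  Z'' is negative for capacitive behavior,
    so the leading minus sign makes C positive."""
    return -1.0 / (2 * math.pi * freq_hz * z_imag_ohm * mass_g)

# Illustrative point: f = 0.1 Hz, Z'' = -50 ohm, m = 2 mg of palladium
c_spec = specific_capacitance(0.1, -50.0, 0.002)  # about 15.9 F/g
```

Dividing such a capacitance by an assumed double-layer capacitance per unit area (often taken as tens of microfarads per cm2) would give the EIS-derived surface area that the study correlates against BET and CV results.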
CNF-palladium composite aerogels demonstrate a synthesis route that could potentially use a variety of biological template hydrogels with pH-tunable surface charge, together with other noble and transition metals compatible with aqueous solution reduction chemistries, for a wide range of energy storage, catalysis, and sensing applications."} +{"text": "Introduction: Alcoholic beverages have a proven impact on neuronal development and other areas of the body, primarily the heart, kidneys and liver, which is why their consumption by children is prohibited. However, there are traditional drinks with alcohol content: Chicha de Jora and Clarito, artisanal drinks of traditional origin in Peru. The aim of this study was to characterize the consumption of traditional alcoholic beverages by children of a rural village in Northern Peru. Methods: This was an analytical cross-sectional study. Mothers were recruited by census sampling and reported the consumption by their children of two traditional drinks with alcoholic content: Chicha de Jora (Ch) and Clarito (Cl), which are derived from the fermentation of maize. The frequency of consumption, accessibility and perception of consumption risk were described. Results: Data were collected on 300 children, 61% (183) of whom consumed Ch and 31% (92) of whom consumed Ch and Cl. Regarding drink accessibility, the majority of mothers said that these drinks were cheap (Ch: 69.0% and Cl: 60.7%). Additionally, the vast majority of families sometimes or always consumed such beverages (Ch: 81.3% and Cl: 65.7%). One in three mothers perceived Ch and Cl as being nutritious and helping their children grow. 
25% of mothers perceived that there was no risk to their children from the consumption of the beverages, whereas >60% said that there could be a risk due to the beverages\u2019 alcohol content. Conclusions: Our study found that traditional beverages containing alcohol are consumed frequently by children in a village in Northern Peru. Mothers provide access to the beverages even while perceiving the risk the drinks carry, and this risk should be evaluated more accurately. We advise that future studies intervening on these attitudes be performed, for a better future and development of the children. The World Health Organization states that 3.3 million deaths are caused every year worldwide by the harmful use of alcohol2. It is well known that these types of drinks cause a series of physiological problems4, as well as behavioral problems, which include maladaptation to the family and social environment and, in extreme situations, could lead to suicide5. Alcoholic beverages are traditionally derived from the fermentation of sugars and yeast6. However, some countries, such as Colombia and Argentina, have reported onset at an earlier age7. In Peru, there is almost no information on this subject; however, reports show that the median age when alcohol consumption begins is 13 years, while in locations where children have greater access to alcoholic beverages, consumption starts at 10 years8. According to worldwide data, alcohol use has 5.1% comorbidity in the age group between 20\u201339 years9; about 28.7% of alcohol consumption is unregistered according to the World Health Organization (WHO)10, since consumers regard these drinks, despite their 10 to 12 degrees of alcohol, as traditional beverages10. Chicha de Jora (Ch) and Clarito (Cl) are drinks derived from the fermentation of maize that have been consumed since Pre-Hispanic times throughout the northern coast of Peru. 
Among the types of corn that they cultivated, the Incas considered germinated corn (Jora) sacred, yielding two derivatives with alcoholic content (Chicha de Jora and Clarito). This tradition has been passed from generation to generation until today. Currently, the elaboration of this millenary drink in Peru is done by hand, without formal regulation by the industry11. These factors can create a problem if such drinks are consumed by children and teenagers. The objective of this study was to characterize the consumption of these traditional alcoholic beverages in children of a rural village in Northern Peru. Consumption is high due to their low production cost, ease of access, and tradition. An analytical cross-sectional study was carried out between February and May 2017, in which the mothers and/or guardians of children in the Northern Peruvian settlement of \"La Piedra\", where 308 children under the age of 15 reside, were surveyed. Household visits were completed for the purposes of the study. Thanks to the information provided by the governor, the surveys were carried out in each of the homes of the mothers and/or guardians using census sampling. A sample size was calculated for a descriptive study, for the local population of children, with a statistical power of 99%, a 95% confidence level and a maximum prevalence of 50%. A minimum sample of 300 children was obtained; this was captured non-randomly. All mothers residing in the populated center during the interview period were included. Mothers who did not wish to participate in the study, as well as those who responded inadequately to our survey, were excluded. After reading through the informed consent and agreeing to participate, the mothers were enrolled in the study. Those who did not respond adequately to the survey (unanswered questions and/or incomplete answers) were excluded. 
The rejection rate was 2.5%, for a total of 300 surveys applied, obtained from interviews of 103 mothers or guardians (in some cases the mothers or guardians had more than one child). For the present study, a survey was used that had been previously validated in a pilot study with a sample of 50 individuals, in which a Cronbach's alpha of 0.781 was obtained. The pilot study was not published; its results were used only for the evaluation of the survey. The survey had minor modifications after the pilot study. These were used to specify the details of consumption, access and even the consequences of the consumption of alcoholic beverages. The final survey had two main sections: Socio-demographic data: basic data such as the child\u2019s age, weight, height and school grade, in addition to the number of household members and household income. Characteristics of drinking habits in liquids/beverages: these characteristics were evaluated through closed questions about the daily consumption of different drinks, primarily the consumption of beverages containing alcohol (Ch and Cl). The following information was obtained: the frequency of consumption, the accessibility of beverages, whether or not they were consumed by the person who responded to the survey and by the whole family, and whether the consumption of the beverages was perceived as harmful or nutritional for the child's health. Finally, other exploratory variables were captured, such as the consumption of other types of beverages, and a section assessing the child's socio-academic problems was included. These exploratory variables are not discussed in the present study. All surveys were anonymous and were conducted by a researcher belonging to the study. The approximate duration of the survey was 20 minutes. 
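The Cronbach's alpha reported for the pilot survey measures internal consistency across the survey items: alpha = k/(k-1) * (1 - sum(item variances)/variance of totals). A minimal sketch follows; the response matrix is hypothetical, not the pilot data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists (one list per item,
    aligned across respondents)."""
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var_sum = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Hypothetical 4-point responses from 5 respondents on 3 survey items
items = [
    [3, 4, 2, 4, 1],
    [3, 3, 2, 4, 2],
    [2, 4, 2, 3, 1],
]
alpha = cronbach_alpha(items)
```

Values around 0.7-0.8, like the 0.781 reported, are conventionally considered acceptable internal consistency for a questionnaire.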
At all times the assigned researcher was properly trained to resolve doubts about any of the questions. For the data analysis, a double digitizing system was used for better control of the collected data. Surveys were entered in Microsoft Excel (version 2015), and a first filter was then applied to check the data. Following this, the data were processed in Stata 11.1. For descriptive statistics, we worked with frequencies/percentages for categorical variables, and medians and interquartile ranges for quantitative variables. The chi-square test was applied to assess the association between consumption of the drinks and the perception that their consumption could be bad for children. P<0.05 was considered statistically significant. Permission and support were provided by local authorities. Since children were the target of this study, all precautions were taken to ensure anonymity and respect for ethical precepts. The study was approved by the Ethics Committee of the San Bartolom\u00e9 National Hospital, endorsed by the National Health Institute. This committee was chosen because there is no committee that monitors research approval at the site where the study was conducted. This committee also approved the pilot study. The ethical standards on human experimentation of the Declaration of Helsinki of 1975 were taken into account. The results will be given to the health authorities of the region, so that they can learn about this reality and put forward support strategies. The study was carried out with the permission of the mothers/guardians, who gave written informed consent. Data were collected about 300 children; 51.3% (154) were girls, and the median age was 9 years (interquartile range: 5\u201312 years). 15.8% (41) studied at the initial level, 53.5% (139) studied in primary school and 30.7% (80) studied in secondary school. 
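The chi-square test used for the consumption-versus-perception association can be sketched as below; the 2x2 counts are hypothetical placeholders, not the study's tabulated data.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table,
    e.g. rows: consumed the drink yes/no; columns: perceived risk yes/no."""
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    n = sum(row_sums)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_sums[i] * col_sums[j] / n  # under independence
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: consumption vs. perception of harm among 300 children
stat = chi_square_2x2([[120, 63], [55, 62]])
```

For a 2x2 table (1 degree of freedom), a statistic above 3.84 corresponds to P<0.05, the significance threshold the authors used.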
61.0% (183) and 30.7% (92) of the children consumed Ch and Cl, respectively. Most of the mothers reported that they had consumed Ch (84.7%) and Cl (62.7%) when they were children, and the majority also consume the drinks now (Ch: 74.0% and Cl: 47.7%). Regarding accessibility of the beverages, the majority of mothers said that these drinks were cheap (Ch: 69.0% and Cl: 60.7%), and the vast majority of families sometimes or always consumed such beverages (Ch: 81.3% and Cl: 65.7%). 35% of mothers perceived that Ch is nutritious and helps growth, while 33% and 35% of mothers perceived that Cl is nutritious and helps growth, respectively. 25% of …

Dataset 1: Raw data from the responses of mothers/guardians concerning their children's consumption of traditional alcoholic beverages (n=300 children).

[12]; in the Province of Buenos Aires, 55.4% of adolescents between the ages of 11 and 14 consume alcohol [13]; while a study in Colombia, with children at a mean age of 14.4 years, concluded that the prevalence of alcohol abuse measured by the CAGE scale was 14.6% [14].

The consumption of alcohol in children is still a very important problem, as evidenced in this study, where out of 300 children surveyed, 183 and 92 consumed Chicha de Jora (Ch) and Clarito (Cl), respectively, every week. These consumption figures are greater than those of studies from other countries. For example, in Brazil, only 12.8% of children consumed any type of alcoholic beverage before age 10 [15]. Another report, in Angola, showed that 56% of mothers of 319 children had regular alcohol habits; our study showed higher percentages, with 84.7% of mothers consuming Ch and 62.7% consuming Cl [16].
Also, in Brazil, Argentina, Colombia, Chile and Mexico, it was reported that occasional consumption of alcohol is associated with family context, influence of friends, antisocial behavior, and skills and experiences already acquired in childhood, circumstances that could encourage the consumption of alcohol in children [18].

The consumption of these traditional beverages also occurred during the mothers' childhood, with a majority stating that they had consumed both drinks, and many expressing that they still consume them. A population study in Chile of 408 respondents with alcoholism reported that 27.2% lived with children in the house and that in 46.3% of cases the drinker was either the father or the mother [18]. In our population, the acquisition of Ch (69.0%) and Cl (60.7%) was considered economical (average cost: 1 to 5 Soles per bottle, about 0.80 Euros) because of their low cost of production, making them more accessible and frequently consumed. One in every three mothers perceived that Ch and Cl are nutritious and help the growth of their children, a perception that could lead them to give these drinks to their children. A study from Spain reported that fathers and mothers do not consider their children's alcohol consumption to be a problem [19], thus increasing early intake without restriction.

The consumption of alcohol in younger populations has risen in recent years in Peru, which has the potential to cause harm and create addictive behavior [20]. We can infer that this is mainly due to a socio-cultural characteristic whereby the community views the consumption of these traditional alcoholic beverages as normal.

In the present study, most mothers knew about the risk of alcohol consumption by children. However, consumption among most of their children remained high. Studies carried out in Spain and Cuba indicate that the family can be a protective, but also a risk, factor.
In both cases, the maternal figure tends to have a positive influence on the child, which differs from what was found in the present study.

The study had the limitation of selection bias, since it was completed in a sample that does not represent the total population of Peru. Likewise, since this is a preliminary study, its non-quantitative nature also counts as a limitation. However, this study used census-type sampling in a population that had not been previously reported; therefore, these results can be taken as preliminary. In particular, these findings can be used to alert the responsible authorities, so that detection and support measures can be implemented and families in this village and in locations with similar consumption conditions can receive the necessary support.

According to the present study, it is concluded that children consume traditional alcoholic beverages and that their mothers provide access. Although mothers perceive the risk that these drinks carry, they still give them to their children. Finally, there could be a danger to health; however, further quantitative studies would be necessary to assess this risk more accurately.

The data referenced by this article are under copyright with the following copyright statement: Copyright: © 2018 Ramírez-Ubillus JM et al. Data associated with the article are available under the terms of the Creative Commons Zero "No rights reserved" data waiver (CC0 1.0 Public domain dedication). Dataset 1: Raw data from the responses of mothers/guardians concerning their children's consumption of traditional alcoholic beverages (n=300 children). doi: 10.5256/f1000research.12039.d170158 [21]

Please think of introducing further keywords to your article to make it more accessible/easier to find for an international readership. I would advise including the keywords "unrecorded alcohol" and "alcohol drinking", for instance.

In the abstract please clarify what exactly you mean with "the majority of mothers said that these drinks were cheap (Ch: 69.0% and Cl: 60.7%)." It took me a while to understand what exactly you mean with the numbers, and I finally got it with the help of Table 2, which indicates that these percentages relate to the question whether the mothers/legal guardians consider these drinks to be cheap. So I would suggest rephrasing the abstract a bit: "the majority of mothers said that these drinks were cheap (69% considered Ch to be cheap and 60.7% considered Cl to be cheap)". The explanation of what was meant by cheap should appear already in the introduction or the methods section and not in the discussion section.

I fear that I do not fully understand what you mean in the discussion when saying "In our population, the acquisition of Ch (69.0%) and Cl (60.7%) was considered economical (average cost: 1 to 5 Soles per bottle, 0.80 Euros) because of their low cost of production; therefore making them more accessible and frequently consumed." So what do you say here, and how does this, again, relate to the perception of these beverages as being cheap?

Please clarify whether there is a difference between the two traditional drinks Ch and Cl in terms of alcohol content or any other property that might be of interest/importance. For now the manuscript states that both beverages are derived from fermentation of maize, have an alcohol content of about 10-12%, and that Cl is slightly cheaper (?). Is there anything else the readership should know? Please add some ideas to the discussion section on why you think the consumption of Cl was less frequently reported than consumption of Ch and why Cl was perceived as less nutritious. The international readership is not familiar with these two beverages and it would be good to get an idea of their differences after having read the text.

It would be good to have some clarity on certain terms.
- For instance, please use the term "unrecorded alcohol" instead of "unregistered", especially when referring to the WHO definition.
- Also, please use the term "affordability" when describing and discussing the prices of the beverages. Accessibility, at least in my view, rather relates to the question of physical access of the children to alcohol, specifically whether mothers/guardians were giving the alcoholic beverages to the children or not. So please separate the two terms for better clarity.
- Please be very clear that you always refer to mothers AND guardians in the manuscript, unless you talk about consumption of the different beverages in the family, which is a different indicator according to your questionnaire. You might want to introduce a footnote or an abbreviation at the very beginning. For now, the terms "mothers" and "mothers and guardians" are used inconsistently throughout the text.

Please double-check what was already raised by another reviewer: "Table 3: check values for line 'dangerous'. Is it logical that both are '0'?" The table is a bit confusing the way it is; maybe you could rearrange it for better readability.

When citing the Global Status Report on Alcohol and Health of the WHO, please refer to the newest one from September 2018 and please also update the numbers you cite from the report [1]. Also, I do not quite understand why you cite some indicators from the GSRAH on comorbidities in the age group of 20-39 years old. How does this relate to the subject of your study? You look at a much, much younger cohort, so citing evidence from the same age group would be more appropriate here. Moreover, I do not understand why you cite comorbidities here. The way I see it, the GSRAH provides much more important indicators: for instance, the percentage of total deaths attributable to alcohol, by age group, for the WHO region of the Americas.
If there's something that should be cited from the GSRAH, then definitely this, highlighting how alcohol impacts mortality relatively early in life. Also, the report provides a country profile for Peru, which could also be helpful for the authors, e.g. for the estimate of the share of unrecorded alcohol consumed in Peru.

Please be clear about what you mean by "It is well known that these types of drinks cause a series of physiological problems" in the introduction (3rd sentence). The way it is put, it sounds very ambiguous, allowing for the conclusion that traditional alcoholic drinks cause problems which would not have occurred if these were industrial alcoholic beverages. The authors should be very clear on where they stand here. For now, the evidence shows that the most harmful ingredient of unrecorded alcohol is ethanol, that traditional/homemade alcoholic beverages are generally of the same quality as industrially manufactured alcoholic beverages, and that their harm stems from the cheap price, broader availability and corresponding drinking patterns rather than from chemical properties.

Please refer, either in the introduction or the discussion, to some literature on the early age at onset of alcohol use and the subsequent risk of developing an alcohol use disorder. I'm not an expert in the field, but these are the studies that come to my mind immediately:
2 - Hingson, R. W., Heeren, T., & Winter, M. R. (2006)
3 - Grant, B. F., & Dawson, D. A. (1997)

The following sentence of the introduction was very confusing to me: "Currently, the elaboration of this millenary drink in Peru is done by hand, not having a formal regulation by the industry; thus reaching about 28.7% of unregistered alcohol registered by the World Health Organization (WHO), since consumers despite having between 10 to 12 degrees of alcohol, they consider this as a traditional drink)." What is it the authors intend to say here?
I do not quite understand how the different statements relate to each other. Just because something is a traditional alcoholic drink, it does not mean that it automatically belongs to the category of unrecorded alcohol. There are plentiful examples where the industry is producing alcoholic beverages that were/are traditionally homemade/handmade. Also, please think twice about your statement of "formal regulation by the industry". Is industry really the one who should be regulating this? What about the role of the government and public health in this? I would advise being very careful in the way you argue here, as for now the sentence suggests that the traditional beverages are somehow worse than industrially produced beverages, or at least one can read those sentences like this. Also, please update the share of unrecorded alcohol in Peru: according to the newest WHO report, it is 19.1% for the latest available period (2016).

Please provide a bit more context and references for the claim: "Consumption is high due to their low production cost, ease of access, and tradition." Unfortunately, I could not access the corresponding reference for this sentence (11) to see what source of information you refer to here. Also, do you mean by "ease of access" the physical availability of Ch and Cl and/or other factors, for instance the possibility to purchase these products when the local shops are closed at certain days/times? Further, you argue that "These factors can create a problem if such drinks are consumed by children and teenagers." Please be more specific about what exactly you mean here. There are certain reasons why these drinks are consumed by children. Besides the cheap price and higher availability (is it?)
and the belief that these drinks are beneficial for the children's growth, the legal minimum age for purchasing alcohol seems to be another very important factor, which should be mentioned here explicitly.

The methods section should mention that this study was basically a household study, and the limitations section should mention all the biases that naturally come with assessing alcohol consumption in household surveys in general: selection bias, recall bias, social desirability bias and potential stigma of consumption (it should at least be discussed whether this applies here or not), and so forth. Also, the methods and limitations sections should explicitly mention and discuss the fact that children were actually not asked directly about their consumption and that the survey was done with mothers/legal guardians as proxies. It should also be described what the interview situation looked like, whether the children were present during the interview or not, and how this could have influenced the results.

The methods section states that "all mothers residing in the populated center during the interview were included", indicating that the sample was a census. What exactly does this sentence mean, however? Does it mean that mothers of the entire town were interviewed? Or of the city center only? I'm a bit confused by the choice of words here. If only mothers/legal guardians from the city center were included, please discuss what implications this might have for the representativeness of the results.

Physiological indicators (weight and height of the children) were included but not reported in the results section. Could the authors please comment on that and decide whether they want to perform the analysis for these indicators or not. If these indicators are included, please report whether they were collected through objective measurement or self-report by the mothers/guardians.
If the latter is the case, please discuss the biases that come with this type of assessment.

Was there any kind of additional information available on the indicator/question "Could it [the consumption of a certain drink] be bad for you?" Does the "bad for you" refer to the mother/guardian OR the child? If it refers to the mother/guardian, was there any kind of question that was asked about the child? I think it would be quite important to know how this "bad for you" is conceptualized by the authors and perceived by the interview partners. In Table 3, the authors talk about "the perception of risk" of the mothers/guardians and then the binary variable "dangerous"/"not dangerous". I wonder if it is justified to present this as risk perception, and the outcomes as dangerous/not dangerous, if the original question asked very vaguely whether consumption "is bad or not". So far, I have not grasped what the mothers/guardians were actually thinking about this concept. For instance, based on the information provided in Table 3, the mothers/legal guardians seem to be aware of the fact that these drinks contain xy% of alcohol and that alcohol poses certain risks for the child's development. However, my impression is that there is a certain discrepancy between the questionnaire and the way the results are presented in terms of risk perception (e.g. that "bad for you" is something other than "dangerous"). Please report if there was already some clarity on this from the pilot, on how the "bad for you" relates to risk perception. Maybe you even did some qualitative interviews in the pilot to understand how the risks are perceived by the mothers/legal guardians?

I think Figure 1 could be extended to also include the information on how many mothers/guardians thought that consumption of Ch and Cl
is bad for them. Please make sure that the titles of the tables/figures are consistent, e.g. the tables feature additional information on the sample size and study location and the figures do not. It would be helpful if Table 1 featured some more information/broader definitions on "reason for danger", e.g. what exactly does "Does not heal knowledge" mean? Figure 3 should report p values according to the F1000 article guidelines.

The second sentence in the discussion section states that "These results of consumption are greater than in different studies from different countries. For example,…". Please make absolutely sure that you are comparing your results with the results of similar studies from other countries, i.e. with studies that have specifically measured consumption of unrecorded and traditional alcoholic beverages in children. I do not think that it is appropriate to compare your study with studies that measured consumption of recorded alcoholic beverages in children/youth. Basically, your results can be interpreted in a way that ¼ of the mothers/legal guardians did not perceive Ch and Cl as alcoholic beverages and therefore "bad for children". So if asked whether their children consume alcoholic products, they would have said "no", although the opposite was the case. This simply means that you cannot compare your specific results to other studies if they were measuring something else and state that the children in your sample had a higher prevalence of alcohol consumption. However, this does not mean that you should omit comparisons with other studies from the Americas completely; it just means that one should compare apples with apples.

Please comment on the first reviewer's remark: "Discussion, last paragraph: please include the non-quantitative nature as limitation.
Considering your comment that "the consumption of these traditional alcoholic beverages" is seen as "normal", is there at least some information on how much of the beverages is consumed by the children?"

I would disagree with the claim that "since this is a preliminary study, its nonquantitative nature also counts as a limitation." This IS a quantitative study, using a quantitative survey as instrument and featuring descriptive statistics. It does not provide a risk assessment, as outlined by the first reviewer, but calling it a non-quantitative study is also not justified. So I would advise naming the lack of data on quantitative intake as the actual limitation/feature of the preliminary nature of the study.

I am also a bit confused by the sentence "However, this study used census-type sampling in a population that had not been previously reported; therefore, these results can be taken as preliminary." I do not see the relation between census-type sampling and the preliminary nature of the results. Of course, the study does not represent the entire population of Peru, but it fully represents the population of children living in households in this specific village. Therefore, it is representative of this specific village and might be representative of further villages of Northern Peru with similar characteristics. I do not see why these results should be preliminary if talking only about this context.

Could you maybe be a bit more specific when saying "These findings can be used to alert the responsible authorities"? If I understand your findings correctly, the lack of awareness about the risks of alcohol for young people is one of the biggest issues in these communities. So I think it would be good to name this and some other issues specifically, as well as to write just one or two sentences on what the authors think could be done/what kind of action is needed.
I think that this could go into the conclusion, where it should be stressed that a certain risk was identified in this study and that now further research is needed to explore the dimension of this risk, as well as certain measures to tackle it.

The study provides very interesting and valuable insights into a topic which is still very under-researched. However, I would recommend including some changes to improve the clarity and flow of the manuscript and also to help settle the conclusion, as for now it is not entirely clear where exactly the "health danger" is coming from and what exactly is problematic about the observed behaviours.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

The authors have adequately considered all my previous comments. The paper could be further improved by an English language correction, as some sentences are difficult to understand. For example: "However, there are traditional drinks that have alcohol content (Chicha de Jora-Clarito); artisanal drinks of traditional origin with alcoholic content in Peru." "Mothers provide accessibility to the beverages and perceive the risk the drinks have, which will more accurately evaluate this risk." Some further minor corrections: "thus reaching about 28.7% of unregistered alcohol": change unregistered to unrecorded (see WHO wording). "since consumers despite having between 10 to 12 degrees of alcohol, they consider this as a traditional drink": change "degrees of alcohol" to "% volume". It is also illogical why the content of alcohol would exclude the status of "traditional drink".

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
The article is interesting, but the analysis presented is only descriptive, which limits its relevance and applicability. I suggest that the authors enlarge the analysis to evaluate potential risk factors. The conclusion is adequate, but they could only describe the consumption of traditional alcoholic beverages. The introduction is not enough to explain the problem in relation to the use of alcoholic beverages. In the methods, the authors should clarify the patterns of consumption and the frequencies of consumption per week or month. The data analysis centered only on describing the mothers' consumption (perceptions), accessibility and perception of risk; it did not analyze the association with the child's age, weight, height and school grade, household income and number of household members.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Dear reviewer, the requested changes have been made in the comments. In the same way, we note that this is a preliminary study, and we will take into account in a future study the analysis of associations as well as associated factors. On behalf of the Authors, very grateful.

The authors provide a pilot study into the consumption of traditional alcoholic beverages in rural Peru. The article is interesting and novel, as there is clearly a lack of data on unrecorded alcohol from South America. It is also quite disturbing to read that considerable alcohol exposure may occur in children.

For the international reader the beverages Chicha de Jora and Clarito are almost unknown. Can some information about these beverages be provided as background? For example, are they similar to maize beers? What is the typical alcoholic strength of these beverages? Are they commercially and legally sold by some kind of artisanal small-scale industry, or are they illegally sold? Should they be considered as falling into the WHO category of "unrecorded alcohol" (see 1 for definition)?

The conclusion states that the products are providing "great danger" to health. While this may be true, there is not much in the data that would allow for such a conclusion. The study appears to be non-quantitative in nature and the alcoholic strength of the product appears to be unknown. Hence no calculation of daily alcohol exposure can be made, which would allow for a quantitative risk assessment. With the currently available data I would suggest concluding that there may be a health hazard, but quantitative intake assessment as well as chemical characterization of the beverages would be necessary for risk assessment.

Page 6, 1st paragraph: "low cost of production". Please provide some comparison for the low cost. Are the alcoholic beverages cheaper than non-alcoholic alternatives such as milk or fruit juices?

Page 6, 4th paragraph, last line: "no studies about consumption of alcohol and drugs by children". I wonder about this request and why this is seen as unfortunate. I would find it highly unethical to study alcohol and drug consumption in children, and I can predict that we will never see such a study. This sentence should be deleted.

Discussion, last paragraph: please include the non-quantitative nature as a limitation. Considering your comment that "the consumption of these traditional alcoholic beverages" is seen as "normal", is there at least some information on how much of the beverages is consumed by the children?

Conclusions: "it is concluded that children consume a large quantity of traditional alcoholic beverages". This conclusion appears to be not founded in the data. No quantitative measurements were conducted. The following revisions should be considered:

I have read this submission.
I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Dear reviewer, the changes requested in the comments have been made. On behalf of the Authors, very grateful.

Eucalyptus globulus trees treated with oligo-carrageenan (OC) kappa showed an increase in NADPH, ascorbate and glutathione levels and activation of the thioredoxin reductase (TRR)/thioredoxin (TRX) system, which enhance photosynthesis, basal metabolism and growth. In order to analyze whether the reducing redox status and the activation of the TRR/TRX system increased the level of growth-promoting hormones, trees were treated with water (control), with OC kappa, or with inhibitors of ascorbate synthesis (lycorine), glutathione synthesis (buthionine sulfoximine, BSO), NADPH synthesis (CHS-828) or thioredoxin reductase activity (auranofin) together with OC kappa, and cultivated for four additional months. Eucalyptus trees treated with OC kappa showed an increase in the levels of the auxin indole-3-acetic acid (IAA), gibberellin A3 (GA3) and the cytokinin trans-zeatin (t-Z), as well as a decrease in the level of the brassinosteroid epi-brassinolide (EB). In addition, treatment with lycorine, BSO, CHS-828 and auranofin inhibited the increase in IAA, GA3 and t-Z, as well as the decrease in EB levels. Thus, the reducing redox status and the activation of the TRR/TRX system induced by OC kappa increased IAA, GA3 and t-Z levels, determining, at least in part, the stimulation of growth in Eucalyptus trees.

The effects of growth-promoting hormones are mainly determined by the ratio among particular hormones showing positive or negative interactions.
It has been shown that IAA and gibberellins display a reciprocal positive interaction, since IAA regulates gibberellin metabolism. In addition, t-Z regulates auxin metabolism in Arabidopsis, and GA3 regulates t-Z levels in tomato.

In plants, redox status is determined by the level of reducing compounds, mainly NADPH, NADH, ascorbate (ASC) and glutathione (GSH). Arabidopsis thaliana plants impaired in GSH synthesis and in cytosolic thioredoxin reductase (TRR) activities showed normal development until the rosette stage, but failed to generate lateral organs from the inflorescence meristem, producing almost naked stems as well as loss of apical dominance, vascular defects and reduced formation of lateral roots. Thus, a reducing redox status appears to be required for normal development.

Oligo-carrageenan (OC) kappa was obtained by acid hydrolysis of pure kappa carrageenan and is constituted by around 20 galactose units sulphated at C4 linked to an anhydrogalactose. In previous work, Eucalyptus globulus trees treated with OC kappa showed a reducing redox status due to the increase in NADPH, ASC and GSH synthesis, and the increase in NADPH activates the TRR/TRX system which, in turn, activates photosynthesis, C, N and S assimilation, basal metabolism and growth.

In this work, we analyzed whether the reducing redox status and the activation of the TRR/TRX system determine an increase in the level of the growth-promoting hormones IAA, gibberellins A1, A3 and A4, the cytokinin t-Z and the brassinosteroid EB, explaining, at least in part, the increase in growth induced by OC kappa in Eucalyptus trees. To this end, Eucalyptus trees were sprayed on leaves with water (control), with OC kappa at 1 mg·mL−1, or with lycorine, an inhibitor of ASC synthesis, buthionine sulfoximine (BSO), an inhibitor of GSH synthesis, CHS-828, an inhibitor of NAD(P)H synthesis, or auranofin, an inhibitor of TRR activity, together with OC kappa at 1 mg·mL−1. Under these treatments, trees were grown for four additional months and the levels of IAA, GA1, GA3, GA4, t-Z and EB were determined.

The level of IAA in control Eucalyptus trees was 0.3 nmoles·g−1 of fresh tissue (FT) and in treated trees it was 1.7 nmoles·g−1 of FT, which represents a 5.6-fold increase. The increase in IAA was completely inhibited by lycorine, BSO, CHS-828 and auranofin.

Commercial kappa2 carrageenan was solubilized in 2 L of water at 60 °C. Concentrated HCl (36.2 N) was added to reach a final concentration of 0.1 N, the solution was incubated for 45 min at 60 °C, and then NaOH 1 M was added to obtain pH 7. A sample of 10 µL of the depolymerized carrageenan, corresponding to oligo-carrageenan (OC) kappa, was analyzed by electrophoresis in an agarose gel (1.5% w/v) at 100 V for 1 h, using dextran sulphates of 8 and 10 kDa as standards. The gel was stained with 15% w/v Alcian blue dye in 30% v/v acetic acid/water for 1 h at room temperature and washed with 50% v/v acetic acid/water for 1 h. OC kappa was visualized as a relatively discrete band of around 10 kDa.

E. globulus trees with an initial height of 30 cm (n = 10 for each group) were cultivated outdoors in plastic bags containing compost. E. globulus trees were sprayed on the upper and lower parts of leaves with 5 mL per plant of water/methanol 9:1 v/v (control), an aqueous solution of OC kappa at a concentration of 1 mg·mL−1, or a water/methanol solution of 250 μM CHS-828, an inhibitor of nicotinamide phosphoribosyl transferase, or of auranofin, lycorine or BSO, and with OC kappa at 1 mg·mL−1. Trees of treated groups 2, 3 and 4 were treated twice with CHS-828, auranofin, lycorine or BSO and, after two weeks, they were treated with OC kappa once a week, four times in total, and cultivated without any additional treatment for four months.
Leaves were obtained from the middle part of control and treated trees, and pooled into three groups to perform further analyses (n = 3). The height of trees was determined using a measuring tape. It is important to mention that different concentrations of CHS-828, lycorine, BSO and auranofin were sprayed on Eucalyptus leaves to determine the optimal concentration of each inhibitor (data not shown). In addition, it was determined that the optimal concentration of CHS-828 decreased NADPH content, the optimal concentration of lycorine inhibited galactonolactone dehydrogenase (GLDH) activity, the optimal concentration of BSO inhibited γ-glutamylcysteine synthase (γ-GCS) activity, and the optimal concentration of auranofin inhibited TRR activity at four months of culture without additional treatment. Eucalyptus leaves (1 g of FT) were obtained from the middle part of control and treated Eucalyptus trees (n = 3 for control and treated groups). Leaves were homogenized with liquid nitrogen in a mortar, and 4 mL of 30% (v/v) isopropanol–15 mM HCl were added. The mixture was shaken at 4 °C for 30 min to extract plant hormones. Two mL of dichloromethane were added; the mixture was shaken at 4 °C for 30 min and centrifuged at 14,000 rpm for 15 min, and the alcoholic solution containing hormones was recovered. The alcoholic supernatant was concentrated under a stream of nitrogen gas to reach a final volume of 180 µL. Plant hormones were extracted and analyzed as described in Pan et al. with some modifications. The elution was done using linear steps of 0 to 2 min at 30% of B, 2 to 20 min increasing to 100% of B, 20 to 22 min at 100% of B, and 22 to 25 min reducing to 30% of B. The MS/MS detection was performed using a multiple reaction monitoring mode at −4500 V, 25 psi and 10 L·min−1 of nitrogen flow. 
Twenty \u00b5L of a mixture of deuterated standards for IAA, GA1, GA3, GA4, t-Z, iP, dZ and EB at a concentration of 50 ng\u2219mL\u22121 were added to each sample. For the detection in the negative mode, the mass-to-charge (m/z) ratio of IAA and GA3 were = 15.5) and , respectively, and in the positive mode the m/z ratios for iP, t-Z, dZ and EB were , and , respectively. For the detection of internal standard in a negative mode the m/z ratios for d5-IAA and d2-GA3 were (347.1 \u2192 142.7) and (347.1 \u2192 142.7), respectively, and in the positive mode the m/z ratios for d6-iP, d5-t-Z, d5-dZ and d3-EB were (210 \u2192 142), (225.2 \u2192 136.2), (225.2 \u2192 136.2) and (485 \u2192 207), respectively.Plant hormones were detected and quantified using an HPLC-ESI-MS/MS system . The mobile phase was prepared with 0.1% of formic acid (A) and 0.1% of formic acid in methanol (B). A sample of 20 \u00b5L was separated using a C18 reverse phase column with a flow rate of 0.3 mL minT) at 95% confidence interval (p < 0.05). Requirements of normality and homogeneity of variance were tested using Kolmogorov-Smirnov and Bartlett Tests, respectively [Significant differences were determined by two-way analysis of variance (ANOVA) followed by Tukey\u2019s multiple comparison tests (ectively . 
In this work, we showed that a reducing redox status due to the increase in ASC, GSH and NADPH levels and the activation of the TRR/TRX system determines the increase in the growth-promoting hormones IAA, GA3 and t-Z, explaining, at least in part, the stimulation of growth induced by OC kappa in Eucalyptus trees."} A recent study in Nature Genetics demonstrated that extrachromosomal DNA (ecDNA)-based oncogene amplification frequently occurs in most cancer types and that it is different from chromosomal amplification.1 The authors further showed that ecDNA in multiple cancer types leads to poor outcomes in patients; thus, ecDNA is a novel potential diagnostic or therapeutic target for tumor treatment. Oncogenes amplified on ecDNA show high expression levels of mRNA transcripts.2 Furthermore, researchers have demonstrated that (1) ecDNA is circular, (2) ecDNA drives massive expression of oncogenes, (3) ecDNA contains highly accessible chromatin, and (4) ecDNA has a significantly greater number of ultra-long-range interactions with active chromatin.5 Although ecDNA-based amplification has been shown to promote intratumoral genetic heterogeneity and accelerated tumor evolution, its frequency and clinical impact are still unclear. Previously, Mischel et al. found that there were many cyclic ecDNAs in human tumor cells that played a key role in the rapid evolution of tumors and in their defense against threats such as chemotherapy, radiation, and other therapies.1 Earlier, Paul S. Mischel et al. 
developed the ECdetect tool to conduct unbiased integrated ecDNA analyses of WGS datasets from 17 cancer types and showed that ecDNA was detected in nearly half of the samples.2 In this recent study, ecDNA-based circular amplicons were found in 25 of 29 cancer types analyzed; in particular, aggressive histological cancers harbored a high frequency of amplicons.1 All these results demonstrate that ecDNA amplification can be defined as a feature of multiple cancer subtypes.In order to perform a global survey on the frequency of ecDNA-based oncogene amplification, the authors performed ecDNA prediction from whole-genome sequencing (WGS) data using AmpliconArchitect based on three characteristic properties of ecDNA: its circular nature, its highly amplified characteristic, and its lack of a centromere. After analyzing WGS datasets from 3212 tumor samples and 1810 non-neoplastic ones, the authors found that, among all tumor samples, approximately 14.3% carried one or more circular amplicons, which shows that ecDNA-based amplification is a common event in cancer. However, almost no circular amplifications were detected in matched whole-blood or normal tissue samples.1 Furthermore, the authors found that ecDNA was formed through a random process, and most circular amplicon breakpoints showed no or minimal sequence homology, implicating nonhomologous end-joining (NHEJ) in ecDNA-associated breakpoint repair. Subsequently, the authors revealed that ecDNA formation can result from chromothripsis, which is associated with NHEJ. In order to examine the transcriptional consequences of circular ecDNA amplification, the authors investigated the correlation between DNA copy number (CN) and oncogene expression level. 
They found that ecDNA amplifications resulted in higher levels of oncogene transcription compared to CN-matched linear DNA.1 This is consistent with recent findings that oncogenes amplified on ecDNA have markedly increased numbers of transcripts in cancer cell lines and clinical tumor samples.5 To compare the epigenetic mechanisms governing gene expression between circular amplifications and noncircular regions, ATAC-sequencing profiles from 36 samples were analyzed. The authors revealed that enhanced chromatin accessibility played key roles in the dysregulation of ecDNA oncogenes and that ecDNA amplifications more frequently resulted in transcript fusions. Next, the authors found that the chromosomal distribution of the 579 circular amplicons was highly nonrandom. Then, they analyzed the association between the 24 most recurrently amplified oncogenes and circular amplicons, and found that 38% of these genes were most frequently present on ecDNA amplicons.1 Together, these data imply that ecDNA amplifications may provide tumors with tolerance to clinical treatment and aid them in escaping from barriers to evolution. Furthermore, the authors used the lymph node status and gene expression signatures (increased tumor cell proliferation and reduced immune cell infiltration) to investigate the association between ecDNA and aggressive biological features. Their results revealed that ecDNA amplification groups showed more spread to lymph nodes at initial diagnosis, higher cellular proliferation scores, and lower immune infiltration scores. Most importantly, the authors found that the 5-year survival rate was significantly lower in patients whose tumors harbored ecDNA amplification, demonstrating that the presence of ecDNA was associated with aggressive cancer behavior. 
To illustrate how survival is related to disease subtypes, the authors used a multivariate Cox proportional-hazards model to test survival after controlling for disease subtype and found that circular amplification resulted in significantly higher hazard ratios.1 Oncogenes encoded on ecDNA untether themselves from their chromosomal constraints, which endows tumors with the ability to rapidly change their genomes in response to changing environments, thereby accelerating tumor evolution and contributing to therapeutic resistance. Because ecDNAs widely exist in different types of tumors, deep dives into the basic structure and function of ecDNAs will aid in understanding the mechanisms underlying their biogenesis, replication, and trafficking. More importantly, it is promising to find new diagnostic markers based on the presence of ecDNAs in tumors, as well as to develop innovative therapeutic strategies that can target ecDNA to interfere with its ability to drive tumor growth, drug resistance, and recurrence. In summary, this study revealed that circular ecDNA plays a key role in the development and progression of cancer by not only facilitating enhanced chromatin accessibility and the transcription of oncogenes but also adversely affecting the survival and outcomes of cancer patients."} Aim: The aim of the study was to investigate the early neurovascular alterations of the retina in radiation encephalopathy (RE) patients with normal-ranged visual acuity after radiotherapy for nasopharyngeal carcinoma. Methods: Fifty-five RE patients and 54 healthy age-matched subjects were enrolled in this retrospective cross-sectional case–control study. The best corrected visual acuity (LogMAR) of the included eye should not be more than 0. The vessel density and thickness of different locations in the retina were acquired automatically using optical coherence tomography angiography (OCTA). The data were then compared between the RE patients and the controls. 
The location included the whole retina, the superficial vascular plexus (SVP)/the ganglion cell complex (GCC), the deep vascular plexus (DVP), and the choroid in the macular area, as well as the inside disc and peripapillary area in the optic nerve head (ONH). The risk factors in OCTA retinal impairments were analyzed using a backward multiple linear regression. The relationships between mean deviation (MD) and pattern standard deviation (PSD) in the visual field (VF) and the OCTA parameters were also analyzed in RE patients.Results: The vessel density of the GCC was significantly reduced in RE patients compared with controls (p = 0.018), and the reductions were mainly shown in the parafoveal (p = 0.049) and perifoveal fields (p = 0.006). The thickness of the GCC was correspondingly reduced . In addition, the sub-foveal choroidal thickness (p = 0.039) was also reduced in RE patients. The vessel density of the GCC (R2 = 0.643) and DVP (R2 = 0.777) had a significant positive correlation with high-density lipoprotein cholesterol (HDL-C) and apolipoprotein A1 (ApoA1) and had a significant negative correlation with age . The vessel density of the GCC also had a significant negative correlation with apolipoprotein B (ApoB) . In the VF, MD had a significant positive correlation with the vessel density inside disc , whereas PSD showed a significant negative correlation with the vessel density inside disc and the average GCC thickness, respectively .Conclusion: With the aid of OCTA, we found that neurovascular alterations of the retina may exist in RE patients with normal-ranged visual acuity. Herein, we suggest the implementation of OCTA to assist ophthalmologists in the early detection and consistent monitoring of radiation-related eye diseases to avoid delayed diagnosis. Nasopharyngeal carcinoma (NPC) has a higher incidence in Southeast Asia, particularly in South China . 
Severe radiation-induced eye complications, such as radiation optic neuropathy (RON) and radiation retinopathy (RR), are characterized by irreversible neural and microvascular impairments. Herein, the major purpose of this study was to use OCTA to investigate the neurovascular alterations of the retina in RE patients with normal-ranged visual acuity after RT for NPC. Moreover, the relationships between visual field (VF) and OCTA neurovascular measurements were also analyzed in the RE group. All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008 (5). Informed consent was obtained from all patients included in the study. Approval for the study was obtained from the ethics committee of Sun Yat-sen Memorial Hospital, Sun Yat-sen University. Fifty-five RE patients were recruited from the neurology and ophthalmology departments of Sun Yat-sen Memorial Hospital between January 2017 and January 2019 in this retrospective cross-sectional clinical study. Fifty-four healthy age-matched subjects were included as controls. The major inclusion criteria included a history of RT due to NPC and a diagnosis of RE in the patients. A diagnosis of RE was provided by a neurologist. The eye laterality of the affected cerebrum side was chosen, and if bilateral RE existed, the eye laterality of the more serious cerebrum side was selected. Thorough ophthalmic examinations, including best corrected visual acuity (BCVA), refractive error, intraocular pressure (IOP), axial length, dilated fundus examination, visual evoked potential (VEP), 30-2 VF testing, and OCTA, were performed in RE patients when a diagnosis of RE was confirmed. IOP was measured using the Canon TX-20 non-contact tonometer. VF testing was performed by two trained optometrists. The test was repeated when unreliable indices existed or the patient had not understood the instructions during any step of the test procedure. 
Unreliable indices included a false positive/negative error score of more than 10% or a fixation loss score of more than 15%. An EBRT setup, delivered in 2 Gy per day and 5 days per week, mainly consisted of opposing lateral photon fields (6–8 MV) to treat the nasopharynx and the upper neck. During the RT process, a Cerrobend block was applied to restrict the radiation area. The overall dose for all patients was about 65 Gy (60–75 Gy) to the nasopharyngeal primary site and 70 Gy to the metastatic lymph nodes. The magnetic resonance imaging (MRI) images of RE patients were acquired. The RE lesion volume was detected by using T2-weighted fluid-attenuated inversion recovery and was independently assessed by a radiologist. The RE lesion volume = the largest lesion area × the number of T2 MRI images with lesions. OCTA images were obtained using AngioVue software 2.0 of the RTVue XR Avanti device. A speed of 70,000 A scans per second and a split-spectrum amplitude-decorrelation angiography (SSADA) algorithm were applied in the scan. Images were excluded when the scan quality was < 6 or obvious artifacts were detected by a senior ophthalmologist's thorough check. The vessel density of the macular area and the optic nerve head (ONH) area was assessed in Angio Retina mode (6 × 6 mm) and Angio Disc mode (4.5 × 4.5 mm), respectively. The foveal avascular zone (FAZ) was automatically obtained based on the superficial vascular plexus (SVP) image in Angio Retina mode (6 × 6 mm). The retinal thickness and retinal nerve fiber layer (RNFL) thickness were assessed using the Retina Map mode and ONH mode. The average ganglion cell complex (GCC) thickness, global loss volume (GLV), and focal loss volume (FLV) were calculated in the GCC mode. The above measurements were automatically exported from AngioVue 2.0. 
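The reported lesion-volume approximation (largest lesion area times the number of T2 slices showing lesions) reduces to a single multiplication; the numbers below are purely hypothetical.

```python
# Minimal sketch of the reported approximation:
#   RE lesion volume ~= largest lesion area x number of T2 slices with lesions.
# Units follow the MRI measurements (e.g. an area in cm^2 times a slice count).

def re_lesion_volume(largest_lesion_area, n_slices_with_lesion):
    return largest_lesion_area * n_slices_with_lesion

# Hypothetical example: largest lesion area 2.5 (area units), seen on 8 slices.
print(re_lesion_volume(2.5, 8))  # 20.0
```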
In addition, sub-foveal choroidal thickness (SFCT), defined as the distance between the outermost edge of the retinal pigment epithelium (RPE) and the sclera–choroidal border, was measured manually. The average SFCT was defined as the average value of the horizontal and vertical SFCT. The statistical analyses were performed using SPSS 24.0. Independent two-tailed Student's t-tests were performed to compare normally distributed data between RE patients and the controls. Categorical variables were analyzed with a chi-squared test. In RE patients, backward multiple linear regression analyses were performed between the major OCTA measurements [whole image vessel density/thickness of the GCC/deep vascular plexus (DVP) and the radial peripapillary capillary plexus] and the other factors including age, the RE lesion volume, radiation dose, RE symptom onset time, systolic and diastolic blood pressure, HbA1c, total cholesterol (TC), triglyceride (TG), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), apolipoprotein A1 (ApoA1), and apolipoprotein B (ApoB) in order to assess the risk factors of retinal impairments. Moreover, a backward multiple linear regression was applied between the above OCTA measurements and the VF parameters including mean deviation (MD) and pattern standard deviation (PSD). Fifty-five RE patients and 54 age-matched healthy controls were included in this study. There was no statistically significant difference with regard to age, gender, hypertension, diabetes mellitus, eye laterality, or BCVA between RE patients and the controls. The whole image vessel density of the GCC was significantly reduced in RE patients (p = 0.018), whereas the density in the deep layer [inner plexiform layer–outer plexiform layer (IPL–OPL)] and the density near the ONH were not significantly reduced in RE. 
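The backward multiple linear regression used for the risk-factor analysis can be illustrated with a small backward-elimination sketch. The study used SPSS; the drop criterion here (remove the predictor with the smallest |t| while |t| is below roughly 2, i.e. p > 0.05) and the toy data are assumptions of this sketch, not the authors' exact settings.

```python
import numpy as np

def ols_t_stats(X, y):
    """Fit y = X b by least squares; return coefficients and t statistics."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)          # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)     # covariance of the coefficients
    return beta, beta / np.sqrt(np.diag(cov))

def backward_eliminate(X, y, names, t_min=2.0):
    """Iteratively drop the weakest predictor until all |t| >= t_min."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        _, t = ols_t_stats(X[:, keep], y)
        weakest = int(np.argmin(np.abs(t[1:]))) + 1   # never drop the intercept
        if abs(t[weakest]) >= t_min:
            break
        keep.pop(weakest)
    return [names[j] for j in keep]

# Toy data: the outcome depends on x1 only; x2 is noise and should be dropped.
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=200), rng.normal(size=200)
y = 3.0 * x1 + rng.normal(scale=0.1, size=200)
X = np.column_stack([np.ones(200), x1, x2])
sel = backward_eliminate(X, y, ["intercept", "x1", "x2"])
print(sel)
```

The same procedure, run with the OCTA measurement as the outcome and the clinical factors as predictors, yields the reduced models whose retained predictors are reported as risk or protective factors.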
Moreover, no significant difference was shown in the FAZ between the two groups, but the average FAZ area in the RE group was larger than that in the control group. With regard to vessel density, the whole image density of the macular area in the GCC [internal limiting membrane–inner plexiform layer (ILM–IPL)] was significantly reduced in RE patients compared with the control group. Significantly reduced thickness of the GCC (ILM–IPL) and of the outermost retinal layers [retinal pigment epithelium–Bruch's membrane (RPE–BRM)] was found in RE patients. SFCT and inferior GCC thickness were also decreased in RE patients. Besides, GLV was also noticeably increased in RE patients. Detailed data are shown in the corresponding tables. The backward multiple linear regression analyses between the major OCTA measurements and the other factors revealed that the vessel density of the GCC and DVP was correlated with these factors. The vessel density of the GCC had a significant positive correlation with TG, HDL-C, and ApoA1 and had a significant negative correlation with age, LDL-C, and ApoB. Similarly, the vessel density of the DVP showed a significant positive correlation with HDL-C, ApoA1, diastolic blood pressure, and TG and showed a significant negative correlation with age and TC. However, the retinal thickness of the macula, peripapillary vessel density, and RNFL thickness did not show any correlation with the other factors. Thus, it seemed that TG, HDL-C, ApoA1, and diastolic blood pressure were major protective factors of retinal vessels, whereas age, LDL-C, ApoB, and TC were risk factors of retinal impairments. In the VF, MD had a significant positive correlation with the vessel density inside disc, whereas PSD showed a significant negative correlation with the vessel density inside disc and the average GCC thickness, respectively. No significant correlation was observed between PSD (or MD) and the rest of the measurements. 
The VF defects corresponded to the locations where the vessel density and GCC thickness were reduced. Forty percent (22/55) of RE patients had a normal VF, whereas 60% (33/55) had an abnormal VF. A backward multiple linear regression was applied between the above OCTA measurements and the VF parameters (MD and PSD) in RE patients. MD had a significant positive correlation with the vessel density inside disc. Radiation injures the endothelium of venules, arterioles, and capillaries, ultimately blocking the vessels and decreasing the densities in both the SVP and DVP. Another possible explanation is layer segmentation errors caused by irregular boundaries and shapes of the capillary networks. Projection artifacts from other layers may also be a reason for the inapparent reduction in the DVP. To avoid projection artifacts, Sellam et al. considered this issue in their analysis. Moreover, the tumor type, as well as the RT methods, may affect the vessel density. Another finding in the RE cohort was that the GCC vessel density reductions were mainly shown in the parafoveal and perifoveal fields, which may be because the fovea lacks the structure of the GCC and vessels. The significant increment of GLV suggested a primarily diffuse impairment in the retinal ganglion cells. Besides, interestingly, reduced GCC thickness, which indicates neural impairment, was prominent in the inferior section in the RE group. This might be due to the close proximity of the radiation area, as the nasopharynx is located below the horizontal surface of the eyeball. The most interesting finding in the risk factor analyses was that serum lipid-related items were closely related to retinal vessel impairment. A higher level of ApoB and a lower level of ApoA1 may contribute to the risk of RE-related retinal diseases. ApoB releases proinflammatory factors and is highly involved in atherogenesis, whereas ApoA1 has an anti-inflammatory effect and initiates reverse cholesterol transport from the vessels to the liver. 
Unlike glaucoma and optic neuritis, most of the VF defect patterns were irregular and lacked distinguishable characteristics. Ozkaya et al. found that the correlation between OCTA measurements and PSD (R2 = 0.437) was better than that between OCTA measurements and MD (R2 = 0.241). Similarly, Liu and his colleagues reported consistent results. Written informed consent was obtained from the participants for the publication of any potentially identifiable images or data included in this article. ZL: data analysis and manuscript drafting. ZZ: data collection and image editing. JX and YL: manuscript design and manuscript polishing. All authors read and approved the final version of the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} Collecting parallel sentences from nonparallel data is a long-standing natural language processing research problem. In particular, parallel training sentences are very important for the quality of machine translation systems. While many existing methods have shown encouraging results, they cannot learn various alignment weights in parallel sentences. To address this issue, we propose a novel parallel hierarchical attention neural network which encodes monolingual sentences versus bilingual sentences and constructs a classifier to extract parallel sentences. In particular, our attention mechanism structure can learn different alignment weights of words in parallel sentences. Experimental results show that our model obtains state-of-the-art performance on the English-French, English-German, and English-Chinese datasets of the BUCC 2017 shared task on parallel sentence extraction. Parallel sentences are a very important linguistic resource comprising text in parallel translation across different languages. A large parallel corpus is crucial to train machine translation systems which can produce good quality translations. 
As is well known, the major bottleneck of statistical machine translation (SMT) and neural machine translation (NMT) is the scarceness of parallel sentences in many language pairs. Tools such as Stanford CoreNLP (https://stanfordnlp.github.io/CoreNLP/) can be used to extract English persons, locations, and organizations, while there are no open-source tools to deal with named entities in other languages such as Uyghur. To address those issues, many methods extract parallel sentences without feature engineering. More recent approaches used deep learning, such as convolutional neural networks. In order to encode the context information of a sentence, we use a bidirectional GRU to calculate the hidden representation state for the t-th time step in the source language, where θ_r^s denotes the model parameters of the word-level GRUs. We obtain the context information for a given word w_{i,t}^s by concatenating the forward and backward hidden states, h_{i,t}^s = [forward h_{i,t}^s; backward h_{i,t}^s]. We first feed h_{i,t}^s through a one-full-layer perceptron to obtain u_{i,t}^s as a hidden representation of h_{i,t}^s. Then, in order to learn the importance of a word in a sentence, we calculate the similarity of u_{i,t}^s with a word-level context vector u_w. Next, we use a softmax function to get a normalized importance weight. Note that u_w is a model parameter in the attention mechanism. The context vector u_w can be seen as a high-level representation that selects which word is more important for a sentence. After that, we get a state u^s by a weighted sum of the word annotations based on the weights. We can get a target vector u^t by the same method. At the bilingual level, after combining the intermediate vectors u^s and u^t, the network encodes the sequence vectors. We concatenate the forward GRU and the backward GRU to obtain the hidden states for each input vector. 
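The word-level attention described above can be sketched in NumPy. This is an illustrative shape check with random stand-in parameters, not the trained model: `W`, `b`, and `u_w` follow the names in the text, while the sizes are assumptions.

```python
import numpy as np

# For each hidden state h_t we compute u_t = tanh(W h_t + b), score it against
# the context vector u_w, softmax-normalize the scores into weights alpha_t,
# and take the weighted sum of the hidden states as the sentence vector.

rng = np.random.default_rng(1)
T, d, a = 6, 8, 5              # words, hidden size, attention size (assumed)
H = rng.normal(size=(T, d))    # stand-in for bidirectional-GRU states h_{i,t}
W, b = rng.normal(size=(a, d)), rng.normal(size=a)
u_w = rng.normal(size=a)       # word-level context vector (learned parameter)

U = np.tanh(H @ W.T + b)                        # u_t = tanh(W h_t + b)
scores = U @ u_w                                # similarity with u_w
alpha = np.exp(scores) / np.exp(scores).sum()   # softmax importance weights
s = alpha @ H                                   # sentence vector u^s

print(round(alpha.sum(), 6))   # 1.0: weights form a distribution over words
print(s.shape)                 # (8,): same size as one hidden state
```

The target-side vector u^t is obtained the same way from the target sentence's hidden states before the two are combined at the bilingual level.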
For the classification problem, we usually use the cross-entropy as a loss.In this section, we should detect whether a sentence pair is parallel or not from the top neural network. In order to achieve this goal, we employ a softmax layer to classify parallel sentences. The basic process is that it maps the multiple outputs of the encode layer into an interval . In this paper, we treat the classifying parallel sentence as a binary classification problem. We input the source and target sentences into the encode layer. The encoder layer outputs a state vector u into the classification layer. For the classification layer, we use the following formula that maps the input into the interval . It is obvious that the output of the classification layer is a probability.\u03d5 to stand for the binary cross-entropy. Then, we use the gold label li and predicted label li\u2032 for a pair of a sentence i to optimize the loss. The final objective can be minimized with stochastic gradient descent (SGD) or variants such as Adam to maximize classification.We use In this section, we assess the effectiveness of our model. We compare our method with multiple settings. As we want to improve the performance of our model, we artificially construct negative samples.Hangya and Fraser showed tk\u03f5{0,1,\u2026, 9}, such that with k=0 and k=9, a model is respectively trained on the dataset with a positive to negative sentence pairs ratio of 100% and 10%.Gregoire and Langlais showed thttps://comparable.limsi.fr/bucc2017/cgi-bin/download-data.cgi) to train our model. For test sets, we use the BUCC'17 English-French, English-Chinese, and English-German datasets (https://comparable.limsi.fr/bucc2017/cgi-bin/download-test-data.cgi). Each testing dataset contains two monolingual corpora. The monolingual corpora contain about 100\u2009k\u2013550\u2009k sentences and 2,000\u201314,000 sentences are parallel. 
For the convenience of researchers, BUCC 2017 provided us with an evaluation script and a gold standard data to calculate the precision, recall, and F-score. For Chinese, we use OpenCC (https://github.com/BYVoid/OpenCC) to normalize characters to be simplified and then perform Chinese word segmentation and POS tagging with THULAC (http://thulac.thunlp.org). The preprocessing of English, French, and German involves tokenization, POS tagging, lemmatization, and lower casing which we carry out with the NLTK (http://www.nltk.org) toolkit. The statistics of the preprocessed corpora are given in To implement experiments, we use the BUCC'17 English-French, English-Chinese, and English-German parallel datasets Multilingual sentence embeddings (MSE) Dual conditional cross-entropy (DCCE) An LSTM recurrent neural network (LSTM) We compare our model to four baselines (the parameters of the baselines follow their authors):The first baseline (ME) is the traditional statistics-based approach that is conventionally considered as alignment features between two sentences. The alignment features mainly conclude the number of connected words, the top three largest fertilities, and the length of the longest connected substring. We use those features to construct a maximum entropy classifier according to Munteanu et al. This method mainly relied on feature engineering. Feature engineering usually suffers from the language diversity issue.The second baseline (MSE) is an important contribution of this type to approach that mentioned in . First, The third baseline (DCCE): this work proposed dual conditional cross-entropy to extract parallel sentences. This work used the computed cross-entropy scores based on training two inverse translation models on parallel sentences. 
This method requires additional computational resources to train the translation models. The final baseline (LSTM) is based on bidirectional recurrent neural networks that learn sentence representations in a shared vector space by explicitly maximizing the similarity between parallel sentences. This method does not distinguish the various weights of words in detecting parallel sentences. These end-to-end network models do not add attention to the encoder and do not learn complex mappings and alignments to quantify parallel information. Compared to the baselines, the PHAN is, first, independent of feature engineering, which makes the PHAN universal and easy to apply to multiple languages. Moreover, the PHAN uses a parallel hierarchical attention mechanism to capture deep representations of monolingual and parallel bilingual sentences. In this section, we first give the overall performance of different models, reported as F1 scores on the three language pairs. We further analyze the performance of PHAN to observe what makes it perform better than the model without the attention mechanism. Alignment is an important factor in identifying parallel sentences. If the weights of alignment were not important, a neural network without an attention mechanism could also effectively detect parallel sentences, since all alignments would have the same contribution. However, alignment deeply depends on linguistics and context. We can visualize alignments for some sample sentences and observe translation quality as an indication of an attention model. In order to test that our model is able to mine various informative alignments in parallel sentences, we use this method to make the analysis. To test whether our model can better capture alignments than an LSTM without a parallel attention mechanism, we plot the distribution of the attention weights of the words in bilingual sentences for the three language pairs. 
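The precision/recall/F-score evaluation used throughout the experiments can be sketched as follows, with toy sentence-pair IDs standing in for the BUCC gold data.

```python
# Extracted sentence-pair IDs are scored against the gold pairs:
# precision = correct extractions / all extractions,
# recall    = correct extractions / all gold pairs,
# F1        = harmonic mean of the two.

def prf1(gold_pairs, extracted_pairs):
    gold, pred = set(gold_pairs), set(extracted_pairs)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical IDs: 4 gold pairs, 3 extracted, 2 of them correct.
gold = {("en-1", "fr-7"), ("en-2", "fr-3"), ("en-5", "fr-9"), ("en-8", "fr-2")}
pred = {("en-1", "fr-7"), ("en-2", "fr-3"), ("en-4", "fr-6")}
p, r, f = prf1(gold, pred)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.667 0.5 0.571
```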
The results are shown in the corresponding figures. We further explore the language differences and their impact on detecting parallel sentences. We manually extract English-Chinese and English-French parallel sentences to discuss language differences. Example 1 is extracted by the PHAN, but the other baselines miss it. We extract sentences from the Wikipedia comparable corpora (https://linguatools.org/tools/corpora/wikipedia-comparable-corpora/). Then, we add the obtained parallel sentences to the three original training datasets as the new training sets for machine translation. To evaluate translation performance, we use the well-known BLEU score. We use phrase-based systems trained with Moses for the SMT system. To train the NMT systems, we use the OpenNMT (https://github.com/OpenNMT/OpenNMT-py) system. In this paper, we hope to obtain parallel sentences and improve the performance of machine translation systems. In training the machine translation systems, we use the BUCC'17 English-French, English-Chinese, and English-German parallel datasets as baselines. We use our model to extract parallel sentences from Wikipedia. The baseline systems are trained with the BUCC'17 English-French, English-Chinese, and English-German parallel sentences. For the remaining compared systems, we sort the extracted parallel sentence pairs in descending order according to the threshold values of the extraction system and append the top {20000, 50000, …, 500000} extracted pairs to the original training dataset. We vary the number of extracted parallel sentences used to train the machine translation systems to test the stability of our model. We trained 48 machine translation systems for each of the SMT (http://www.statmt.org/moses/) and NMT (https://opennmt.net/) approaches. In this paper, we explore a new parallel hierarchical attention network to extract parallel sentences. 
Our system is able to obtain state-of-the-art performance in filtering parallel sentences while using less feature engineering and preprocessing. Additionally, our model can make full use of monolingual and bilingual sentences. Moreover, we propose a parallel attention mechanism to learn various alignment weights in parallel sentences. In the experiments, we show that our model obtains a state-of-the-art result on the BUCC2017 shared task. In particular, the effectiveness of our model in using the obtained parallel sentences to implement machine translation tasks is demonstrated.BPE and similar methods can effectively help us solve the out-of-vocabulary issue. We will use BPE to improve its performanceOur model needs parallel sentences to be trained, which can be problematic in low-resource language pairs. In order to lessen the need for parallel sentences, identifying parallel sentences via minimum supervision is a promising avenue, especially in low-resource language pairsIn the future, we will explore the following directions:"} +{"text": "Helicobacter pylori infection and/or gastric disorders and chronic kidney disease (CKD) has not been elucidated. We investigated the relationship between Helicobacter pylori and/or atrophic gastritis (AG) and chronic kidney disease. In total, 3560 participants (1127 men and 2433 women) were eligible for this cross-sectional study. We divided participants into four study groups: with/without Helicobacter pylori infection and with/without AG. The HP (+) AG (\u2212) group demonstrated a significant association with CKD compared with the HP (\u2212) AG (\u2212) group . In contrast, the HP (+) AG (+) group showed significantly lower adjusted odds of CKD than the HP (\u2212) AG (\u2212) group . H. pylori infection without AG might be associated with CKD in these participants. Conversely, the HP (+) AG (+) group had lower odds of CKD. 
Uncovering an association between gastric and renal conditions could lead to the development of new treatment strategies. The stomach is reported to be associated with conditions affecting other organs. For example, an association between atrophic gastritis (AG) and coronary artery disease has been described, H. pylori infection is associated with the metabolic syndrome, abnormal lipid profiles and insulin resistance, and H. pylori infection and AG are useful for risk assessment of osteoporosis. These findings indicate that H. pylori infection and/or AG affect not only the stomach but also other organs. A previous study reported that individuals infected with H. pylori had a higher risk of subsequent renal dysfunction than those not infected. Conversely, the H. pylori infection rate is lower in patients with peptic ulcer disease and concomitant chronic kidney disease (CKD) than in those without CKD. However, the relationship between H. pylori infection and/or gastric disorders and CKD has not been elucidated. Therefore, in this study, we investigated the relationship between H. pylori and/or AG and CKD, in order to determine whether an association exists between the stomach and kidneys. This cross-sectional study included 4337 individuals enrolled in the Japan Multi-Institutional Collaborative Cohort Study in the Kyoto area, from 2011\u20132013. We excluded participants with a history of H. pylori eradication therapy (n = 530) or who had undergone gastrectomy (n = 30). Further, participants who used a proton-pump inhibitor (n = 112), or who had missing medical information data (n = 105), were excluded. This left 3560 participants (1127 men and 2433 women) eligible for analysis. 
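The participant flow reported here is internally consistent: 4337 enrolled minus the four exclusion counts leaves the 3560 analysed. A trivial check (function and label names are illustrative; the counts are from the text and assumed non-overlapping, as reported):

```python
def remaining_participants(enrolled, exclusions):
    """Subtract each (assumed non-overlapping) exclusion count
    from the enrolled cohort."""
    return enrolled - sum(exclusions.values())

# Exclusion counts as reported in the text.
flow = {
    "H. pylori eradication therapy": 530,
    "gastrectomy": 30,
    "proton-pump inhibitor use": 112,
    "missing medical information": 105,
}
eligible = remaining_participants(4337, flow)  # 3560, matching the text
```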
The Japan Multi-Institutional Collaborative Cohort Study is a new cohort study, launched in 2005, to examine gene\u2013environmental interactions in lifestyle-related diseases [2) = 194 \u00d7 creatinine\u22121.094 \u00d7 year\u22120.287 (for men) and eGFR (mL/min/1.73 m2) = 194 \u00d7 creatinine\u22121.094 \u00d7 year\u22120.287 \u00d7 0.739 (for women) [2). Anthropometry data was obtained during the clinical examination. Medical history and medication use were assessed by means of a questionnaire. Hypertension was defined as a resting systolic blood pressure \u2265 140 mmHg or if receiving medication, diabetes mellitus (DM) as HbA1c \u2265 6.5% or if receiving medication, dyslipidemia as LDL-cholesterol \u2265 140 mg/dL and HDL-cholesterol < 40 mg/dL or if receiving medication, and anaemia as haemoglobin \u2264 13.0 g/dL in men and \u2264 12.0 g/dL in women.The study evaluated medical information obtained via self-administered questionnaires. Metabolic equivalents (METs) were assessed, as previously reported . Furtherr women) . The preH. pylori antibody (HP) positivity and levels of serum PG. This method has recently been used in Japan for gastric cancer screening of high-risk individuals [H. pylori antibodies (HP) and PG levels. AG was defined according to the serum PG I and II criteria proposed: when a participant fulfilled the criteria of both serum PG I value \u2264 70 ng/mL and PG I/II ratio \u2264 3.0, he or she was diagnosed as having AG. Infection with H. pylori was diagnosed using a microplate enzyme immunoassay kit [H. pylori, i.e., HP (+). The control group consisted of participants who tested HP (\u2212) AG (\u2212); while the other three groups consisted of participants who tested HP (+) AG (\u2212), HP (+) AG (+), or HP (\u2212) AG (+).Participants were divided into four study groups according to a combination of serum anti-ividuals ,17,18. I, Japan) . Serum sp values < 0.05 were considered statistically significant. 
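The eGFR equation quoted above (the Japanese coefficient-modified equation, in which the "year" term is the participant's age in years) can be written directly as a function; the 0.739 factor applies to women. This is a sketch of the stated formula only; CKD staging cut-offs are not defined in this excerpt:

```python
def egfr_japanese(creatinine_mg_dl, age_years, female):
    """eGFR (mL/min/1.73 m^2) = 194 * Cr^-1.094 * age^-0.287,
    multiplied by 0.739 for women (equation quoted in the text)."""
    egfr = 194.0 * creatinine_mg_dl ** -1.094 * age_years ** -0.287
    return egfr * 0.739 if female else egfr
```

For example, a 50-year-old man with serum creatinine 1.0 mg/dL has an eGFR of roughly 63 mL/min/1.73 m^2 under this equation.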
Continuous variables were expressed as means \u00b1 standard deviations (SD), and categorical data were expressed as counts and percentages. Inter-group comparisons were performed using one-way analysis of variance for continuous variables, or the chi-squared test for categorical variables. Categorical variables included sex, alcohol use, smoking, hypertension, DM, dyslipidemia, stroke, myocardial infarction and/or stenocardia, and anaemia. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated using logistic regression, in which CKD was the dependent variable and year, sex, body mass index (BMI), METs, smoking, alcohol use, hypertension, DM, dyslipidemia, stroke, myocardial infarction and/or stenocardia, and anaemia were the independent variables. Analyses were performed using SPSS statistical software (PASW 25.0). Participants were grouped by H. pylori infection and AG status. The mean age of the control group was 50.0 years, vs. > 54.0 years for the other groups. Relative to the control group, the HP (+) AG (\u2212) group had an adjusted OR for CKD of 1.465 adjusted for year and sex; 1.439 adjusted for year, sex, BMI, METs, alcohol use, and smoking; and 1.443 adjusted for year, sex, BMI, METs, smoking, alcohol use, hypertension, stroke, dyslipidemia, and myocardial infarction and/or stenocardia. There was no significant difference in the adjusted odds of CKD between the control group and the HP (+) AG (\u2212) group when adjusted for lifestyle factors or medical history. In contrast, when we compared the HP (+) AG (+) group with the HP (\u2212) AG (\u2212) group, the HP (+) AG (+) group had a lower adjusted OR for CKD of 0.610 adjusted for year and sex. There was no significant difference in the odds of CKD between the control group and the HP (+) AG (+) group, adjusted for lifestyle factors or medical history. On the other hand, the HP (\u2212) AG (+) group showed no significant association with CKD.H. 
pylori infection not only affects the stomach, but is associated with other risk factors for CKD: hypertension, metabolic syndrome, DM, cardiovascular disease, and changes in the lipid profile. In this study, we therefore examined H. pylori infection and/or AG. Associations between two or multiple organs are essential for the human body to maintain homeostasis and to function normally. Elucidating these associations may be important for clinical decision-making and the development of appropriate treatments for some diseases. With knowledge of interactions between organ systems, we can treat diseases focusing not only on the symptomatic organ, but also on the organ fundamentally causing the disease. Moreover, such knowledge should allow for prediction and prevention of other related-organ disorders when a patient suffers from a particular organ disorder. Previous meta-analyses have examined H. pylori infection and CKD and revealed no association between H. pylori infection and either non-dialysis or dialysis CKD, whereas another analysis of H. pylori infection in patients with CKD concluded that there is a lower prevalence of H. pylori infection in patients with CKD. The evidence on H. pylori infection is thus still conflicting. Given that H. pylori inhabits the stomach, it is necessary to consider the state of AG when examining the relationship between H. pylori infection and CKD. Although H. pylori infection has been reported as a risk factor for peptic ulcer, the prevalence of H. pylori infection is lower in individuals with peptic ulcer disease and concomitant CKD than in those without CKD. Indeed, in the present study, the HP (+) AG (\u2212) and HP (+) AG (+) groups demonstrated opposite associations with CKD. If we examine only the presence or absence of H. pylori infection and the risk of CKD without considering the presence or absence of AG, the results are inconsistent. Furthermore, in the 50\u201359-year-old age stratum, the HP (+) AG (\u2212) group had a CKD prevalence approximately double that of the control group. CKD prevalence and H. 
pylori infection are strongly associated with age. Thus, H. pylori infection without AG may have accelerated the onset of CKD in the 50\u201359-year-old age stratum. H. pylori infection reduces ghrelin secretion. Ghrelin, a gastrointestinal peptide hormone, is known to perform a plethora of central and peripheral actions in distinct areas, including gut motility, gastric acid secretion, and glucose metabolism. H. pylori infection is associated with decreased ghrelin production and with a reduction in the number of ghrelin-producing cells. Furthermore, there was no difference in the relationship between CKD and lifestyle factors and medical history between the control and HP (+) AG (\u2212) groups. In sum, H. pylori infection may induce renal dysfunction via reduction of ghrelin, and further studies should be conducted to confirm this hypothesis. The prevalence of HP (+) AG (\u2212) status is higher in CKD and may be related to the risk of CKD, an effect that may be explained by this mechanism. A prospective study is warranted to assess the relationships between CKD and H. pylori. Although the study evaluated medical information obtained via self-administered questionnaires, some anti-diabetes drugs, such as metformin, are also used for other reasons or for pre-diabetes, so we could not reliably distinguish diabetics from non-diabetics. Furthermore, we could not show direct evidence of an interaction between ghrelin levels and H. pylori infection and/or AG status. In addition, CKD is generally defined in terms of proteinuria and eGFR; however, we had limited data on proteinuria. Last, we diagnosed H. 
pylori infection via serum antibody detection, whereas the gold standard is gastric-based testing.The limitations of the present study are mainly related to the study design and lack of measured ghrelin and proteinuria. The present study was cross-sectional; thus, we could not investigate the incidence of CKD in relation to H. pylori, an important pathogenic factor in the stomach, is probably involved in the development of many other diseases. Uncovering the association between gastric and renal conditions could lead to the development of new treatment strategies. In theory, if a patient suffers from H. pylori infection, we could predict renal disorders and potentially prevent their development by treating the stomach, for example, by administering H. pylori eradication treatment.We demonstrated that the prevalence of HP (+) AG (\u2212) status is higher in CKD and may have relationship with the risk of CKD."} +{"text": "The robustness of networks against node failure and the response of networks to node removal has been studied extensively for networks such as transportation networks, power grids, and food webs. In many cases, a network\u2019s clustering coefficient was identified as a good indicator for network robustness. In ecology, habitat networks constitute a powerful tool to represent metapopulations or -communities, where nodes represent habitat patches and links indicate how these are connected. Current climate and land-use changes result in decline of habitat area and its connectivity and are thus the main drivers for the ongoing biodiversity loss. Conservation efforts are therefore needed to improve the connectivity and mitigate effects of habitat loss. Habitat loss can easily be modelled with the help of habitat networks and the question arises how to modify networks to obtain higher robustness. Here, we develop tools to identify which links should be added to a network to increase the robustness. 
We introduce two different heuristics, Greedy and Lazy Greedy, to maximize the clustering coefficient if multiple links can be added. We test these approaches and compare the results to the optimal solution for different generic networks including a variety of standard networks as well as spatially explicit landscape based habitat networks. In a last step, we simulate the robustness of habitat networks before and after adding multiple links and investigate the increase in robustness depending on both the number of added links and the heuristic used. We found that using our heuristics to add links to sparse networks such as habitat networks has a greater impact on the clustering coefficient compared to randomly adding links. The Greedy algorithm delivered optimal results in almost all cases when adding two links to the network. Furthermore, the robustness of networks increased with the number of additional links added using the Greedy or Lazy Greedy algorithm. Habitat loss and fragmentation due to changes in climate and land use are one of the main drivers of the ongoing global biodiversity crisis \u20135. The lGraph theory provides powerful tools to represent and analyse habitat connectivity in highly fragmented landscapes \u201318. HereThe resilience of networks against node and link removal, also called network robustness, has been studied in a variety of networks, such as transportation networks, power grids, and food webs \u201331. A neThe question we pose in this work is: Where should additional links best be created within a habitat network to maximise its clustering coefficient? We propose an algorithm to identify the missing link of a network that leads to the biggest increase in network robustness when added to the network, by using the clustering coefficient as an indicator. We introduce two different heuristics, a Greedy algorithm and a deIn a last step, we simulate the robustness of habitat networks against habitat loss as proposed by Heer et al. 
before and after adding links. We present the algorithm to update the clustering coefficient after one link is added, and the Greedy and Lazy Greedy algorithms to add more than one link. We evaluated the effect of adding links using the proposed algorithms on the clustering coefficient and therefore on the habitat network\u2019s robustness. To this end, we added two links to a variety of networks using (1) the Greedy algorithm, (2) the Lazy Greedy algorithm, and (3) a purely random approach. The clustering coefficients of the resulting networks were then compared to the clustering coefficient of the original network as well as to the optimal solution, which was found by complete enumeration, i.e. iterating over all pairs of potential links. We tested our algorithms on different network types, including sparse and dense standard networks. Finally, we evaluated the effect of modified habitat networks on metapopulation robustness. To this end, we simulated and evaluated the metapopulation robustness as presented by Heer et al. We use the following notation throughout the manuscript. Let G = (V, E) be a simple, undirected, loopless network with node set V and link set E \u2282 V \u00d7 V. Let (u, v) \u2208 V \u00d7 V \\ E be a pair of unconnected nodes in G. To be able to compare the network G with the extended network that arises from G by adding the link (u, v), we set G\u2032 \u2254 (V, E \u222a {(u, v)}). If we want to emphasize the link (u, v), we write G + uv \u2254 G\u2032. Let N(w) \u2254 {v \u2208 V : (v, w) \u2208 E} be the set of neighbours of w, and d_w \u2254 |N(w)| the degree, i.e. the number of neighbours, of w in G; the corresponding quantities in G\u2032 are marked with a prime. A triangle in a network G is a clique of three nodes {u, v, w}, i.e. all three nodes are connected with each other by links: (u, v), (v, w), (u, w) \u2208 E. We set T(w) \u2254 |{(u, v) \u2208 E : u, v \u2208 N(w)}| as the number of triangles in G that involve w, and T\u2032(w) as the number of triangles in G\u2032. 
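With this notation, the standard local clustering coefficient C(w) = 2T(w)/(d_w(d_w - 1)) can be computed directly. A minimal pure-Python sketch using an adjacency-set representation (the graph below is a made-up example, not one of the study's networks):

```python
from itertools import combinations

def clustering(adj, w):
    """Local clustering coefficient C(w) = 2*T(w) / (d_w*(d_w - 1));
    by convention C(w) = 0 when w has fewer than two neighbours."""
    nbrs = adj[w]
    d = len(nbrs)
    if d < 2:
        return 0.0
    # T(w): number of links among the neighbours of w
    t = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    return 2.0 * t / (d * (d - 1))

# Made-up example: triangle 0-1-2 plus a pendant node 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

Node 0 sits in a closed triangle (C = 1), node 2 has three neighbours but only one link among them (C = 1/3), and the pendant node 3 has fewer than two neighbours (C = 0).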
Furthermore, with N \u2254 N(u) \u2229 N(v) we denote the set of common neighbours of u and v, and k \u2254 |N| the number of common neighbours. A node v with fewer than two neighbours has T(v) = 0 and thus C(v) = 0. The clustering coefficient of a node w is defined as C(w) = 2T(w)/(d_w(d_w - 1)) for d_w > 1, and the clustering coefficient of a network G with n \u2254 |V| nodes is defined as the average over the clustering coefficients of its nodes: C(G) = (1/n) \u03a3_{v \u2208 V} C(v). Counting all triangles of a network with n nodes can be done via fast matrix multiplication in O(n^\u03c9) time with \u03c9 \u2a7d 2.376. Algorithm 3 Greedy: procedure Greedy(G, m); 1: for i = 1, \u2026, m do; 2: \u2003C, e_i = MaximizeClustering(G); 3: \u2003G = G + e_i; 4: return e_1, \u2026, e_m. The Greedy algorithm executes MaximizeClustering m times. However, the solution found by the Greedy algorithm (Algorithm 3) is not necessarily optimal: consider the network depicted in the corresponding figure. We can calculate the clustering coefficient and the number of triangles once and then update these numbers; in that case, the Greedy algorithm calculates the solution more efficiently. For even faster calculations, at the cost of optimality, we introduce a second heuristic that iterates over all potential links once and then picks the m links that have the highest increase in the clustering coefficient if they were to be added individually. The Lazy Greedy algorithm executes Algorithm 2 once and sorts the results afterwards. Using quick sort, sorting can be done in O(|E| log(|E|)). Algorithm 4 Lazy Greedy: procedure LazyGreedy(G, m); 1: CG \u2190 Clustering(G); 2: T \u2190 Triangles(G); 3: results \u2190 new Array; 4: for (u, v) \u2208 V \u00d7 V \\ E do; 5: \u2003C = UpdateClustering(G, u, v); 6: \u2003append (u, v, C) to results; 7: \u2026 We created standard networks using algorithms from the Python package NetworkX. We created two sets of these standard networks varying in their number of links per network. For sparse standard networks, all parameters were set to create networks with a number of nodes and corresponding links similar to the landscape-based networks. This led to very sparse networks with only 4% of links present. Dense standard networks were also created with a number of nodes similar to the landscape-based networks, however the parameters were chosen such that about 75% of the potential links were present. 
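A brute-force version of the Greedy heuristic can be sketched without the incremental update: in each of m rounds, tentatively add every potential link, recompute the average clustering coefficient, and keep the best. This is only a baseline sketch of the idea (the update formula and triangle bookkeeping described in the text exist precisely to avoid this recomputation); all names are ours:

```python
from itertools import combinations

def avg_clustering(adj):
    """Average clustering coefficient C(G) over all nodes."""
    def c(w):
        nbrs, d = adj[w], len(adj[w])
        if d < 2:
            return 0.0
        t = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
        return 2.0 * t / (d * (d - 1))
    return sum(c(w) for w in adj) / len(adj)

def greedy_add_links(adj, m):
    """Greedy heuristic: m times, add the currently missing link that
    maximizes the average clustering coefficient (brute force)."""
    added = []
    for _ in range(m):
        candidates = [(u, v) for u, v in combinations(sorted(adj), 2)
                      if v not in adj[u]]
        if not candidates:
            break

        def gain(edge):
            u, v = edge
            adj[u].add(v); adj[v].add(u)        # tentatively add link
            score = avg_clustering(adj)
            adj[u].remove(v); adj[v].remove(u)  # undo
            return score

        u, v = max(candidates, key=gain)
        adj[u].add(v); adj[v].add(u)
        added.append((u, v))
    return added

# 4-cycle 0-1-2-3-0: either diagonal closes two triangles.
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
links = greedy_add_links(ring, 1)
```

On the 4-cycle, adding one diagonal raises the average clustering coefficient from 0 to 5/6, and both diagonals are equally good, so the heuristic may return either.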
The parameters are listed in S3.1 Table. In total, we analysed 250 networks per network type, with the number of nodes between 50 and 111. We evaluated the effect of the two proposed algorithms on the clustering coefficient and compared the results to randomly adding links. To this end, m = 2 links were added to each of the created networks using (1) the Greedy algorithm, (2) the Lazy Greedy algorithm, and (3) a purely random approach. We compared these results with the clustering coefficient of the original network and the optimal solution, which was found by iterating over all pairs of potential links. In this analysis, we considered both standard and landscape-based networks, as the heuristics to maximize the clustering coefficient can be applied to any network. We defined the set of potential links to be the set of all unconnected pairs of nodes. In a last step, we evaluated how much the added links improved the robustness of landscape-based habitat networks against habitat loss. We applied the simulations introduced by Heer et al. to simulate habitat loss. We compared the heuristics Greedy and Lazy Greedy with randomly adding links to the network and added 5 to 30 links in increments of 5. The baseline of these simulations was the robustness of the original habitat networks, and we compared the increase in robustness resulting from adding links using the different algorithms. As the robustness simulations were specifically designed to evaluate the robustness of metapopulations on habitat networks, we considered only the landscape-based habitat networks in this section. We restricted the set of potential links accordingly. 
For the sparse networks, the optimal solution resulted in a mean increase between 0.02 (regular networks) to 0.04 . All three dense network types showed no increase in the clustering coefficient after two links were added .Our proposed algorithms Lazy Greedy and Greedy return results close to the optimal solution with Lazy Greedy being slightly worse. For both the Greedy and optimal solution the mean increase in the clustering coefficient was 0.030 over all network types and for the Lazy Greedy solution the mean increase was 0.029.Adding two links randomly decreases the clustering coefficient for almost all landscape-based networks with a mean decrease of 0.15. The clustering coefficient for standard networks (both sparse and dense) remains unchanged by adding two links randomly.For sparse networks, this implies that applying our heuristics to identify new links has a much larger impact on the clustering coefficient compared to the random approach. The same holds for habitat networks, which are usually sparse, leading to the conclusion that both the Greedy and Lazy Greedy heuristic are preferable to randomly adding links to a habitat network. For dense networks, however, adding two links has almost no impact on the clustering coefficient, independent from the considered method. As the majority of nodes in dense networks has a particularly high degree, the impact of an additional link decreases see , which eTo quantify, how close the Greedy and Lazy Greedy algorithms approximate the optimal solution, we compared the clustering coefficient of the optimal solution with that produced by the Greedy and Lazy Greedy algorithm. The Greedy algorithm returned the optimal solution in 97.6% of the 2250 networks and the discrepancy between the clustering coefficient of the optimal solution and that produced by the Greedy algorithm was at most 3.8%. 
The Lazy Greedy algorithm, on the other hand, returned the optimal solution in only 76.0% of all networks, and the discrepancy went up to 63.6%, increasing the clustering coefficient to 0.03 instead of 0.05 in that particular case. The robustness of the habitat networks increased with the number of additional links, with a correlation of r = 0.8 for the Greedy algorithm and r = 0.76 in the case of the Lazy Greedy algorithm. If the links are added randomly, the increase in robustness is much smaller and the correlation between robustness and the number of additional links drops to r = 0.54."} +{"text": "Ostreid herpesvirus 1 species affects shellfish, contributing significantly to high economic losses during production. To counteract the threat related to mortality, there is a need for the development of novel point-of-care testing (POCT) that can be implemented in aquaculture production to prevent disease outbreaks. In this study, a simple, rapid and specific colorimetric loop-mediated isothermal amplification (LAMP) assay has been developed for the detection of Ostreid herpesvirus 1 (OsHV-1) and its variants infecting Crassostrea gigas (C. gigas). The LAMP assay has been optimized to use hydroxynaphthol blue (HNB) for visual colorimetric distinction of positive and negative templates. The effect of an additional Tte UvrD helicase enzyme used in the reaction was also evaluated, with an improved reaction time of 10 min. Additionally, this study provides a robust workflow for optimization of primers for uncultured viruses using a designed target plasmid when DNA availability is limited.The Herpesvirales order comprises three genetically distinct, linear double-stranded DNA (dsDNA) virus families: Herpesviridae, Alloherpesviridae and Malacoherpesviridae. These viruses share many characteristics, including an icosahedral virion morphology. 
Even surphology .Malacoherpesviridae family includes two genera of viruses, both of which infect mollusks: Ostreavirus and Aurivirus [Ostreavirus genus include OsHV-1 (and its \u00b5var variants), which primarily infect C. gigas [Acute Viral Necrobiotic Virus, which affects scallops such as Chlamys farreri [Aurivirus genera comprises of the Haliotid herpesvirus 1 named Abalone herpesvirus (AbHV) [The urivirus . The OstC. gigas ,7, and A farreri . The Aurs (AbHV) .C. gigas since 2004 [OsHV-1 and variants have been identified globally as the cause of significant and prolonged disease outbreaks in cultured nce 2004 . Once innce 2004 . Many conce 2004 ,13. Althnce 2004 .The current benchmark assay is based on PCR methods and is currently in use within the laboratory setting. The most commonly used conventional PCR primer set is C2/C6, designed on open reading frame (ORF) four, which covers a highly variable part of the genome, including the microsatellite locus ,16. The In particular, LAMP assays have grown in popularity in the last decade and already have been employed for the detection of bacteria, viruses and fungi in healthcare, agriculture and veterinary sectors . LAMP prC. gigas [Scapharca subcrenata [Thus far, two isothermal amplification primer sets have been developed to detect OsHV-1 from C. 
gigas or Scaphbcrenata ; howeverMultiple alignments indicated that the region between 419\u2013801 bp of ORF_4 was suitable for multi-strain detection, with the genetic differences of the ORF_4 gene among pairs of OsHV-1 variants ranging between 33.10\u201399.80% using pairwise comparison; therefore, the target was determined using ORF_4 sequences of OsHV-1 and \u00b5var variants.In silico evaluation of the three designed primer sets generated by the software included one pair of primers with stable DNA construct with no primer-dimer formation .The six designed primers indicated high specificity to detect OsHV-1 using basic local alignment search tool (BLAST) , with 10Evaluation of the primers by in silico hybridization indicated that the six primers annealed to distinct regions in 235 bp of the selected OsHV-1 target sequence: F1c (82\u2013103 bp), F2 (22\u201341 bp), F3 (1\u201320 bp), B1c (117\u2013138 bp), B2 (177\u2013195 bp) and B3 (216\u2013235 bp). Binding locations are indicated in Prior to optimization of the primers in lab settings, in silico verification of the primer binding sites to the artificially constructed virus-target plasmids was performed. The analysis indicated no complementarity to the AbHV plasmid A but sho4 and HNB in the performed reaction was crucial for visual assessment of the formed LAMP product, with 8 mM MgSO4 and 120 \u00b5M of HNB being the optimal concentrations in the designed LAMP assay was strongly dependent on a concentration of primers , with thtrations .The optimized assay for the designed LAMP primers using OsHV-1 DNA plasmid as a template showed that the amplified LAMP product yielded a maximum peak size formation between 90\u20131000 bp and thatAfter optimization of the reagent concentrations, the LAMP assay was also evaluated for determination of the optimal reaction temperature and time using HNB for a robust visual assessment. 
The colour change at the range of temperatures tested (63\u201365 \u00b0C) was consistent with the intensity of amplified OsHV-1 DNA, with 65 \u00b0C being the strongest . AfterwaAll positive samples were amp1\u2013105 copies/reaction using OsHV-1 plasmid as a template and visualized using the HNB assay to the LAMP assay resulted in colour change at t = 10 min, T = 65 \u00b0C with an intensive pink colour for positive and violet for a negative. In contrast, only the Bst DNA polymerase-based LAMP reaction did not show any colour changes after 15 min of amplification at T = 65 \u00b0C in comparison to the negative control . An asseemplates A.C. gigas populations by using it as an early warning tool in biosecurity protocols. In this study, the LAMP primers were designed based on a specific conserved OsHV-1 DNA region (ORF_4 gene encoding for DNA polymerase). Previously reported genome regions used for diagnostic of OsHV-1 include ORF_36_37_38, ORF_42_43, ORF_88, ORF_90, ORF_99, ORF_100 [Ostreavirus than other genes [Ostreavirus. The six designed LAMP primers bound into a selected target when assessed for in silico hybridization , as increase in yield LAMP product formation, suggesting specificity to the selected target. Moreover, no characteristic sharp peaks formed below the amplified LAMP product, indicating a lack of primer-dimer formation or cross-reaction with other targets. 
The sensitivity of the LAMP assay was assessed to be approximately 103 copies virus per reaction of OsHV-1 plasmid and two field samples as evaluated by Agilent 2200 Tape Station and HNB visualizationDespite the designed primers being in the recommended range of GC content (40\u201360%) and 2\u20134 GC clamp to bind specifically to the target , in silicis and trans priming of primers, which can lead to nonspecific detection and false positive/negative results [In contrast to other dyes , the use of HNB dye has been shown recently as a promising nontoxic alternative for colorimetric visualization of LAMP amplicons, which can be assessed by several methods including visual assessment and gel electrophoresis . Similar results . Thus, t results , althougEscherichia coli (E. coli)Tte UvrD helicase, to separate dsDNA with several benefits reported, such as unwind of dsDNA enzymatically, generation of single-stranded templates for primer hybridization, and subsequent extension or possibility to use the same optimized assays for detection of DNA or RNA [The HDA method was reported previously as an efficient POCT assay applicable in the field. HDA uses a DNA helicase i.e., A or RNA . In thisA or RNA . This faA or RNA . HoweverA or RNA . HoweverC. gigas is one of the main species produced in aquaculture, and its trade is of great economic importance and based both in hatchery produced spat and wild/naturalized animals. Until the OsHV-1 pandemic, C. gigas was supposed to be a very robust animal; however, the threat posed by OsHV-1 to this important production has challenged this sector. Thus, early warning tools, such as this robust one-tube LAMP assay, would be applicable in the field or under basic laboratory conditions allowing for fast decision-making regarding animal movements or relaying. In this study, applicability of the LAMP assay was tested only on three samples extracted from infected (L_1 and L_3) and uninfected (UN_1) oysters. 
However, such a tool could be extremely helpful both for producers and competent authorities in improving production performance and biosecurity in shellfish growing areas with further validation. Therefore, this LAMP assay should be further validated for usefulness in the field on a greater number of confirmed samples of OsHV-1 variants. Additionally, parallel double testing using qPCR and Agilent 2200 TapeStation would be useful to better understand the advantage offered by the addition of betaine in the LAMP assay and quantification possibilities of these two methods. This additional work could confirm the perspective use of the LAMP assay.Invertebrate animals are not included in the Directive 2010/63/EU of the European Parliament and of the Council of 22 September 2010 on the protection of animals used for scientific purposes. Thus, experimentation with oysters does not require approval by a research ethics committee.g) for 10 min, the supernatant was collected. Lysates were filter-sterilized using a 10-mL syringe barrel fitted with a 0.22-\u00b5m filter Millex\u00ae GV filter unit and maintained in a 30-mL Sterilin\u00ae universal container at \u221280 \u00b0C.Three oyster homogenate lysates were used in this study for testing of the applicability of the designed LAMP assay . The homg, 4 \u00b0C) for 6 h using the Beckman Sorral WX Ultra ultracentrifuge. After centrifugation, the supernatant was discarded and tubes were left to dry for 20 min. The pellet containing virions was resuspended with 1 mL of nuclease-free water and stored at \u221220 \u00b0C until use. Virus DNA was extracted using DNeasy\u00ae Blood and Tissue Kit , following the manufacturer\u2019s instructions. Extracted DNA was stored at \u221220 \u00b0C prior to use. DNA extracts were tested for concentration and quality using a Nanodrop or/and Agilent 2200 Tape Station and quantified with the method provided for the plasmid. 
For qPCR analysis, homogenate lysates were prepared with the method previously reported [Prior to sequencing, a volume of 1 mL of each homogenate lysate was mixed with 2 mL of 16% polyethylene glycol (PEG) and left at 4 \u00b0C overnight. After overnight incubation, propagated virion particles were concentrated and purified by ultra-centrifugation , FlongleThe obtained raw reads of the forward and reverse strands were paired into one single read list. The quality was enhanced by trimming off the low-quality reads using the BBDuk tool, by correcting errors, and by normalizing using BBNorm; chimeric and duplicate reads were removed using tools in Geneious Prime version 2020.2.2 . Corrected sequences were assembled using the de novo Geneious/Flye assembler and placed into scaffolds. Assembled genomes were compared with OsHV-1 genomes available in GenBank using the blastn tool in Geneious Prime version 2020.2.2 software.E. coli lac operon and synthesized . The number of OsHV-1 DNA copies in the synthesized plasmid was calculated using the following formula: number of copies = (amount \u00d7 6.022 \u00d7 10\u00b2\u00b3)/(length \u00d7 1 \u00d7 10\u2079 \u00d7 650), estimated from the amplified concentration of the template using Agilent 2200 Tape Station and from the molecular weight of the input OsHV-1 target for the concentration tested. Plasmids were sequenced by the Sanger method using universal M13 primers and digested using restriction enzymes for product size verification. Obtained plasmid sequences were verified for specificity to OsHV-1 and AbHV through comparison with genomes available in GenBank using BLAST [Synthetic DNA plasmids (5 \u00b5g) of two viruses (OsHV-1 with 3127 bp and AbHV with 3263 bp) were constructed based on a fragment of the ORF_4 gene in the EcoRV cloning site and cloning vector Puc_57-BsaI-free. Lac promoter was used for the ng BLAST . The binhttps://primerexplorer.jp/e/v5_manual/index.html). 
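The copy-number formula quoted above can be checked with a short calculation. The sketch below applies it to the 3127 bp OsHV-1 plasmid described in the text; the 1 ng input mass is an illustrative value, not a figure from the study.

```python
# Copy-number estimate following the formula in the text:
# copies = (amount[g] x 6.022 x 10^23) / (length[bp] x 1 x 10^9 x 650).
AVOGADRO = 6.022e23   # molecules per mole
BP_WEIGHT = 650       # average molecular weight of one base pair (g/mol)

def plasmid_copies(amount_ng: float, length_bp: int) -> float:
    """Convert a plasmid mass in ng to an estimated copy number."""
    amount_g = amount_ng * 1e-9   # the formula's factor of 10^9 converts ng to g
    return amount_g * AVOGADRO / (length_bp * BP_WEIGHT)

copies_per_ng = plasmid_copies(1.0, 3127)  # 1 ng of the 3127 bp OsHV-1 plasmid
print(f"{copies_per_ng:.2e}")              # roughly 3e8 copies per ng
```

This kind of conversion is what links the mass-based dilution series (1 ng down to 100 fg) to a copy-based limit of detection.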
The primer set was selected following guidelines for LAMP primer design and in silico optimization recommended by Lucigen\u00ae [Multiple alignment and pairwise comparison of 14 ORF_4 gene regions of OsHV-1 were performed using Geneious Prime 2020.2.2 for selection of the target region . PrimersLucigen\u00ae .To improve loop formation efficiency between FIP and BIP primers, a four-thymine (T4) linker spacer sequence was employed between the F1c and F2, and B1c and B2 regions.http://www.premierbiosoft.com/netprimer/). Each primer was assessed for specificity for detection) of OsHV-1 using BLAST [For all sets, the two outer primers (F3 and B3) were located outside the F2 and B2 regions. Designed primers were evaluated for secondary structure formation , primer dimerization (primer-dimer), %GC, and GC clamp using Oligo Evaluator\u2122\u2014Sequence Analysis and NetPrimer\u2122 , Bst DNA polymerase large fragment (M0275S) , HNB dye 120 \u00b5M , primers FIP/BIP , F3/B3 , dNTPs mix , 99% Ultra Pure betaine , MgSOGermany) and 40 pThe LAMP assay was also optimized by testing different reaction temperatures for 60 min in a thermo-cycler, and then heated to 80 \u00b0C for 5 min to terminate the reaction. At the optimal temperature, the reaction time was also evaluated and visualized using HNB dye.All samples were tested at least twice for confirmation.4 , 1\u00d7 ThermoPol reaction buffer 2SO4 and 0.1% Triton X-100), 320 U/mL of Bst DNA polymerase large fragment , Tte UvrD helicase 20 ng/mL HNB dye 120 \u00b5M and 2 \u03bcL of OsHV-1 DNA plasmid template (1 ng/\u03bcL). The reaction mixture was incubated at 65 \u00b0C for 10 min in a thermo-cycler and then heated to 80 \u00b0C for 5 min to terminate the reaction. A volume of 2 \u00b5L of LAMP products was added into 2 \u00b5L of High Sensitivity D5000 buffer and visualized using Agilent 2200 Tape Station High Sensitivity D5000 DNA tape following the manufacturer\u2019s instructions. 
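The screening rules cited above (40-60% GC content and a GC clamp at the 3' end) can be expressed as a simple check. The sketch below is illustrative only: the example sequence is hypothetical, not one of the OsHV-1 LAMP primers, and the 5-base clamp window is an assumption rather than a value taken from the text.

```python
# Minimal primer sanity check mirroring the quoted design rules:
# GC content within 40-60% and 2-4 G/C bases near the 3' end (a "GC clamp").
def gc_content(seq: str) -> float:
    """Percentage of G/C bases in the sequence."""
    seq = seq.upper()
    return 100.0 * sum(b in "GC" for b in seq) / len(seq)

def gc_clamp(seq: str, window: int = 5) -> int:
    """Count G/C bases among the last `window` bases at the 3' end."""
    return sum(b in "GC" for b in seq.upper()[-window:])

def passes_rules(seq: str) -> bool:
    return 40.0 <= gc_content(seq) <= 60.0 and 2 <= gc_clamp(seq) <= 4

print(passes_rules("ATGCGTACCTTGACGATGCG"))  # hypothetical 20-mer
```

Dedicated tools such as Oligo Evaluator and NetPrimer, as used in the study, additionally check hairpins and primer-dimer formation, which a screen like this does not cover.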
All samples were tested at least twice for confirmation.The total reaction volume of 25 \u03bcL contained 1.6 \u03bcM of each FIP/BIP primer, 0.2 \u03bcM of each F3/B3 , 1.4 mM of dNTPs , 1 M of betaine , 8 mM of MgSOAfter optimization of conditions for the LAMP assay using the designed primers and OsHV-1 plasmid, a limit-of-detection analysis was performed to determine the sensitivity of the LAMP assay. Synthetic genomic DNA plasmid of OsHV-1 was 10-fold serially diluted in nuclease-free water in the range from 1 ng to 100 fg/\u00b5L and amplified as a template under the optimized LAMP assay conditions, followed by visualization using Agilent 2200 Tape Station and HNB dye.C. gigas homogenates, respectively, were analysed. All samples were tested at least twice for confirmation.To evaluate applicability of the LAMP assay, two positive samples (L_1 and L_3) and one negative sample (UN_1) containing DNA extracted from OsHV-1 contaminated and na\u00efve"}
+{"text": "Neural stem cells (NSCs) offer a potential solution to treating brain tumors. This is because NSCs can circumvent the blood-brain barrier and migrate to areas of damage in the central nervous system, including tumors, stroke, and wound injuries. However, for successful clinical application of NSC treatment, a sufficient number of viable cells must reach the diseased or damaged area(s) in the brain, and evidence suggests that delivery may be affected by the paths the NSCs take through the brain, as well as the locations of tumors. To study NSC migration in the brain, we develop a mathematical model of therapeutic NSC migration towards brain tumors, which provides a low-cost platform to investigate NSC treatment efficacy. Our model is an extension of the model developed in Rockne et al. that considers NSC migration in non-tumor-bearing naive mouse brain. Here we modify the model in Rockne et al. 
in three ways: (i) we consider three-dimensional mouse brain geometry, (ii) we add chemotaxis to model the tumor-tropic nature of NSCs into tumor sites, and (iii) we model stochasticity of migration speed and chemosensitivity. The proposed model is used to study migration patterns of NSCs to sites of tumors for different injection strategies, in particular, intranasal and intracerebral delivery. We observe that intracerebral injection results in more NSCs arriving at the tumor site(s), but the relative fraction of NSCs depends on the location of injection relative to the target site(s). On the other hand, intranasal injection results in fewer NSCs at the tumor site, but yields a more even distribution of NSCs within and around the target tumor site(s). In addition, the California Institute for Regenerative Medicine (CIRM) is currently supporting preclinical investigations and clinical trials for development of NSCs for repair of damaged neural tissue associated with stroke, multiple sclerosis and other neurodegenerative diseases. Despite early successes and the promise of these emerging approaches, a major obstacle to further enhancing the efficacy of NSC-based therapy is ensuring that sufficient numbers of viable cells reach the diseased or damaged areas in the CNS. To accomplish this, we and others have explored intravenous (IV), intracranial (IC), and intranasal (IN) administration for delivery of NSCs to the CNS . Instead of the deterministic migration speed dw in \u03b2w \u2265 1. Note that we take \u03b1 = 1 so that \u03b2w = 1 will result in a uniform distribution, and we consider \u03b2w \u2265 1 so that more cells are likely to have small speed. This choice is due to the experimental results showing that some NSCs move relatively fast, but the majority of cells do not move much from the injection site.Stochasticity of NSC migration speed is modeled with a beta distribution, B for dw. 
The first parameter is fixed at 1 so that the distribution is skewed toward zero. This is done to reflect experimental results, which show that fewer cells travel rapidly, while most cells stay close to the injection site. To study the effect of this, we vary the second parameter \u03b2w \u2208 , however, as \u03b2w increases, the distribution will be skewed more towards zero. In addition, we rescale the values so that the average is consistent with the deterministic case, so that there will be more outliers that have large migration speed as we increase \u03b2w. The results of comparing \u03b2w = 1 and \u03b2w = 4 are shown in \u03b2w yields more NSCs that travel above a distance of 8,000 \u03bcm, while the median distance decreases. We comment that the effect of larger migration speed dg remains when stochasticity is added. For fixed \u03b2w = 1, increasing dg results in more NSCs migrating a longer distance arriving at the anterior commissure. Increasing \u03b2w with fixed dg and dw gives the same result.We modeled the maximum migration speed bution, B, \u03b2d for ter \u03b2w \u2208 . When \u03b2d3.1.2.\u03bbc = 6.In addition, we compare two tumor sites as shown in \u03bbc. As expected, approximately four times more NSCs arrive at the closer site (Tumor 1) as compared to the more distant site (Tumor 2). We also observe that increasing the chemosensitivity parameter \u03bbc substantially increases the proportion of NSCs that arrive at the cancer. When \u03bbc = 10 almost 85% arrive at cancer site 1 and above 20% arrive at cancer site 2, despite being further away. We comment that by adding stochasticity to the chemosensitivity parameter, the arrival at the cancer site reduces significantly especially for smaller values of \u03b1c. 
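The beta-distributed migration speed described above can be sketched as follows. This is a schematic illustration, not the authors' code: speeds are drawn from B(1, beta_w), which is skewed toward zero for beta_w > 1, and then rescaled so the sample mean matches the deterministic speed d_w; the values d_w = 100 and beta_w = 4 are illustrative.

```python
import random

def sample_speeds(n, d_w, beta_w, seed=0):
    """Draw n migration speeds from a rescaled Beta(1, beta_w) distribution."""
    rng = random.Random(seed)
    # Beta(1, beta_w) has mean 1 / (1 + beta_w); rescale so E[speed] = d_w.
    scale = d_w * (1.0 + beta_w)
    return [rng.betavariate(1.0, beta_w) * scale for _ in range(n)]

speeds = sample_speeds(1000, d_w=100.0, beta_w=4.0)
mean_speed = sum(speeds) / len(speeds)          # close to d_w by construction
fast_outliers = sum(s > 200.0 for s in speeds)  # heavy right tail of fast cells
```

Because the mean is held fixed while the mass shifts toward zero, raising beta_w produces exactly the pattern the text describes: most cells are slow, but the outliers become faster.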
As the stochastic parameter increases to \u03b1c \u2265 4, the results approach the deterministic model with the same chemosensitivity value.3.2.In this section, we simulate the model on a three-dimensional mouse brain, and focus on comparing two different injection strategies, intracerebral and intranasal injection. Intracerebral administration of NSCs injects cells directly into the brain, which is one of the most direct methods of drug delivery to the target site since it bypasses the blood-brain barrier and other mechanisms that limit drug distribution. However, this method is invasive, requiring opening of the skull, and the wound from the injection needle can create a hostile environment for the NSCs to survive. An alternative method is intranasal administration, which insufflates the drug through the nose. The therapeutic agents are then transported through the nasal cavity to the olfactory epithelium that covers the upper part, before moving to the olfactory bulb, which provides a direct connection between the brain and its external environment. The advantage of intranasal injection over intracerebral is its non-invasive nature of administration while similarly bypassing the blood-brain barrier to deliver the drug agents. Also, the possibility of repeated treatment is another major advantage of intranasal administration over intracerebral administration, whereas intracerebral administration can be given only once.5 NSCs were administered, and the brains were harvested 2\u20133 days after NSC administration, respectively. The NSCs at cancer sites were quantified by estimating the number of NSC clusters. As presented in 5 NSCs were administered intranasally. In this experiment, the number of NSCs at the cancer site is estimated as 3,000\u20137,000 cells. This corresponds to an arrival percentage of 0.55 \u00b1 0.16% on day 1 and 1.25% on day 4. 
Although the two experiments are not controlled to be directly comparable, they provide an idea about the efficacy of intracerebral and intranasal administration of NSCs. In addition, we comment that our simulation results of percentage of cancer arrival are overestimated due to not considering the fraction NSCs dying at the injection site. While we assume that all NSCs survive and migrate inside the brain, in reality, only 10\u201320% of NSC are known to survive after injection.In \u03bcm per pixel. We take the mouse DTI from with siz\u03bcm. We consider the following three scenarios with different locations and numbers of cancer as follows:Case 1: One cancer site on the front side of right putamenCase 2: One cancer site on the rear side of right putamenCase 3: Two cancer sites on the left and right putamenLet us study the two injection methods for brain with glioma. The size of the tumor is chosen to be 200 \u00d7 200 \u00d7 800 xc = . The movement of NSCs throughout the brain is observed using both methods of injection, intranasal and intracerebral. With this framework for study set in place, \u03bcm from the injection site within three days, which agrees with the distance to the cancer site. On the other hand, the intranasal NSCs seem to spread out gradually from their initial starting point, with some outliers most likely indicating those few cells that travel much farther and manage to reach the white matter deeper in the brain. This contrast is due to the cancer site being directly connected to the intracerebral injection site via a white matter tract which functions as the shortest path that the NSCs traverse. In We begin our simulations with 1,000 NSCs for the case of cancer site centered at the front side of the right putamen, xc = . Compared to the first simulation, this cancer location is further away from both injection sites making NSC treatment more challenging. 
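A single agent's update in this kind of chemotactic agent-based model can be sketched as a weighted sum of a random motility direction and a drift up the attractant gradient. The code below is a schematic under assumed dynamics, not the authors' exact update rule: the straight-line gradient toward the tumor center (standing in for the uPA field), the step size, and the weighting by lambda_c are all illustrative choices, and white-matter-guided movement is omitted.

```python
import math
import random

_rng = random.Random(1)

def chemo_direction(pos, target):
    """Unit vector toward the tumor center (stand-in for the uPA gradient)."""
    diff = [t - p for p, t in zip(pos, target)]
    norm = math.sqrt(sum(d * d for d in diff)) or 1.0
    return [d / norm for d in diff]

def step(pos, target, speed=10.0, lam_c=6.0, dt=1.0, rng=_rng):
    # Uniform random direction on the unit sphere.
    theta = rng.uniform(0.0, 2.0 * math.pi)
    phi = math.acos(rng.uniform(-1.0, 1.0))
    rand_dir = [math.cos(theta) * math.sin(phi),
                math.sin(theta) * math.sin(phi),
                math.cos(phi)]
    grad = chemo_direction(pos, target)
    # Weighted sum of random motility and chemotactic drift; the 1/(1+lam_c)
    # normalization keeps the displacement per step at most speed * dt.
    return [p + dt * speed * (r + lam_c * g) / (1.0 + lam_c)
            for p, r, g in zip(pos, rand_dir, grad)]

pos, target = [0.0, 0.0, 0.0], [400.0, 30.0, 155.0]
for _ in range(100):
    pos = step(pos, target)
# With lam_c well above 1, drift dominates and the agent homes in on the tumor.
```

Lowering lam_c shifts the balance toward the random-walk term, which is the qualitative effect the chemosensitivity comparisons above explore.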
The second simulation with cancer present places the cancer site centered at the rear side of the right putamen, xc = and the other centered at the middle of the left putamen xc = . We observe that in intranasal injection, the NSC that took the shortest path to reach either cancer travels directly through the olfactory bulb to the cancer sites. This may be due to the closer location of the tumor from the injection site that some of the NSC can migrate with chemotactic driving force. On the other hand, the intracerebrally injected NSC has the advantage of being much closer to the cancer site on the right putamen, while the cancer site on the left putamen can be accessed through crossing the white matter tract to the other side of the brain. The direct result of this advantage can be seen in The third simulation has two cancer sites present, one centered at the front side of the right putamen, = 400, 30, 155. W4.In this paper, we develop an agent based model of therapeutic neural stem cell migration in a three-dimensional mouse brain with and without glioma brain tumor. As an extension to the original two-dimensional model in Rockne et al. (2018), our model allows us to examine the neural stem cell migration in a three-dimensional brain that elevates the potential usage of in silico simulation. In addition, the effect of uPA is added to model the chemotactic behavior of the neural stem cells, which makes them a promising therapeutic agent for cancer treatment. Finally, the stochasticity regarding the migration speed and sensitivity to chemotaxis is modeled with stochastic parameters. The effect of these added parameters on the migration distance, percentage in white matter tracts, and arrival percentage at the cancer sites are studied.Using our model, we examine the efficacy of NSC treatment for different cancer locations and different injection strategies. In particular, we focus on comparing intranasal and intracerebral injections. 
Intranasal drug delivery provides an alternative and effective strategy to the more invasive intracerebral injection. We compare the migration patterns of neural stem cells and tumor arrival rates of the two injection strategies. Considering the fraction of injected NSCs that arrive at the tumor, intracerebral injection is more effective due to its closer distance to the cancer site and its injection location lying on the white matter tract. However, due to such strong dependency on the injection location, when multiple cancer sites exist, the NSCs are concentrated at the nearest site, which makes the distribution of NSCs less uniform compared to the intranasal injection. Although intranasal injection shows a smaller arrival rate compared to intracerebral injection, NSCs are still able to follow the white matter tract all across the brain, and a considerable number of NSCs reach the cancer site. Moreover, when two tumor sites are located on opposite sides of the brain, intranasal injection yields a more even distribution compared to intracerebral injection. Considering that repeated administration is possible in intranasal delivery, our simulations support the efficacy of intranasal delivery of NSC treatment, especially when there are multiple tumor sites across the brain.Future work includes calibrating the model to experimental data and conducting parameter sensitivity analysis more carefully. We also plan to build on our model inspired by various cell migration modeling approaches and improve upon it. As discussed in"}
+{"text": "Erectile dysfunction is a common adverse effect of external beam radiation therapy for localized prostate cancer (PCa), likely as a result of damage to neural and vascular tissue. Magnetic resonance-guided online adaptive radiotherapy (MRgRT) enables high-resolution MR imaging and paves the way for neurovascular-sparing approaches, potentially lowering erectile dysfunction after radiotherapy for PCa. 
The aim of this study was to assess the planning feasibility of neurovascular-sparing MRgRT for localized PCa.. Dose constraints for the neurovascular bundle (NVB), the internal pudendal artery (IPA), the corpus cavernosum (CC), and the penile bulb (PB) were established. Doses to regions of interest were compared between the neurovascular-sparing plans and the standard clinical pre-treatment plans.Twenty consecutive localized PCa patients, treated with standard 5\u00d77.25\u00a0Gy MRgRT, were included. For these patients, neurovascular-sparing 5\u00d77.25\u00a0Gy MRgRT plans were generatedNeurovascular-sparing constraints for the CC, and PB were met in all 20 patients. For the IPA, constraints were met in 19 (95%) patients bilaterally and 1 (5%) patient unilaterally. Constraints for the NVB were met in 8 (40%) patients bilaterally, in 8 (40%) patients unilaterally, and were not met in 4 (20%) patients. NVB constraints were not met when gross tumor volume (GTV) was located dorsolaterally in the prostate. Dose to the NVB, IPA, and CC was significantly lower in the neurovascular-sparing plans.Neurovascular-sparing MRgRT for localized PCa is feasible in the planning setting. The extent of NVB sparing largely depends on the patient\u2019s GTV location in relation to the NVB. In patients treated with stereotactic body radiation therapy (SBRT), ED rates range from 26% to 55% at 60\u00a0months in previously sexually functioning patients Neurovascular-sparing radiotherapy for erectile function preservation has been proposed before Due to the movement of the pelvic organs, daily plan optimization is desirable for neurovascular-sparing radiotherapy. However, NVBs and IPAs cannot be adequately identified on CT due to lack of contrast. MRI allows better visualization of these structures To date, no study has examined the planning feasibility of neurovascular-sparing MRgRT for localized PCa. 
Therefore, in this study we aimed to assess the feasibility of treatment planning for neurovascular-sparing MRgRT for localized PCa and the potential dose reduction to neurovascular structures.22.13): 0.8/0.8/2.0) for optimal contouring of target volumes and OAR. Patients signed informed consent for sharing of their clinical data within the MOMENTUM study (NCT04075305), which was approved by our institutional review board For this planning study, 20 consecutive patients with localized low- to high-risk PCa risk categories) without extracapsular extent were included, to account for the variation in tumor location and anatomy of the localized PCa population. All patients were previously treated with standard 5\u00d77.25\u00a0Gy MRgRT on a Unity MR-Linac. In preparation for treatment on the MR-Linac patients received a pre-treatment multiparametric (mp) 3T offline planning MRI (T2-weighted and diffusion weighted imaging (DWI) sequences; reconstructed resolution and a radiation biologist . For neurovascular tissue an EQD2 \u03b1/\u03b2 of 2.0\u00a0Gy and for vascular tissue an \u03b1/\u03b2 of 3.0\u00a0Gy was applied The GTV\u00a0+\u00a04\u00a0mm included the GTV (mpMRI visible tumor(s)) with a 4\u00a0mm isotropic margin excluding the rectum and bladder. The CTV included the GTV\u00a0+\u00a04\u00a0mm and prostate body with the base of the seminal vesicles and the PTV included the CTV with a 5\u00a0mm isotropic margin. Dose prescriptions for the PTV were adapted to allow neurovascular-sparing MRgRT. The GTV\u00a0+\u00a04\u00a0mm should receive 34.4\u00a0Gy in \u2265 99% and the PTV 30.0\u00a0Gy in \u2265 99%; 32.6\u00a0Gy in \u2265 90% and 34.4\u00a0Gy in \u2265 80%. Because of the proximity of the NVB to the prostate and the priority of dose coverage of the GTV\u00a0+\u00a04\u00a0mm and PTV, we set the NVB dose constraint as \u201csoft\u201d constraint (i.e. not mandatory). 
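The EQD2 conversion mentioned above, with alpha/beta = 2.0 Gy for neural and 3.0 Gy for vascular tissue, follows the standard linear-quadratic relation EQD2 = nd(d + alpha/beta)/(2 + alpha/beta). A minimal check for the study's 5 x 7.25 Gy schedule:

```python
# EQD2 (equieffective dose in 2 Gy fractions) for n fractions of d Gy,
# using the alpha/beta values stated in the text. The schedule is the
# study's own 5 x 7.25 Gy; the function is a standard LQ-model conversion.
def eqd2(n_fractions, dose_per_fraction, alpha_beta):
    total = n_fractions * dose_per_fraction
    return total * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

eqd2_neural = eqd2(5, 7.25, 2.0)     # neural tissue: ~83.8 Gy
eqd2_vascular = eqd2(5, 7.25, 3.0)   # vascular tissue: ~74.3 Gy
```

The large gap between the 36.25 Gy physical dose and its EQD2 equivalents illustrates why hypofractionated schedules need explicit constraint conversion for late-responding tissues.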
The applied dose constraints for neurovascular-sparing 5\u00d77.25\u00a0Gy MRgRT are displayed in 2.3For each patient the left and right NVB, IPA, CC, and the PB were contoured on the pre-treatment offline 3T T2-weighted planning MRI. Contouring was done by a single prostate specialized radiation oncologist (JVZ) with 10\u00a0years of clinical experience, using the in-house developed contouring software package Volumetool and contours were added to the standard planning contour set that was previously contoured by the treating radiation oncologist. The NVB was contoured from at least the base of the seminal vesicles until the level of the urogenital diaphragm . On T2-w2 and the minimum number of motor units was 5 with a maximum of 60 segments. No plan renormalization was used. During treatment the patient is supported by a soft pillow under the head and knee supporters under the feet. All settings were identical to the standard 5\u00d77.25\u00a0Gy MRgRT at our institution.The planning MRI including contour set was imported into the treatment planning software Monaco 5.40.01 , to generate intensity modulated radiation therapy (IMRT) offline treatment plans for the Unity MR-Linac. Bulk relative electron density value of 1 was assigned to the body and values for the femoral heads and other bony structures were calculated using the average Hounsfield units of a matched CT scan. Seven-field IMRT technique was used . The calculation grid spacing was 3\u00a0mm with a statistical uncertainty of 3% per control point and < 1% per voxel. The minimum segment width was 0.5\u00a0cm and area 1.5\u00a0cmFor neurovascular-sparing treatment planning, GTV + 4\u00a0mm and PTV coverage was the primary goal, secondary, meeting the conventional OAR constraints, and tertiary, meeting the neurovascular structures constraints. In case the neurovascular-sparing constraints could not be met, a dose as low as reasonably achievable (ALARA) was pursued. 
The planning was done under supervision of a radiation therapist specialized in treatment planning (JH) with 10\u00a0years of clinical experience and all plans were evaluated by a prostate specialized radiation oncologists (JVZ).2.4In a next step, we compared the neurovascular-sparing plans with the standard (i.e. non-neurovascular-sparing) pre-treatment plans. For all 20 patients, the matched neurovascular-sparing contour set including the NVBs, IPAs, CCs, and PB was registered to the actual clinical pre-treatment plan that was generated before to the MR-Linac treatment, using Monaco 5.40.01. Planned dose to the target volumes, conventional OAR, and neurovascular structures as would have been received in the standard planning setting were calculated in Monaco.2.5R version 4.0.5 was used for the statistical analysis. Pairwise Wilcoxon signed rank tests with Bonferroni correction for multiple testing were performed to compare the neurovascular-sparing planned dose with the standard planned dose. Furthermore, the NVBs were stratified between those that did and did not meet the dose constraint in the neurovascular-sparing plans. Population-median dose volume histogram (DVH) curves were generated using the R package \u201cdvhmetrics\u201d. Non-normally distributed data were presented as median with range and p-value of < 0.05 was considered statistically significant.3All 20 patients\u2019 treatment plans were considered clinically acceptable. Prescribed dose coverage of the GTV + 4\u00a0mm and PTV was achieved for neurovascular-sparing plans and the neurovascular-sparing dose constraints for the CC and PB were met in all patients . The dosThe comparison of the neurovascular-sparing plans with the standard plans is presented in 4This study is the first to demonstrate that neurovascular-sparing MRgRT for localized PCa is feasible in the planning setting. 
Predefined constraints for the CC and PB were met in all 20 patients, for the IPA in 19 (95%) patients bilaterally and 1 (5%) unilaterally, and for the NVB in 8 (40%) patients bilaterally and in 8 (40%) patients unilaterally. Dose to the NVB, IPA, and CC was reduced significantly, without substantially increasing dose to the bladder, rectum, and sphincter.In all cases where the GTV was located in the dorsolateral position of the prostate and therefore the GTV + 4\u00a0mm directly bordering or partially overlapping the NVB, the NVB constraint could not be met . NeverthAlthough it is hypothesized that MRgRT offers major advantages in terms of erectile function sparing treatment because of the ability to adequately visualize the neurovascular structures and correct for interfraction and intrafraction motion and deformation, others have initiated studies on erectile function sparing radiotherapy on conventional linacs. Spratt et al. conducted a single arm study in which 135 men with an IIEF-5 score of \u2265 16 at baseline underwent IPA and CC sparing radiotherapy and reported an erectile function preservation rate of 67% (i.e. IIEF-5\u00a0\u2265\u00a016) at 5\u00a0years after treatment The currently ongoing POTEN-C trial takes erectile function sparing EBRT a step further by conducting a randomized controlled trial randomizing 120 low- to intermediate-risk patients between 5 fraction SBRT with or without sparing of the NVBs, IPAs and PB on a conventional linac There are some considerations for our study. First, it is unknown to what extent radiation damage to each individual neural or vascular structure contributes to ED after radiotherapy. 
In literature the NVBs, IPAs, CCs, and PB are generally described as the most important structures contributing to radical PCa treatment-induced ED To assess the effect of neurovascular-sparing treatment, we initiated a single arm phase II trial (NCT04861194) The RATING guidelines for treatment planning were used for preparing the manuscript In conclusion, neurovascular-sparing MRgRT for localized PCa is feasible in the planning setting. Dose to the neurovascular structures can be reduced substantially. The extent of neurovascular-sparing largely depends on the patient\u2019s GTV location.ZonMW IMDI/LSH-TKI Foundation , 10.13039/100011676Elekta AB , and Philips Medical Systems . The funding sources had no involvement in the design of the study, the collection, analysis, and interpretation of the data, nor in the writing and decision to submit the article for publication.This research has been partly funded by The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: HV receives research funding from Elekta. The remaining authors declare no potential competing interests."} +{"text": "Given the frequent initiation of antibacterial treatment at home by caregivers of children under five years in low-income countries, there is a need to find out whether caregivers\u2019 reports of prior antibacterial intake by their children before being brought to the healthcare facility are accurate. The aim of this study was to describe and validate caregivers\u2019 reported use of antibacterials by their children prior to seeking care at the healthcare facility.A cross sectional study was conducted among children under five years seeking care at healthcare facilities in Gulu district, northern Uganda. Using a researcher administered questionnaire, data were obtained from caregivers regarding reported prior antibacterial intake in their children. 
These reports were validated by comparing them to common antibacterial agents detected in blood and urine samples from the children using liquid chromatography with tandem mass spectrometry (LC-MS/MS) methods.A total of 355 study participants had a complete set of data on prior antibacterial use collected using both self-report and LC-MS/MS. Of the caregivers, 14.4% reported giving children antibacterials prior to visiting the healthcare facility. However, LC-MS/MS detected antibacterials in blood and urine samples in 63.7% of the children. The most common antibacterials detected from the laboratory analysis were cotrimoxazole , ciprofloxacin , and metronidazole . The sensitivity, specificity, positive predictive value (PPV), negative predictive value and agreement of self-reported antibacterial intake prior to healthcare facility visit were 17.3% (12.6\u201322.8), 90.7% (84.3\u201395.1), 76.5% (62.5\u201387.2), 38.5% (33.0\u201344.2) and 43.9% (k 0.06) respectively.There is low validity of caregivers\u2019 reports on prior intake of antibacterials by these children. There is need for further research to understand the factors associated with under reporting of prior antibacterial use. Antibacterial agents are used to treat a wide range of bacterial infections and are essential lifesaving medicines. They are the most commonly used medicines in Sub-Saharan Africa due to the high prevalence of infectious diseases . Used coAntibacterials are, according to the national drug policy of Uganda, prescription only medicines . HoweverCaregivers\u2019 ability to report antibacterial intake prior to coming to a health facility is crucial for appropriate prescription of medicines at the health facility. Self-reports have been shown to have low validity as they are prone to recall bias and social desirability bias. Respondents normally provide information that conforms to their perceived expectations of the health workers or researchers , 8. 
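The reported validity metrics can be reproduced from a 2x2 table against the LC-MS/MS reference standard. The counts below are reconstructed from the percentages given in the text (n = 355, 14.4% self-reported use, 63.7% LC-MS/MS positive) rather than taken directly from the paper, so they are an approximation:

```python
# 2x2 table, reference standard = LC-MS/MS detection.
# Reconstructed counts: 51 self-report positives, 226 LC-MS/MS positives.
tp, fp, fn, tn = 39, 12, 187, 117
n = tp + fp + fn + tn                      # 355

sensitivity = tp / (tp + fn)               # ~17.3%
specificity = tn / (tn + fp)               # ~90.7%
ppv = tp / (tp + fp)                       # ~76.5%
npv = tn / (tn + fn)                       # ~38.5%
po = (tp + tn) / n                         # observed agreement, ~43.9%
# Expected chance agreement for Cohen's kappa.
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
kappa = (po - pe) / (1 - pe)               # ~0.06
```

The near-zero kappa despite ~44% raw agreement shows how strongly chance agreement is driven by the imbalanced marginals, which is why the paper reports both figures.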
A stTo our knowledge no study has validated caregivers\u2019 reports of intake of antibacterials in children under five years in rural communities in low resource settings. In this study we describe and validate caregivers\u2019 reported use of antibacterials by their children under five years for treatment prior to seeking care at the healthcare facility.The protocol was reviewed and approved by the Makerere University School of Biomedical Sciences Research and Ethics Committee (reference SBS-570) and the Uganda National Council of Science and Technology (reference HS235ES) . AdminisA cross-sectional study was conducted among children under five years and their caregivers in healthcare facilities in Gulu district, northern Uganda. Gulu is located about 360 km from the capital city Kampala. In Uganda, the lowest level of the district-based healthcare system consists of the village health teams/community medicine distributors, which constitute level 1 of health care. This is operated by members of the community who can read and write at least in the local language of the community. The next level is health centre II which is operated by a professionally trained nurse with a diploma and is intended to serve 5000 patients. This is followed by health centre level III which is operated by a professionally trained clinical officer with a diploma in clinical medicine and intended to serve 10,000 patients. Above health centre level III is health centre level IV and then district hospitals headed by medical officers with a basic degree in medicine and surgery and intended to serve about 100,000 patients. Regionally there are regional referral hospitals where patients are referred to from the district hospitals. The regional referral hospitals are expected to have specialist health professionals covering the major disciplines such as surgery, internal medicine, and paediatrics. At the top of the health care system are the national referral hospitals . 
Sick children under five years and their caregivers seeking care at the four healthcare facilities were included in the study after caregivers\u2019 consent. Children who were brought to the health centre by caregivers who had not taken care of them from the onset of the current illness were excluded from the study. Children who had come for review or continuation of treatment for the current illness were also excluded. The sample size was computed based on the formula for estimation of a single proportion. The patients were selected by systematic random sampling. On each of the data collection dates, the first patient to be recruited into the study was randomly selected by having a blindfolded data collector walk around the waiting area and point at a random patient among those waiting in line to be seen by the healthcare worker in the outpatient department. Thereafter, every fourth patient in line towards the entrance of the healthcare worker\u2019s room was selected for recruitment. If the selected patient was above five years of age, they were skipped and the next patient was recruited while maintaining the sampling interval. Approximately 10 days were spent collecting data in each healthcare facility. An interviewer-administered questionnaire was used for data collection. The questionnaire was pre-tested on caregivers of 30 children in the outpatient department of Gulu regional referral hospital. This tool was adapted from a tool used to collect data on the prevalence and predictors of prior antibacterial use among patients presenting to hospitals in northern Uganda in a previous study. The data collection team was divided into four groups, each comprising two people: one pharmacy technician and one laboratory technician.
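The single-proportion sample-size calculation described above can be sketched as follows. The inputs used here (p = 0.5, 5% absolute precision, 95% confidence) are illustrative assumptions that happen to reproduce the 385 sampled children reported later, not values stated in the text.

```python
import math

def sample_size_single_proportion(p: float, d: float, z: float = 1.96) -> int:
    """Cochran's formula for a single proportion: n = z^2 * p * (1 - p) / d^2,
    rounded up to the next whole participant."""
    return math.ceil(z ** 2 * p * (1.0 - p) / d ** 2)

# Assumed inputs: p = 0.5 (maximises n when the true proportion is unknown),
# d = 0.05 absolute precision, z = 1.96 for 95% confidence.
n = sample_size_single_proportion(p=0.5, d=0.05)
print(n)  # -> 385
```

Using p = 0.5 is the conservative convention when no prior estimate of the proportion is available, since p(1 - p) is maximised at 0.5.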
The pharmacy technician conducted interviews while the laboratory technician collected the blood and urine samples. Information on the following variables was collected: sub-county of residence, age of child, age of caregiver, sex of child, sex of caregiver, whether medication was given to the child before coming to the healthcare facility since the onset of the current illness, the type and source of the medicine, and the person who recommended the medicine. If the caregiver did not know the name of the medicine, the interviewer asked them to describe it or to show the packaging material if they had brought it to the health centre. Each interview lasted about 20 minutes per patient. Two hundred microlitres (200\u03bcL) of blood was collected from the fingertips of children under five years using a 200\u03bcL micro-pipette with ethylenediamine tetra-acetic acid (EDTA), spotted on a filter paper, and left to dry for 3 hours at room temperature. After the blood had dried, each filter paper was put in a separate plastic zip bag with a desiccant and transported to the laboratory for analysis. Urine samples were collected in sterile wide-mouth containers. For the very young children who could not void into the wide-mouth containers, urine was collected by placing a thick layer of cotton wool inside the child\u2019s nappy and squeezing the urine into the sample bottles. Two hundred microlitres (200\u03bcL) of urine was collected from the wide-mouth containers using a plastic pipette, spotted on a filter paper, and left to dry for 3 hours at room temperature. After the urine had dried, each filter paper was put in a separate plastic zip bag with a desiccant and transported to the laboratory for analysis.
The dried blood spot (DBS) and dried urine spot (DUS) samples obtained from patients were stored at -20\u00b0C and -80\u00b0C respectively until analysis. The whole-diameter disk (containing 200\u03bcl of blood or urine) was cut out from each DBS and DUS. The cut disc was placed in an Eppendorf tube (1.5 mL capacity) and mixed with 1000 \u03bcL of methanol (20%) and acetonitrile (80%). The sample was vortex-mixed twice for 20 s at 10-min intervals and then centrifuged at 3500 revolutions per minute (RPM) for 5 minutes. After the extraction period, the filter paper was removed, and 500 \u03bcL of the extract was transferred into an auto-sampler vial to be injected onto the LC-MS/MS system for analysis. A simple, fast, sensitive and selective qualitative LC-MS/MS method for identification of fifteen (15) antibacterials in DBS and DUS was used for analysis. Data on key pharmacokinetic properties that may have affected the interpretation of our results have been presented in the supporting information section. Double data entry was done using Epi-Data 3.1 software for both the questionnaire and laboratory data. The two datasets were reconciled by comparing them for each field in the questionnaire and laboratory result; in case of any discrepancies, the corresponding questionnaire or patient laboratory record was checked to establish the correct entry. Data were then imported into Stata 14/IC for analysis. Descriptive statistics were presented using median and interquartile range (IQR) for continuous variables or frequencies and proportions for categorical variables. The dependent variables, treatment of the child with antibacterials prior to the healthcare facility visit as reported by the caregiver and detectable antibacterials in DBS or DUS samples, were summarized as proportions.
In order to adjust for potential biases associated with point estimates from the sampling design, we used svy commands in Stata to compute proportions and respective 95% confidence intervals. Pearson\u2019s chi-square test was used to assess associations for the categorical variables. In order to validate caregivers\u2019 reported use of antibacterials, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), prevalence, agreement and kappa coefficient were calculated. Laboratory results for detection of antibacterials in dried blood spot or dried urine spot samples were considered the gold standard, and caregivers\u2019 reports of use of antibacterials prior to the health facility visit were considered the test results. Of the 385 sampled children, 355 (92.2%) had data on both the caregiver\u2019s report on antibacterial use prior to the health facility visit and results from urine and blood analysis and were thus included in the analysis. The remaining 30 (7.8%) observations were dropped because they were missing blood analysis data. Over half of the children were female. The median age of the children was 29 (IQR: 16\u201346) months. The majority of the caregivers were female. The median age of the caregivers was 25 (IQR: 21\u201331) years. About half of the children attended a healthcare facility located in a rural area. Out of the 355 children under five years who were included in the analysis, 51 were reported by the caregivers to have been treated with antibacterials prior to coming to the healthcare facility.
Among these 51 children, reported antibacterial use was more common among those from urban areas and those who obtained antibacterials from public health facilities. Of the 355 children under five years who were included in the analysis, 226 had detectable levels of antibacterials in urine or blood in the samples taken upon arrival at the healthcare facility. The most commonly used antibacterials as reported by the caregivers were amoxicillin, cotrimoxazole, and metronidazole. The most common antibacterials detected from the laboratory analysis were cotrimoxazole, ciprofloxacin, and metronidazole. The sensitivity, specificity, PPV, NPV, agreement and kappa coefficient of the caregivers\u2019 reports of use of antibacterials for treatment of children prior to the healthcare facility visit were 17.3% (12.6\u201322.8), 90.7% (84.3\u201395.1), 76.5% (62.5\u201387.2), 38.5% (33.0\u201344.2), 43.9% (38.7\u201349.3%) and 0.06 (0.01\u20130.12), respectively. The sensitivity, specificity, PPV, NPV, agreement and kappa coefficient varied between the different antibacterials. In this study we demonstrated that the prevalence of antibacterial use prior to the health facility visit was high and that caregivers underreported the use of antibacterials in children under five years prior to coming to the health facility. Antibacterial use prior to a healthcare facility visit is a common practice in many resource-limited settings globally. Caregivers\u2019 ability to report antibacterial use before coming to the health facility is crucial for appropriate prescription of antibacterials upon reaching health facilities. In the current study, almost two thirds (63.7%) of the samples (blood and/or urine) tested positive for antibacterials. This implies that the prevalence of antibacterial use prior to the health facility visit is much higher than what was self-reported (14.4%).
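The validity metrics above can be computed from a standard 2x2 table of self-report against the LC-MS/MS gold standard. The counts used below (TP = 39, FP = 12, FN = 187, TN = 117) are back-calculated from the reported percentages and marginals (355 children, 51 caregiver-reported users, 226 lab-positive); they are an assumption for illustration, not a table published in the text.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV, NPV, observed agreement and Cohen's
    kappa for a self-report 'test' against a laboratory gold standard."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement
    # Chance agreement: P(both positive) + P(both negative) under independence.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "agreement": po,
        "kappa": (po - pe) / (1 - pe),
    }

# Hypothetical counts reconstructed from the reported marginals (assumption).
m = diagnostic_metrics(tp=39, fp=12, fn=187, tn=117)
print({k: round(v, 3) for k, v in m.items()})
```

With these counts the function reproduces the reported point estimates (sensitivity 17.3%, specificity 90.7%, PPV 76.5%, NPV 38.5%, agreement 43.9%, kappa about 0.06), illustrating why a kappa near zero indicates agreement little better than chance despite high specificity.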
This finding is similar to those from other low- and middle-income countries (LMIC). Most caregivers reported to have given their children amoxicillin, cotrimoxazole and metronidazole. This is consistent with reports from a study in northern Uganda where metronidazole, amoxicillin, ciprofloxacin, doxycycline or cotrimoxazole were reported as the most commonly used antibacterials by patients prior to hospital visit. The positive predictive value we found for reported use of antibacterials is not high enough to allow caregivers\u2019 reports to guide treatment. The high specificity values indicate underreporting, but the low negative predictive value indicates that many children were given drugs that were not reported by caregivers. This study was carried out in rural communities of Gulu district in Uganda, where adult literacy levels are low. We observed a strong association between self-reported prior antibacterial use and the source of antibacterials being public health facilities. This could be attributed to the low financial status of the people in these communities. There is need for further research to understand the reasons for caregivers\u2019 poor reports on their children\u2019s prior intake of antibacterials before coming to the health facility. Improved validity could be promoted by encouraging health care workers to carefully explain to the caregivers the medicines they administer to these children when they fall sick. Proper documentation of the medicines given to these children when they are sick could also improve the validity of self-reported medicine use. There is need for healthcare workers to educate the caregivers about the dangers of using antibacterials without consulting a healthcare worker, and further research is required to better understand why caregivers initiate antibacterial use at home without consulting a healthcare service provider.
This will allow policy makers to be better informed when planning interventions to reduce the large amount of incorrect antibacterial use in the community. The results of our study should be considered in light of some limitations. This study could have been affected by recall bias, where antibacterials given may have been forgotten. The study could also have been affected by social desirability bias, since it was carried out in a hospital setting and caregivers may have feared telling the truth because they thought it could affect patient care. Underreporting could also have been affected by how the questions were understood by the caregivers. Failure to detect some of the antibacterials in the samples could have been due to the pharmacokinetics of the antibacterials. Factors such as the education level of the caregivers could have contributed to the underreporting of antibacterial use prior to the healthcare facility visit; however, we did not collect this information, as adult literacy levels in this community are low. A high proportion of children under five years take antibacterials prior to visiting a healthcare facility in northern Uganda. However, there is low validity of caregivers\u2019 reports on prior intake of antibacterials by these children. There is need for further research to understand the factors associated with underreporting of prior antibacterial medicine use by caregivers of children under five years. In addition, we suggest that health care workers should endeavor to explain the role and names of medicines during dispensing, as well as the importance of reporting correctly on prior medication intake. There is also need to educate the caregivers about the dangers of using antibacterials without consulting a healthcare worker, and further research is required to better understand why caregivers initiate antibacterial use at home without consulting a healthcare service provider.
This will allow policy makers to be better informed when planning interventions to reduce the large amount of incorrect antibacterial use in the community. Supporting information: S1 Table (DOCX); S1 Appendix (DOCX); S2 Appendix (DOCX); S3 Appendix (DOCX); S4 Appendix (DOCX)."}
{"text": "Objectives: This study aimed to investigate the effect of nurse-led, goal-directed lung physiotherapy (GDLPT) on the prognosis of older patients with sepsis caused by pneumonia in the intensive care unit. Methods: We conducted a prospective, two-phase (before-and-after) study over 3 years called the GDLPT study. All patients received standard lung therapy for sepsis caused by pneumonia, and patients in phase 2 also received GDLPT. In this study, 253 older patients (age \u2265 65 years) with sepsis and pneumonia were retrospectively analyzed. The main outcome was 28 day mortality. Results: Among 742 patients with sepsis, 253 older patients with pneumonia were divided into the control group and the treatment group. Patients in the treatment group had a significantly shorter duration of mechanical ventilation, and a lower risk of intensive care unit (ICU) mortality and 28 day mortality compared with those in the control group. GDLPT was independently associated with 28 day mortality. Conclusions: Nurse-led GDLPT shortens the duration of mechanical ventilation, decreases ICU and 28 day mortality, and improves the prognosis of older patients with sepsis and pneumonia in the ICU. Aging of the population is a critical worldwide trend. The proportion of individuals older than 60 years has tripled over the last 50 years and will triple again before 2050. This aging has major consequences on the health system, including the intensive care unit (ICU).
In the USA, almost half (48.7%) of the patients admitted to an ICU are aged 65 years or older, and patients aged 85 years or older account for 7 to 25% of the admission rate. Because of age-related changes in the body, comorbidity, and malnutrition, the onset of pneumonia is insidious, rapid, and critical, and clinical treatment is difficult in older patients. The study was registered at chictr.org.cn (identifier: ChiCTR-ROC-17010750). The GDLPT study was a prospective, two-phase (before-and-after) study conducted in Peking Union Medical College Hospital of China from January 2017 to January 2020. Details of this study, including the inclusion and exclusion criteria, have been published previously. The protocols for pneumonia in the pre- and post-protocol groups are described in the study protocol. Baseline data of the patients, including sex, age, underlying diseases, the Sequential Organ Failure Assessment (SOFA) score, the Acute Physiology and Chronic Health Evaluation (APACHE) II score, and the Clinical Pulmonary Infection Score (CPIS), were analyzed. Vital signs, laboratory parameters, arterial blood gases, ventilatory parameters, life-sustaining treatments, and infection-related data during admission were also included. The data used in this study were the worst values in the first 24 h after ICU admission. The primary outcome of this study was 28 day mortality. The secondary outcomes were the duration of ventilation, the length of the ICU stay, and the ICU mortality rate. Continuous variables were compared using the t test or one-way analysis of variance. Logistic regression analysis was performed with 28 day mortality as the dependent factor, including variables that were significant (p < 0.2) in univariate analysis. Data were analyzed using IBM SPSS version 21.0. Frequencies and percentages, and means and standard deviations, were calculated for descriptive statistics.
Bivariate analysis was performed using the chi-square test for categorical variables. Among 742 patients with sepsis, 253 patients who were older than 65 years were included in this study. Patients in the treatment group had a significantly shorter duration of mechanical ventilation (P = 0.045) and lower ICU mortality and 28 day mortality compared with those in the control group. There was no significant difference in the duration of the ICU stay between the groups. In univariate analysis, factors associated with 28 day mortality included heart failure, tumor, need for renal replacement therapy, and days of mechanical ventilation. GDLPT was a protective factor for 28 day mortality (odds ratio, 0.379; 95% confidence interval (CI), 0.187\u20130.766). Pneumonia is frequently encountered in older patients admitted to the ICU, with an incidence rate of >60% in those with sepsis. With aging, there are significant changes in the anatomical and physiological function of the lungs. Older people are more likely to develop pulmonary complications and underlying diseases, including COPD, aspiration, pneumonia, tumor, heart failure, and chronic renal failure, and have a poor prognosis. In this nurse-led GDLPT study, pain assessment, delirium assessment, and active early activities were carried out every 6 h by nurses. Delirium occurs in 11 to 42% of older patients and has adverse outcomes and high health care costs. In this study, nurse-led GDLPT significantly shortened the duration of ventilation. Prolonged mechanical ventilation might result in an increased incidence of ventilator-associated pneumonia, which is clinically meaningful, especially for older patients with pneumonia. A high frequency of antibiotic use and antibiotic resistance in patients is caused by ventilator-associated pneumonia. The prevalence of multidrug resistance is increasing, and ventilator-associated pneumonia caused by multidrug-resistant organisms is often fatal in the ICU.
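The protective odds ratio reported above is the exponentiated logistic-regression coefficient, with a Wald 95% CI. The sketch below back-calculates the coefficient and its standard error from the published OR 0.379 (95% CI 0.187\u20130.766); these derived inputs are assumptions for illustration, since the raw coefficient and standard error are not given in the text.

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression
    coefficient (beta) and its standard error (se)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# beta and se back-calculated from OR 0.379 (95% CI 0.187-0.766);
# an assumption, not values reported in the study.
beta = math.log(0.379)
se = (math.log(0.766) - math.log(0.187)) / (2 * 1.96)
or_, lo, hi = odds_ratio_ci(beta, se)
print(round(or_, 3), round(lo, 3), round(hi, 3))
```

Because the CI is symmetric on the log-odds scale, the recovered interval approximately reproduces the reported bounds (small discrepancies reflect rounding in the published values).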
At present, older patients with a critical illness are frequently admitted to the ICU. However, data on older patients with pneumonia are relatively rare, and there have been few evidence-based medical reports. Managing pneumonia in critically ill older patients is a complex issue. Aging, comorbidities, frailty, and other factors in older patients significantly increase the management requirements and risks for those with critical illness. Nurse-led GDLPT significantly shortens the duration of ventilation and improves the 28 day mortality rate in older patients with sepsis and pneumonia. Nurse-led GDLPT is a new clinical intervention for the refined management of older patients with pneumonia, and it promotes the recovery of older patients with severe pneumonia. The original contributions presented in the study are included in the article/supplementary material. The studies involving human participants were reviewed and approved by the Institutional Review Board of Peking Union Medical College Hospital (No. JS-1170). The patients/participants provided their written informed consent to participate in this study. The study was registered at chictr.org.cn (identifier: ChiCTR1900025850). NC, QL, HW, and JS contributed to the conception of the study, data interpretation, and drafted the manuscript. WH, HL, and MZ contributed to the data collection and data analysis. WC and ZL contributed to data collection and interpretation and critically revised the manuscript for important intellectual content. All authors approved the final version of the manuscript. The work was supported by the National Natural Science Foundation of China (No. 82072226), Beijing Municipal Science and Technology Commission (No. Z201100005520049), Fundamental Research Funds for the Central Universities (No. 3332021018), Non-profit Central Research Institute Fund of Chinese Academy of Medical Sciences (No. 2019XK320040), Tibet Natural Science Foundation (No.
XZ2019ZR-ZY12(Z)), and Excellence Program of Key Clinical Specialty of Beijing in 2020 (No. ZK128001). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
{"text": "However, how cells prevent PIP2 from accumulating in intracellular membrane\u00a0compartments, despite constant intermixing and exchange of lipid membranes, is poorly understood. Using the C. elegans early embryo as our model system, we show that the evolutionarily conserved lipid transfer proteins, PDZD-8 and TEX-2, act together with the PIP2 phosphatases, OCRL-1 and UNC-26/synaptojanin, to prevent the build-up of PIP2 on endosomal membranes. In the absence of these four proteins, large amounts of PIP2 accumulate on endosomes, leading to embryonic lethality due to ectopic recruitment of proteins involved in actomyosin contractility. PDZD-8 localizes to the endoplasmic reticulum and regulates endosomal PIP2 levels via its lipid-harboring SMP domain. Accumulation of PIP2 on endosomes is accompanied by impairment of their degradative capacity. Thus, cells use multiple redundant systems to maintain endosomal PIP2 homeostasis. Different types of cellular membranes have unique lipid compositions that are important for their functional identity. Here, the authors use the C. elegans embryo to identify lipid transfer proteins and phosphatases that are critical for endosomal lipid homeostasis. Cellular membranes have distinct lipid compositions despite intermixing, and it is unclear why plasma membrane lipids do not accumulate on endosomes.
In particular, the unique distributions of seven phosphoinositides (PIPs) within various membrane-bound organelles and the plasma membrane (PM) contribute to the distinct identities and functions of these membranes. PIPs are generated by phosphorylation of the 3, 4, or 5 positions of the inositol head-group of their precursor, phosphatidylinositol, by dedicated kinases and phosphatases that localize to specific membrane compartments. Among the seven PIPs, phosphatidylinositol 4,5-bisphosphate [PIP2] is particularly enriched in the PM. PIP2 recruits proteins involved in non-muscle actomyosin contraction, including anillin, actin, non-muscle myosin (myosin II), RhoA GEF, and RhoA, either directly or indirectly, to the PM. This enables contractility of the cell cortex. During cell division, recruitment of these proteins to the contractile ring\u2014a unique region of the PM\u2014is essential for cleavage furrow ingression and subsequent cytokinesis. PIP2 is also essential for cytoskeletal remodeling and clathrin-mediated endocytosis. Despite the critical importance of PIP2 in these cellular processes, the mechanisms that prevent PIP2 from accumulating in intracellular membrane compartments are not fully understood. Cells establish and maintain specific lipid compositions within different cellular membranes. Even with the constant intermixing of the PM with endosomal compartments through vesicular transport involving membrane budding and fusion reactions, levels of PIP2 in endosomal membranes are remarkably low. This is achieved, in part, through the coordinated actions of various PIP2 phosphatases. For example, the PIP2 5-phosphatase, OCRL1, facilitates the removal of PIP2 from endosomes by acting at multiple sites, including in endosomes and during late stages of clathrin-mediated endocytosis. In addition, synaptojanin, another major PIP2 phosphatase with 4- and 5-phosphatase activities, removes PIP2 from clathrin-coated membranes and facilitates clathrin disassembly during clathrin-mediated endocytosis.
Importantly, mutations in OCRL1 and synaptojanin have been linked to human disorders (Lowe syndrome and Dent\u2019s disease for OCRL1, and Parkinson\u2019s disease for synaptojanin), supporting the critical importance of maintaining cellular PIP2 homeostasis. However, even in the absence of these phosphatases, levels of PIP2 in endosomes are not severely elevated in many cell types, indicating that there are other mechanisms for suppressing PIP2 accumulation in endosomes. Growing evidence suggests that lipid transfer proteins (LTPs) transport specific lipids, including PIPs, between different cellular compartments (independent of membrane trafficking), thereby helping to maintain the unique lipid compositions of cellular membranes. Thus, LTPs may act in parallel with PIP2 phosphatases to facilitate the removal of PIP2 from endosomal membranes to counteract the intermixing of bilayer lipids. One evolutionarily conserved family of LTPs that may regulate cellular phosphoinositide homeostasis are proteins that contain the synaptotagmin-like mitochondrial-lipid binding protein (SMP) domain. SMP proteins localize to various membrane contact sites and transport a wide variety of lipid species through their characteristic lipid-harboring SMP domain. In metazoans, there are four classes of SMP proteins: E-Syts, TMEM24, PDZD8, and TEX2. E-Syts and TMEM24 localize to ER-PM contact sites and facilitate non-vesicular transport of diacylglycerol and phosphatidylinositol, respectively. The functions of PDZD8 and TEX2 are less clear. In yeast, the TEX2 homolog, Nvj2, localizes to ER-vacuole contact sites at steady state and moves to ER-Golgi contacts upon ER stress or ceramide overproduction.
Nvj2 then facilitates the non-vesicular transport of ceramide from the ER to the Golgi to counteract ceramide toxicity. In contrast, PDZD8 localizes to ER-mitochondria and ER-late endosome contacts. PDZD8 has been shown to control Ca2+ dynamics at ER-mitochondria contacts and has been suggested to regulate endosomal maturation at ER-endosome contacts. Whether SMP proteins contribute to cellular phosphoinositide homeostasis remains unknown. LTPs function primarily at membrane contact sites, where membrane-bound organelles are in close apposition to one another and the PM. In this study, we investigated the role of SMP proteins in cellular PIP2 homeostasis using the Caenorhabditis elegans\u00a0(C. elegans)\u00a0early embryo as our model system. By time-lapse imaging the earliest stages of embryogenesis, which are highly stereotypic in C. elegans, and systematically knocking out all four SMP proteins, we found that homologs of PDZD8 and TEX2 (PDZD-8 and TEX-2 in C. elegans) acted redundantly to suppress the build-up of endosomal PIP2. In the absence of PDZD-8 and TEX-2, PIP2 accumulated within endosomal membranes, resulting in ectopic recruitment of proteins that are normally involved in cortical actomyosin contraction to endosomes. Additional depletion of two PIP2 phosphatases, namely OCRL-1 and UNC-26 (homologs of OCRL1 and synaptojanin in C. elegans), exacerbated PIP2 accumulation, leading to massive appearance and clustering of abnormally large PIP2-enriched endosomes, lack of myosin II-mediated PM contraction, failure of cytokinesis, and embryonic lethality. In addition, the accumulation of PIP2 in endosomes results in major dysfunction of their degradative capacity. We found that both PDZD-8 and TEX-2 localized to the ER and that PDZD-8 additionally localized to contacts formed between the ER and RAB-7-positive late endosomes. Further, our in vitro lipid transport assay revealed that the SMP domain of PDZD-8 transports various PIPs, including PIP2, between membranes.
Accordingly, specific disruption of the PDZD-8 SMP domain was sufficient to induce aberrant accumulation of PIP2 on endosomes. Taken together, these results demonstrate that PDZD-8 regulates endosomal PIP2 levels via its SMP domain, acting together with PIP2 phosphatases and TEX-2 to counteract the build-up of endosomal PIP2. This maintains the cellular membrane identities required for normal embryogenesis and cell division. In this study, we identified a critical role for SMP proteins in cellular PIP2 homeostasis. PIP2 within the PM plays a critical role in the coordinated recruitment of various regulators of actomyosin contraction to the cell cortex. To investigate the potential role of SMP proteins in PIP2 homeostasis in vivo, we chose the C. elegans early embryo as our model system because of its highly stereotypic cell divisions and transparency. In C. elegans there are four SMP proteins, namely C53B4.4, F55C12.5, R11G1.6, and ESYT-2. Individual knockouts of PDZD-8, TEX-2, and ESYT-2 were obtained via CRISPR/Cas9-mediated gene editing, targeting exons that encode the SMP domain, and animals carrying the tm10626 allele, a tmem-24 deletion, were also used. Quadruple knockout (QKO) animals lacking all four SMP proteins were generated. Early embryogenesis was imaged under spinning disc confocal (SDC) microscopy to assess potential changes in PIP2 distribution. After fertilization, PIP2, as assessed by mCherry::PHPLC\u03b41, was uniformly distributed across the entire cell cortex in WT embryos throughout this process. In contrast, PIP2 in QKO embryos aberrantly accumulated in the cytoplasm as distinct puncta; this accumulation was particularly pronounced during polarity establishment and was maintained through mitosis and cytokinesis. The PM of C. elegans early embryos is enriched in PIP2, and therefore generally associated with high levels of non-muscle myosin II, NMY-2, which controls contractility of the PM and subsequent cytokinetic events.
To investigate whether aberrant accumulation of PIP2 in the cytoplasm results in ectopic myosin recruitment, QKO embryos expressing both mCherry::PHPLC\u03b41 and NMY-2 tagged with GFP (NMY-2::GFP) were imaged. Strikingly, NMY-2::GFP was present in most mCherry::PHPLC\u03b41 puncta, indicating ectopic recruitment of NMY-2 to aberrant, PIP2-positive cellular structures in QKO embryos. SDC-SIM revealed that NMY-2::GFP was recruited to highly mobile small vesicles; recruitment of ANI-1 and actin to these structures was also observed. To determine the identity of the cellular structures that accumulated PIP2, ANI-1, actin, and NMY-2, we examined the extent of co-localization between wrmScarlet::ANI-1 (or NMY-2::GFP) and various organelle markers, including lysotracker dye for lysosomes, mitotracker dye for mitochondria, GFP-tagged EEA-1 (GFP::EEA-1) for early endosomes, GFP-tagged RAB-7 (GFP::RAB-7) for late endosomes, and GFP-tagged RAB-11.1 (GFP::RAB-11.1) for recycling endosomes. No apparent co-localization was observed between NMY-2::GFP or wrmScarlet::ANI-1 and lysotracker, mitotracker, or GFP::EEA-1. In contrast, a fraction of puncta were transiently positive for GFP::RAB-11.1, whereas 22.41% of wrmScarlet::ANI-1 puncta (n\u2009=\u200958 puncta from 27 embryos) were transiently positive for GFP::RAB-7, indicating that PIP2 accumulated on endosomal structures that possessed markers of late and recycling endosomes. PIP2 in the cell is generated by phosphatidylinositol 4-phosphate 5-kinases (type I PIP kinases or PIP5Ks). In C. elegans, PPK-1 is the sole PIP5K. To determine the subcellular localization of PPK-1 in early embryos, we introduced a sequence encoding EGFP-tagged PPK-1 (PPK-1::EGFP) into its endogenous locus by using CRISPR/Cas9 (see Methods). In WT embryos, PPK-1::EGFP signals were detected exclusively on the PM, where PIP2 is most enriched; this localization was unchanged in QKO embryos, indicating that the accumulation of PIP2 in QKO embryos was not caused by ectopic recruitment of PPK-1 to endosomes.
In their absence, PIP2 aberrantly accumulates on endosomes, leading to ectopic recruitment of various proteins involved in actomyosin contraction to these membrane compartments. These results suggest that SMP proteins inhibit the build-up of PIP2 and non-muscle myosin on endosomes. To identify which SMP proteins are involved in this process, we generated single, double, and triple knockouts of the four SMP proteins. These mutants were systematically examined for the presence of NMY-2::GFP puncta in early embryos. WT animals, as well as all single knockouts, showed no cytoplasmic accumulation of NMY-2::GFP, whereas pdzd-8; tex-2 double knockouts (DKO) exhibited aberrant accumulation of NMY-2::GFP puncta. Triple knockouts containing the pdzd-8; tex-2 combination also exhibited NMY-2::GFP puncta, whereas other combinations did not. These data indicate that PDZD-8 and TEX-2 act redundantly to suppress the build-up of PIP2 and NMY-2 on endosomes. Levels of PIP2 in the PM and endosomes are tightly controlled by a balance of activities between PIP2 kinases and phosphatases, which localize to different cellular compartments. Several PIP2 phosphatases, including OCRL1 and synaptojanin, have been shown to suppress the accumulation of PIP2 on endosomal membranes. We hypothesized that PDZD-8 and TEX-2 may act together with the\u00a0PIP2 phosphatases, OCRL-1 and UNC-26\u00a0(homologs of OCRL1 and synaptojanin\u00a0in C. elegans),\u00a0to inhibit the build-up of PIP2 on endosomes. To examine this possibility, we simultaneously depleted OCRL-1, UNC-26, PDZD-8, and TEX-2, and the resulting phenotype was compared to that of OCRL-1 and UNC-26 depletion alone. WT animals, unc-26 null mutants, and unc-26; pdzd-8; tex-2 triple mutants were treated with RNA interference (RNAi) against OCRL-1. The resulting early embryos were imaged using SDC microscopy to examine the distribution of PIP2 (mCherry::PHPLC\u03b41) and NMY-2 (NMY-2::GFP).
In WT or unc-26 mutant early embryos treated with OCRL-1 RNAi, PIP2 and NMY-2 remained tightly associated with the PM, although some cytoplasmic PIP2-positive structures were observed in unc-26 mutant embryos treated with OCRL-1 RNAi compared to WT embryos treated with OCRL-1 RNAi (Fig.). In contrast, embryos additionally lacking PDZD-8 and TEX-2 accumulated PIP2-positive vesicles with diameters ranging from 0.5 to 3.8 µm in the cytoplasm (Fig.). These PIP2-positive vesicles clustered at the cellular periphery and became a sink for NMY-2::GFP, absorbing most NMY-2::GFP from the cell cortex (Fig.). Not all NMY-2::GFP was associated with PIP2-positive vesicles. This heterogeneity was explained by real-time imaging of NMY-2::GFP and the PIP2 vesicles over time: the association of NMY-2::GFP with large PIP2-positive vesicles was often transient, generally lasting only ~10 s (Fig.). We next quantified PIP2 levels in the PM and cytoplasm (including PIP2-positive vesicles) using mCherry::PHPLCδ1 and found a dramatic increase in PIP2 levels in intracellular membrane compartments in unc-26; DKO mutants treated with OCRL-1 RNAi compared to WT or unc-26 single mutants treated with OCRL-1 RNAi (Fig.). To test whether the PIP2-positive vesicles in unc-26; DKO early embryos treated with OCRL-1 RNAi originated from the PM, endocytosis was blocked by RNAi-mediated depletion of dynamin, an essential protein that catalyzes membrane fission during clathrin-mediated endocytosis93. RNAi of DYN-1, the sole homolog of dynamin in C. elegans, resulted in severe early embryonic lethality after fertilization, making it difficult to isolate early embryos for microscopy. Thus, one-cell embryos were directly imaged within the uterus of adult worms without dissection (Fig.). These results suggest that PDZD-8 and TEX-2 act together with PIP2 phosphatases to inhibit the build-up of PIP2 on endosomes after PM PIP2 enters endosomal systems through endocytosis. 
Simultaneous depletion of all these components results in massive accumulation of PIP2 within large, abnormal vesicles, leading to aberrant distribution of actomyosin and cytokinesis defects. These results support our hypothesis that PDZD-8 and TEX-2 act together with PIP2 phosphatases. In the C. elegans embryo, a network of actin and non-muscle myosin forms at the cell cortex and induces actomyosin contractility76. This actomyosin contractility regulates the periodic contraction of the PM during polarity establishment82, which leads to ingression of the pseudocleavage furrow. Mislocalization of NMY-2 from the PM to aberrant, large PIP2-positive vesicles was accompanied by reduced cortical contractility in unc-26; DKO mutants treated with OCRL-1 RNAi (34.8% of WT) and in unc-26; DKO mutants (17.8% of WT), but not in unc-26 mutants (only ~10% reduction) (Fig.). Thus, SMP proteins contribute to PIP2 homeostasis. To gain mechanistic insights into their functions, we examined the subcellular localization of these proteins in early embryos by using CRISPR/Cas9-based methods to introduce sequences encoding mNeonGreen-tagged PDZD-8 (PDZD-8::mNeonGreen) or EGFP-tagged TEX-2 (TEX-2::EGFP) into their endogenous loci (see Methods). Both PDZD-8::mNeonGreen and TEX-2::EGFP signals were detected in nerve rings within the head region of adult worms, confirming successful tagging of the endogenous loci. We also expressed PDZD-8::EGFP in C. elegans early embryos, or TEX-2::EGFP heterologously in COS-7 cells. PDZD-8::EGFP and TEX-2::EGFP co-localized with the ER marker RFP-Sec61β, consistent with their localization to the ER (Fig.). To examine whether PDZD-8 localizes to ER-late endosome contacts in C. elegans early embryos, we generated another knock-in strain that expresses mCherry-tagged RAB-7 from its endogenous locus (mCherry::RAB-7) (see Methods). mCherry::RAB-7 signals were well detected on vesicular structures in early embryos (Fig.). We next purified the SMP domain of PDZD-8 (SMPPDZD-8) and asked whether it could transport PIPs, including PIP2, between membranes. 
We assessed transport via a fluorescence resonance energy transfer (FRET)-based in vitro lipid transport assay. We prepared donor liposomes [DOPC containing PIP2 (4%) and Rhodamine-PE (2%)] and acceptor liposomes [DOPC (100%)] and mixed them with an NBD-labeled PH domain from the FAPP protein (NBD-PHFAPP). We then monitored the transport of PIP2 from donor to acceptor liposomes. The addition of purified SMPPDZD-8 to the mixture of donor and acceptor liposomes resulted in the dequenching of NBD signals over time in a concentration-dependent manner, revealing that SMPPDZD-8 can transport PIP2 between liposomes (Fig.). SMPPDZD-8 transported PI, PI(4)P, and PIP2 with similar efficiency, while it transported PIP3 less efficiently between membranes. We next tested PS transport. In this assay, donor liposomes and acceptor liposomes [DOPC (100%)] were prepared and mixed with an NBD-labeled C2 domain from Lactadherin (NBD-C2Lact; instead of the NBD-PHFAPP used for PIP transport assays), and transport of PS from donor to acceptor liposomes was monitored. The addition of purified SMPPDZD-8 to the mixture of donor and acceptor liposomes resulted in the dequenching of NBD signals in a concentration-dependent manner over time, similar to PIP2 (Fig.). To test sterol transport, we prepared donor liposomes [containing DHE, 90% DOPC] and acceptor liposomes containing DNS-PE, and monitored FRET between transported DHE and DNS-PE on acceptor liposomes. The addition of SMPPDZD-8 did not result in DHE transport between liposomes. Finally, we prepared donor liposomes [DOPC, NBD-labeled lipid (4%), and Rhodamine-PE (2%)] and acceptor liposomes [DOPC (100%)] and monitored dequenching of NBD fluorescence of the NBD-labeled lipids over time. We could not detect significant dequenching for NBD-PE or NBD-ceramide upon addition of SMPPDZD-8, in contrast to PIP2 transport mediated by SMPPDZD-8. 
The addition of PS, PI, or PA to donor liposomes did not reduce the efficiency of PIP2 transport between liposomes, suggesting that the presence of other anionic lipids in donor membranes does not interfere with the ability of SMPPDZD-8 to transport PIP2 (Fig.). Taken together, these results suggest that SMPPDZD-8 transports various PIPs, including PIP2, and other phospholipids, such as PS and possibly PA, but not cholesterol or certain other lipids (e.g., PE and ceramide). To test the requirement for the SMP domain in vivo, we generated a pdzd-8 allele lacking the SMP domain [pdzd-8 (ΔSMP)] at the endogenous locus via CRISPR/Cas9-based gene editing. The resulting early embryos were imaged under SDC microscopy to examine the presence of cytoplasmic NMY-2::GFP puncta. In tex-2; esyt-2; tmem-24 mutants, pdzd-8 (ΔSMP) induced cytoplasmic puncta of NMY-2::GFP to levels comparable to those seen in QKO animals, confirming the importance of the SMP domain for PDZD-8 function in vivo (Fig.). We also introduced, at the endogenous locus, point mutations predicted to block lipid insertion into the hydrophobic groove of the SMP domain; crossing this pdzd-8 mutant into tex-2; esyt-2; tmem-24 mutants induced cytoplasmic puncta of NMY-2::GFP, supporting the importance of SMP-mediated lipid transport for PDZD-8 function (Fig.). Similarly, a tex-2 allele lacking the SMP domain [tex-2 (ΔSMP)] was inserted into the endogenous locus via CRISPR/Cas9-based gene editing; in animals that also lack PDZD-8, ESYT-2, and TMEM-24, tex-2 (ΔSMP) induced cytoplasmic puncta of NMY-2::GFP similar to those seen in QKO animals, supporting the importance of the TEX-2 SMP domain for TEX-2 function in vivo (Fig.). We next combined pdzd-8 (ΔSMP) and tex-2 (ΔSMP) with additional knockouts of tmem-24 and esyt-2. We then treated these mutants with OCRL-1 RNAi and compared the resulting early embryos under SDC microscopy to assess changes in PIP2 distribution by mCherry::PHPLCδ1. 
We observed massive clustering of large PIP2-positive vesicles, with increased levels of PIP2 in the cytoplasm compared with the PM, in both mutant backgrounds compared to the unc-26 mutant background (Fig.), indicating that accumulation of PIP2 occurs on large abnormal vesicles in the absence of functional SMP domains. Mutant embryos were monitored under SDC microscopy to assess the potential accumulation of PIP2 on late endosomes, using mCherry::PHPLCδ1 and the late endosome marker GFP::RAB-7 (Fig.). We further examined co-localization of PIP2-positive vesicles with two other endosomal markers, GFP::RAB-5 for early endosomes and GFP::RAB-11.1 for recycling endosomes. PIP2-positive vesicles co-localized with both GFP::RAB-5 and GFP::RAB-11.1 [39.2 ± 3.4% (n = 14 embryos)], suggesting that PIP2 accumulates throughout the endosomal system in the absence of PDZD-8, TEX-2, and the PIP2 phosphatases (Fig.). The massive accumulation of PIP2 on endosomes in the simultaneous absence of PDZD-8, TEX-2, and the PIP2 phosphatases may have caused disruption of endosomal functions. In C. elegans, CAV-1, a homolog of caveolin-1, is rapidly endocytosed from the PM and degraded via clathrin-mediated endocytosis within one cell cycle after fertilization100. Using GFP-tagged CAV-1 (CAV-1::GFP) as a model cargo for degradation, we assessed potential changes in the degradative capacity of endosomes in early embryos from OCRL-1 RNAi-treated unc-26 mutants lacking both PDZD-8 and TEX-2. In WT embryos, CAV-1::GFP was rapidly endocytosed from the PM and degraded before completion of cytokinesis, as previously reported100 (Fig.). In the mutant embryos, however, degradation of CAV-1::GFP was markedly delayed, and undegraded CAV-1::GFP signals were clearly present in the cytoplasm even after completion of cytokinesis (Fig.). Strikingly, CAV-1::GFP-positive structures overlapped with PIP2-positive vesicles and late endosomes (marked by endogenously tagged mCherry::RAB-7): 47.7% of PIP2-positive vesicles and 30.8 ± 3.1% of late endosomes were positive for CAV-1::GFP (n = 13 embryos) compared to WT (Fig.). 
Taken together, our results demonstrate that the simultaneous absence of SMP proteins and PIP2 phosphatases leads to massive accumulation of PIP2 in endosomes, which disrupts their degradative capacity. This alteration in turn resulted in the ectopic recruitment of the actomyosin machinery to aberrant endosomes, defects in cytokinesis, and the absence of cortical ruffling, which collectively led to massive embryonic lethality in C. elegans. In summary, PIP2 homeostasis is maintained during early embryogenesis in vivo by the SMP proteins PDZD-8 and TEX-2, and these proteins work together with PIP2 phosphatases to prevent the build-up of PIP2 on endosomes. Key findings of the current study are the following: Using C. elegans early embryos as our in vivo model system, we found that depleting all four SMP proteins led to accumulation of PIP2 on endosomes. This PIP2 accumulation resulted in ectopic recruitment of anillin and actomyosin from the PM to endosomes. Further, we examined single, double, triple, and quadruple knockouts of SMP proteins and found that PDZD-8 and TEX-2 play critical roles in preventing PIP2 accumulation on endosomes. We found that PDZD-8 and TEX-2 act together with the PIP2 phosphatases, OCRL-1 and UNC-26/synaptojanin, to suppress the build-up of PIP2 on endosomes. In their simultaneous absence, PIP2 accumulated within aberrantly large endosomal membranes, inducing ectopic recruitment of myosin II to these membranes. This resulted in parallel reductions in actomyosin contraction at the PM, leading to embryonic lethality due to defects in both cleavage furrow ingression and cortical ruffling. These data indicate that limiting endosomal PIP2 levels is critically important for cell viability. The SMP domain of PDZD-8 transported various PIPs, including PIP2, between membranes, and targeted deletion of the PDZD-8 and TEX-2 SMP domains disrupted PIP2 homeostasis similarly to their complete absence. 
Further, introducing mutations into the PDZD-8 SMP domain that are predicted to prevent it from capturing lipids was sufficient to disrupt the cellular function of PDZD-8. Both TEX-2 and PDZD-8 localize to the ER, and PDZD-8 also localizes to the sites of contact formed between the ER and late endosomes marked by RAB-7. Using an in vitro lipid transfer assay, we found that purified PDZD-8 SMP domains transported various PIPs, including PIP2, between membranes. We found that PIP2 accumulated not only on late endosomes but also on early and recycling endosomes in PDZD-8, TEX-2, OCRL-1, UNC-26 quadruple mutants. This disrupted the degradative capacity of the endosomal system. Our results support a model whereby PDZD-8 regulates endosomal PIP2 levels at ER-late endosome contact sites via its lipid-harboring SMP domain to help maintain the distribution of PIP2 in cellular membranes. Further, PDZD-8 functioned redundantly with TEX-2, OCRL-1, and UNC-26 (see below for further discussion). We have thus demonstrated that endosomal PIP2 homeostasis is essential for early embryogenesis. In C. elegans, PIP2 forms a dynamic cortical structure during early embryogenesis that overlaps with F-actin and is coincident with various actin regulators, such as RHO-1, CDC-42, and RhoGEF/ECT-27. However, how PIP2 is restricted to the PM and its physiological relevance to early embryogenesis have both remained unclear. Emerging evidence from various model systems, including yeast, flies, and human cells, supports the importance of PIP2 enrichment in the PM for the coordinated recruitment of various proteins required for actomyosin contractility and cytokinesis74. In humans and yeast, PIP2 directly binds key regulators of these processes, including RhoGEF/ECT2, RhoA (RhoA in humans and Rho-1 in budding yeast), and anillin (Mid1 in yeast)92. 
Anillin is an evolutionarily conserved scaffold protein that links RhoA, F-actin, and myosin II to the cell cortex for coordinating cell divisions and various other contractile processes, including epithelial mechanics and cell-cell adherens junctions107. A recent study in human cells suggested that anillin promotes cell contractility by inhibiting the dissociation of active GTP-bound RhoA from the PM8. On the other hand, pulsed activation and inactivation of RhoA was reported to precede PM recruitment of anillin during cortical ruffling in C. elegans early embryos82. Regardless of the sequence by which anillin and RhoA are recruited to the PM, the actomyosin contractile network that is regulated by these proteins is essential for both cortical contractility and subsequent furrow formation113. We found that the simultaneous depletion of PDZD-8 and TEX-2 resulted in aberrant accumulation of PIP2 on a subset of endosomes that was accompanied by the ectopic presence of anillin, myosin II, and actin. Additional depletion of the PIP2 phosphatases, OCRL-1, and UNC-26, induced the aberrant appearance of large PIP2-positive endosomes that further sequestered myosin II and anillin, leading to severe defects in cytokinesis and embryonic lethality. In this context, cortical ruffling was halted, reflecting general defects in actomyosin contractility at the cell cortex. Thus, our results are consistent with the critical importance of PIP2 in the PM for anchoring the actomyosin contractile network to the cell cortex. When levels of endosomal PIP2 rose above a certain threshold, regulators of the actomyosin contractility, including anillin, myosin II, and actin, were ectopically recruited to endosomes, causing insufficient cortical contractility and failure to form a furrow. 
Although growing evidence suggests a role for endosomal PIP2 in cell physiology4, our results demonstrate that highly redundant functions of PIP2 phosphatases and SMP proteins keep levels of endosomal PIP2 below a certain threshold to prevent failure of cytokinesis and animal development in vivo. We found that PDZD-8 and TEX-2 require their SMP domains to control endosomal PIP2 levels: targeted deletion of the SMP domains from these two proteins was sufficient to cause accumulation of PIP2 in endosomes. TEX-2 and PDZD-8 are both localized to the ER, whereas PDZD-8 also localized to contact sites between the ER and late endosomes. Further, we found that the SMP domain of PDZD-8 was able to extract and transport various PIPs, including PIP2, between membranes in vitro. Based on these results, we suggest that PDZD-8 may act at ER-late endosome contacts to extract PIP2 from late endosomes via its SMP domain, although the possibility that PDZD-8 primarily acts as a tethering protein for ER-late endosome contacts cannot be excluded. Extracted PIP2 could then be handled in two ways. First, PIP2 could be transported to the ER for dephosphorylation at the ER. Second, PIP2 could be presented to OCRL-1 and UNC-26 for rapid dephosphorylation on endosomes. As the phenotype resulting from the combined depletion of PDZD-8, TEX-2, OCRL-1, and UNC-26 was much stronger than that resulting from the depletion of OCRL-1 and UNC-26 alone, we favor the first scenario, in which other PIP2 phosphatases act on the ER "in cis" to remove PIP2 that is transported from endosomes, although we do not have direct evidence supporting a vectorial transfer of PIP2 from endosomes to the ER by PDZD-8/TEX-2 in the current study. One potential candidate for an ER-anchored PIP2 phosphatase is INPP5K114. 
Indeed, ER-endosome contacts have been implicated in the homeostatic regulation of another phosphoinositide, phosphatidylinositol 4-phosphate (PI4P), via concerted actions of non-vesicular PI4P transport from endosomes to the ER by an LTP [oxysterol-binding protein (OSBP)] and dephosphorylation of PI4P "in cis" by an ER-anchored PI4P phosphatase, Sac1115. Whether INPP5K and/or any other PIP2 phosphatases are involved in endosomal PIP2 homeostasis requires further investigation. Further studies are also needed to better understand the cross-talk between PDZD-8 and TEX-2. While we could not detect a particular association of TEX-2 with ER-endosome contacts, TEX-2 may transiently populate these contacts to help maintain endosomal PIP2 homeostasis. As the SMP domain of PDZD-8 is able to transport various lipids between membranes in vitro, PDZD-8 may also regulate the distribution of other lipids in addition to PIP2. In the absence of PDZD-8, TEX-2, OCRL-1, and UNC-26, levels of PIP2 were dramatically elevated on endosomes that were positive for early, recycling, and late endosome markers. Massive accumulation of PIP2 on endosomes disrupted their degradative capacity, as evidenced by the failure of CAV-1 degradation in early embryos. Our results suggest that PIP2 on these endosomes originated from the PM, based on (1) the absence of ectopic accumulation of PPK-1, the sole homolog of PIP5K in C. elegans, on PIP2-positive endosomes in mutants lacking SMP proteins; and (2) the suppression of cytoplasmic PIP2 accumulation in early embryos from unc-26; DKO mutants treated with OCRL-1 RNAi by blocking endocytosis with RNAi against DYN-1, the sole homolog of dynamin in C. elegans. In mammals, ORP2, an LTP that belongs to the OSBP family, has been implicated in regulating the levels of PIP2 in various cellular compartments, including the PM50 and recycling endosomes116. 
Thus, potential dysregulation of other LTPs involved in PIP2 regulation may also have contributed to aberrant PIP2 accumulation in the absence of SMP proteins. In mammalian cells, PDZD8 (the homolog of C. elegans PDZD-8) localizes to various membrane contact sites, including ER-mitochondria and ER-endosome contact sites72. PDZD8 interacts with Protrudin and localizes to ER-late endosome contacts via its interaction with Rab772. Based on its localization, it was proposed that PDZD8 may play a role in endosomal maturation72. The SMP domain of PDZD8 was reported to extract and transport various NBD-labeled lipids in vitro117. Further, Shirane et al. showed that co-expression of PDZD8 and Protrudin induces abnormally large vacuoles, proposing a model whereby PDZD8 transports lipids from the ER to late endosomes72. Our current study shows that the SMP domain of PDZD-8 transports PIPs, including PIP2, in vitro. Furthermore, we show that PDZD-8 functions together with TEX-2 and PIP2 phosphatases to play a critical role in maintaining endosomal PIP2 homeostasis via its SMP domain in C. elegans early embryos, providing insights into the physiological function of this evolutionarily conserved protein at ER-late endosome contacts. Mutations in OCRL1 and synaptojanin have both been linked to human disorders, supporting the critical importance of cellular PIP2 homeostasis in normal cell physiology118. In Drosophila S2 cells, removal of dOCRL (the homolog of OCRL1 in Drosophila) results in ectopic accumulation of PIP2 at enlarged endosomal compartments. This leads to aberrant recruitment of the cytokinetic machinery and severe defects in cleavage furrow formation and cytokinesis20, similar to what we observed in mutants simultaneously lacking PDZD-8, TEX-2, UNC-26, and OCRL-1 in C. elegans early embryos. However, cytokinesis defects are less pronounced in human cells lacking OCRL1119. Likewise, mice lacking synaptojanin 1 are viable at birth29. 
Thus, other proteins likely function redundantly to control endosomal PIP2 homeostasis in human cells. Our results are consistent with this notion, providing supporting evidence that OCRL1 and synaptojanin may act redundantly with the SMP proteins, PDZD8 and TEX2, to prevent build-up of PIP2 in endosomes. Future studies are needed to examine the role of mammalian PDZD8 and TEX2 in endosomal PIP2 homeostasis. Modulating their functions may help alleviate some disease conditions caused by mutations in OCRL1 or synaptojanin. Strains used in this study include zuIs45[nmy-2::NMY-2::GFP + unc-119(+)], zbIs2(pie-1::lifeACT::RFP), ojIs35[pie-1::GFP::rab-11.1 + unc-119(+)], pwIs20[pie-1p::GFP::rab-5 + unc-119(+)], weIs15[pie-1p::GFP::eea-1(FYVEx2) + unc-119(+)], ocfIs2[pie-1p:mCherry::sp12::pie-1 3'UTR + unc-119(+)], ojIs23[pie-1p::GFP::C34B2.10], pwIs28[pie-1p::cav-1::GFP(7) + unc-119(+)], unc-26(s1710), and R11G1.6(tm10626). Details of C. elegans strains used in this study are listed in Supplementary Table. All strains were grown on seeded nematode growth medium (NGM) plates. Specific deletion alleles of pdzd-8 [pdzd-8(syb664) and pdzd-8(syb2977)], F55C12.5/tex-2 [tex-2(syb670) and tex-2(syb2190)], and esyt-2 [esyt-2(syb709)] were generated by SunyBiotech Corporation using the CRISPR/Cas9 genome editing method. For null deletion alleles, 1864 bp (315 bp to 2178 bp) was deleted from pdzd-8 to generate pdzd-8(syb664); 1960 bp (3877 bp to 5836 bp) was deleted from tex-2 to generate tex-2(syb670); and 1902 bp (627 bp to 2528 bp) was deleted from esyt-2 to generate esyt-2(syb709). For SMP domain-specific deletion alleles, 1237 bp (317 bp to 1553 bp) was deleted from pdzd-8 to generate pdzd-8(syb2977), and 1082 bp (4784 bp to 5865 bp) was deleted from tex-2 to generate tex-2(syb2190). 
The "A" of the initiation codon (ATG) of each gene is defined as 1 bp. A knock-in allele of tex-2, tex-2(yas46[F55C12.5::EGFP^3xFLAG]), was generated using the CRISPR/Cas9 genome editing method121. A CRISPR/Cas9 target site near the 3' end of the tex-2 gene was selected using the Cas9 guide RNA design tool (http://crispr.mit.edu)122. The gRNA used was TATTGTATCCCATGTGGAGTAGG. The Cas9-sgRNA construct pDD162_F55C12.5_gRNA (DJ53), together with co-injection markers and the plasmid pDD282_F55C12.5_GFP (DJ44) containing the donor template [GFP and a self-excision cassette (SEC) flanked by tex-2 homology arms], was microinjected into N2 worms. The SEC was removed by heat shock, generating SAH235, which carries TEX-2 tagged with EGFP at its C-terminus. Worms expressing wrmScarlet fused to the N-terminus of ANI-1 at endogenous expression levels, ani-1(syb1710[wrmScarlet::ANI-1]), were generated by SunyBiotech Corporation using the CRISPR/Cas9 genome editing method; cDNA encoding wrmScarlet was inserted immediately after the initiation codon (ATG) of ani-1. Worms expressing mNeonGreen fused to the C-terminus of C53B4.4/PDZD-8 at endogenous expression levels, C53B4.4(syb4099[C53B4.4::mNeongreen]), were generated by SunyBiotech Corporation using the CRISPR/Cas9 genome editing method; cDNA encoding mNeonGreen was inserted in-frame in the last common exon of C53B4.4/pdzd-8. Worms expressing EGFP fused to the C-terminus of PPK-1 at endogenous expression levels, ppk-1(syb4109), were generated by SunyBiotech Corporation using the CRISPR/Cas9 genome editing method. For the rab-7 knock-in, cDNA encoding mCherry was inserted immediately after the initiation codon (ATG) of rab-7. 
To generate a lipid-binding-deficient allele of pdzd-8, CTT at 493-495 bp (leucine) and ATC at 1107-1109 bp (isoleucine) of C53B4.4/pdzd-8 were sequentially replaced with TGG (tryptophan) using the CRISPR/Cas9 genome editing method to generate C53B4.4(syb3276) (SunyBiotech Corporation). A knock-in allele of rab-7, rab-7(sybIs2380), was generated accordingly. For protein purification, all proteins were overexpressed in bacteria. Cells were resuspended in lysis buffer supplemented with protease inhibitors together with 100 μg/ml lysozyme [at least 30 min incubation for C2Lact (C270A/C427A/H352C)] (Sigma-Aldrich/Merck) and 50 μg/ml DNase I (Sigma-Aldrich/Merck). Cells were lysed by sonication on ice in a Vibra Cell. The lysate was clarified by centrifugation at 47,000 × g at 4 °C for 20 min. The supernatants were incubated at 4 °C for 30 min with Ni-NTA resin (ThermoFisher Scientific), which had been equilibrated with 2.5 ml of wash buffer 1. The protein-resin mixtures were then loaded onto a column and allowed to drain by gravity. The column was washed once with 10 ml of wash buffer 1 and once with 10 ml of wash buffer 2, and then eluted with 1.25 ml of elution buffer 1. The proteins were concentrated using a Vivaspin 20 (MWCO 10 kDa) and further purified by gel filtration in elution buffer 2 using an AKTA Pure system. Relevant peaks were pooled, and the protein sample was concentrated. For NBD labeling, purified PHFAPP (T13C/C37S/C94S) proteins or purified C2Lact (C270A/C427A/H352C) proteins were mixed with a tenfold excess of N,N′-dimethyl-N-(iodoacetyl)-N′-ethylenediamine (IANBD-amide) after removal of TCEP by gel filtration on Superdex 75 Increase 10/300 GL. After ~14 h of incubation with rotation in a cold room, the reaction was stopped by adding a tenfold excess of L-cysteine to the mixture. The free IANBD-amide and excess L-cysteine were removed from the mixture by gel filtration on Superdex 75 Increase 10/300 GL. 
The success of NBD labeling was checked by SDS-PAGE. For liposome preparation, lipids in chloroform were dried under a stream of N2 gas, followed by further drying under vacuum for 2 h. Mole% of lipids used for the acceptor and donor liposomes in FRET-based lipid transfer assays are shown in Supplementary Table. Dried lipid films were hydrated (with vortexing and a 37 °C water bath), followed by extrusion 11 times using a Nanosizer with a pore size of 100 nm (T&T Scientific Corporation). All liposome-based lipid transfer assays were performed in a 96-well plate (Corning) using a Synergy H1 microplate reader. All reactions were performed in 60 μl volumes with a final lipid concentration of 0.6 mM. The buffer of the purified proteins was exchanged to HK buffer prior to all lipid transfer assays. The values of a blank solution (buffer only, without liposomes or proteins) were subtracted from the values at each time point. PIP transfer assays were performed as described97 with slight modifications. Donor and acceptor [100% DOPC] liposomes were added at a 1:1 ratio. PIP transfer mediated by SMPPDZD-8 was followed by measuring the NBD fluorescence signals of NBD-PHFAPP (0.5 μM) at 530 nm (bandwidth 12.5 nm) on excitation at 460 nm (bandwidth 12.5 nm) every 15 s for 60 min at room temperature. The amount of PIP (in μM) transferred from donor to acceptor liposomes corresponds to 6 × FNorm, where FNorm = 0.5 × (F − F0)/(Feq − F0). F0 corresponds to the NBD signal prior to the addition of SMPPDZD-8, and Feq corresponds to the equilibrium condition in which both donor and acceptor liposomes contain equal percentages of PIP. 
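The normalization above reduces to a few lines of arithmetic. The following sketch converts a raw NBD-PHFAPP reading into µM of PIP transferred using the stated formula; the function name and the example fluorescence values are hypothetical, not taken from the study:

```python
def pip_transferred_uM(F, F0, Feq, total_accessible_pip_uM=6.0):
    """Convert a raw NBD fluorescence reading into µM of PIP transferred.

    FNorm = 0.5 * (F - F0) / (Feq - F0); the transferred amount is
    total_accessible_pip_uM * FNorm (the factor 6 in 'PIP = 6 × FNorm'
    reflects the accessible PIP in the assay described above).
    """
    f_norm = 0.5 * (F - F0) / (Feq - F0)
    return total_accessible_pip_uM * f_norm

# Hypothetical trace: F0 before protein addition, Feq at equilibrium,
# and one reading partway through the 60-min recording.
F0, Feq = 100.0, 500.0
print(pip_transferred_uM(300.0, F0, Feq))  # 6 * 0.5 * 0.5 = 1.5 µM
```

At equilibrium (F = Feq), FNorm is 0.5 and the computed amount is half the total accessible PIP, consistent with donor and acceptor liposomes containing equal percentages of PIP.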
Liposome-binding assays indicated that NBD-PHFAPP (0.5 μM) is completely bound to the membrane when the surface-accessible amount of PIP is above 4 μM. PS transfer was followed by measuring the NBD fluorescence signals of NBD-C2Lact (0.5 μM) at 530 nm (bandwidth 12.5 nm) on excitation at 460 nm (bandwidth 12.5 nm) every 15 s for 60 min at room temperature. The amount of PS (in μM) transferred from donor to acceptor liposomes corresponds to 6 × FNorm, where FNorm = 0.5 × (F − F0)/(Feq − F0). F0 corresponds to the NBD signal prior to the addition of SMPPDZD-8, and Feq corresponds to the equilibrium condition in which both donor and acceptor liposomes contain equal percentages of PS. Initial transport rates were determined from the linear portion of each curve. For NBD-labeled lipid transfer assays, donor [NBD-labeled lipid, 2% Rhodamine-PE, 98% DOPC] and acceptor [100% DOPC] liposomes were added at a 1:1 ratio. Transfer of the NBD-labeled lipid mediated by SMPPDZD-8 was followed by measuring its NBD fluorescence at 538 nm on excitation at 460 nm every 15 s for 60 min at room temperature. Ten microliters of 2.5% DDM solution was then added to each well, and the maximum NBD signals of the NBD-labeled lipid in solubilized liposomes were measured for another 10 min. The values at the end of this 10-min recording were taken as the maximum NBD fluorescence and used to calculate the percentage of max NBD fluorescence. DHE transfer assays were performed as described99. Donor and acceptor liposomes were added at a 1:1 ratio, and reactions were initiated by the addition of SMPPDZD-8 to the mixture of donor and acceptor liposomes. The fluorescence intensity of DNS-PE, resulting from FRET between DHE (excited at 310 nm) and DNS-PE, was monitored at 525 nm every 15 s over 30 min at room temperature. 
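The percent-of-maximum normalization after detergent solubilization can be sketched as follows; the function name and example readings are hypothetical, with the blank subtraction applied as described above:

```python
def percent_of_max_nbd(signal, blank, max_signal):
    """Express a blank-subtracted NBD reading as a percentage of the
    maximum NBD fluorescence measured after solubilizing all liposomes
    with DDM at the end of the recording, as in the assay above."""
    return 100.0 * (signal - blank) / (max_signal - blank)

# Hypothetical readings (arbitrary units): mid-assay signal, blank well,
# and the post-DDM plateau taken as the maximum.
print(percent_of_max_nbd(signal=450.0, blank=50.0, max_signal=850.0))  # 50.0
```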
Data were expressed as the number of DHE molecules transferred using a calibration curve, which was obtained by measuring FRET signals of liposomes containing various percentages of DHE, 2.5% DNS-PE, and DOPC99. The mole number of DHE transferred from the donor to acceptor liposomes was then obtained using the formula derived from the linear fit of the calibration curve, and DHE transport rates were calculated from these data. For image quantification, the same arbitrary threshold (110-65535) was applied to the time-lapse images (5 s intervals) of both control and mutant embryos expressing the PIP2 biosensor mCherry::PHPLCδ1, to segment PIP2 puncta from the background (pixel size cut-off: 12-infinity). Quantification was performed every 12th frame, i.e., at 60-s intervals. The area and mean fluorescence intensity of each PIP2 punctum were measured, and the total fluorescence intensity of all PIP2 puncta was calculated for each frame. The number of PIP2 puncta was manually determined from the segmented images for each frame. Likewise, the same arbitrary threshold (118-65535) was applied to the time-lapse images (5 s intervals) of both control and mutant embryos expressing NMY-2::GFP, to segment NMY-2 puncta from the background (pixel size cut-off: 15-infinity). Quantification was performed every 12th frame (60-s intervals). The area and mean fluorescence intensity of each NMY-2 punctum were measured, and the total fluorescence intensity of all NMY-2 puncta was calculated for each frame. The number of NMY-2 puncta was manually determined from the segmented images for each frame. To segment cytoplasmic PIP2 structures from the background (pixel size cut-off: 12-infinity), the Process>Noise>Despeckle command was additionally used to remove background signals from the segmented mCherry::PHPLCδ1 images; quantification was performed every 12th frame (60-s intervals). 
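The per-frame puncta quantification (intensity threshold, pixel-size cut-off, then total intensity as area × mean) can be sketched as below. This is a minimal re-implementation, not the authors' ImageJ macro; the threshold and size values mirror the NMY-2::GFP settings above, while the array and function names are hypothetical:

```python
import numpy as np
from scipy import ndimage

def quantify_puncta(frame, lo=118, hi=65535, min_size=15):
    """Segment puncta by an intensity threshold and a pixel-size cut-off,
    then return (n_puncta, total_fluorescence), where each punctum's
    fluorescence is its area × mean intensity (its summed intensity)."""
    mask = (frame >= lo) & (frame <= hi)
    labels, n = ndimage.label(mask)          # connected-component labeling
    kept, total = 0, 0.0
    for i in range(1, n + 1):
        px = frame[labels == i]
        if px.size < min_size:               # discard objects below the cut-off
            continue
        kept += 1
        total += px.size * px.mean()         # area × mean = summed intensity
    return kept, total
```

Running this on each sampled frame (every 12th frame of a 5-s series) reproduces the per-frame totals described above; punctum counts in the study were confirmed manually.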
An ROI was drawn around the entire cytoplasmic region excluding the PM, and the area and mean fluorescence intensity of mCherry::PHPLCδ1 signals were measured. The total fluorescence intensity of mCherry::PHPLCδ1 (the product of area and mean fluorescence intensity) was then plotted. For in utero imaging, the same arbitrary threshold (110-65535) was applied to the images from RNAi-treated control and mutant worms expressing the PIP2 biosensor mCherry::PHPLCδ1, to segment cytoplasmic PIP2 structures from the background (pixel size cut-off: 12-infinity). The Process>Noise>Despeckle command was used to further remove background signals from the segmented images. An ROI was drawn around the entire cytoplasmic region, excluding the PM, of the first embryo in the uterus region of each adult worm, and the area and mean fluorescence intensity of mCherry::PHPLCδ1 signals were measured; the total fluorescence intensity (product of area and mean fluorescence intensity) was then plotted. For NMY-2, the same arbitrary threshold (188-65535) was applied to the time-lapse images (5 s intervals) of both control and mutant embryos expressing NMY-2::GFP, to segment cytoplasmic myosin structures from the background (pixel size cut-off: 15-infinity). Quantification was performed every 12th frame (60-s intervals). An ROI was drawn around the entire cytoplasmic region excluding the PM, and the area and mean fluorescence intensity of NMY-2::GFP signals were measured; the total fluorescence intensity (product of area and mean fluorescence intensity) was then plotted. To measure vesicle sizes, a line was manually drawn across the largest cytoplasmic PIP2 structure in each frame of the time-lapse images (5 s intervals) of both control and mutant embryos expressing the PIP2 biosensor mCherry::PHPLCδ1, and the diameter of the PIP2 structure was determined. 
Quantification was performed every 12 frame, or 60\u2009s interval. Gaussian curves were superimposed onto the frequency distribution of the diameter of cytoplasmic PIP2 structures.Line was manually drawn across the largest cytoplasmic PIP2 biosensor, mCherry::PHPLC\u03b41, to segment PIP2-positive membranes from the background (pixel size cut-off: 5-infinity). Process>Noise>Despeckle command was used to further remove the background signals from the segmented images. The area and mean fluorescence intensity of mCherry::PHPLC\u03b41 signals at endosomal membranes and total membranes were measured, and total fluorescence intensity of mCherry::PHPLC\u03b41 (product of area and mean fluorescence intensity) was used as an estimate of PIP2 levels in each compartment. First, ROI was drawn around cytoplasmic mCherry::PHPLC\u03b41 signals to determine PIP2 levels in endosomal membranes. Second, ROI was drawn around the entire embryo to determine total PIP2 levels. To obtain PIP2 levels in the PM, mCherry::PHPLC\u03b41 signals in endosomal membranes was subtracted from the total mCherry::PHPLC\u03b41 signals.The same arbitrary threshold (108-65535) was applied to a single frame (beginning of cleavage furrow ingression) of the original time-lapse images of 5\u2009s interval for both control and mutant embryos, expressing PIPLine was drawn across vesicular structures to plot fluorescence profiles of both GFP and mCherry/wrmScarlet signals. The normalized fluorescence profiles were plotted on the same graph to visualize the extent of co-localization.Representative images were selected from the polarity establishment phase and the mitosis phase of the same embryos undergoing early embryogenesis. The outline of each embryo was drawn manually for both the polarity establishment phase and mitosis phase, and the perimeter of the outline was measured. 
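The fixed-threshold puncta quantification used throughout these analyses (segment by an intensity threshold, discard objects below the pixel-size cut-off, then sum area \u00d7 mean fluorescence per frame) can be sketched outside ImageJ. This is a minimal illustration using numpy and scikit-image with made-up image data, not the authors' actual pipeline:

```python
import numpy as np
from skimage import measure

def quantify_puncta(frame, threshold=110, min_size=12):
    """Segment puncta by a fixed intensity threshold and a minimum-size
    filter; return (n_puncta, total_intensity), where total intensity is
    the sum over puncta of area * mean fluorescence (summed pixel values)."""
    mask = frame >= threshold
    labels = measure.label(mask)
    n, total = 0, 0.0
    for region in measure.regionprops(labels, intensity_image=frame):
        if region.area >= min_size:          # pixel size cut-off: 12-infinity
            n += 1
            total += region.area * region.mean_intensity
    return n, total

# analyze every 12th frame (60 s intervals for a 5 s time-lapse); toy movie
movie = np.random.default_rng(0).integers(0, 300, size=(120, 64, 64))
results = [quantify_puncta(movie[t]) for t in range(0, len(movie), 12)]
```

The same routine, with the thresholds stated above (118-65535, 188-65535, etc.) and a larger ROI, generalizes to the NMY-2 and compartment-level measurements.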
The perimeter of the embryo at the polarity establishment phase was divided by that of the polarity maintenance phase to obtain the cortical ruffling index.
The same arbitrary threshold (160-65535) was applied to the time-lapse images of 5\u2009s interval for both control and mutant embryos expressing CAV-1::GFP to segment cytoplasmic and PM CAV-1 from the background (pixel size cut-off: 12-infinity). The Process>Noise>Despeckle command was used to further remove background signals from the segmented images. Quantification was performed every 12th frame (60\u2009s intervals). For cytoplasmic CAV-1 signals, an ROI was drawn around the entire cytoplasmic region excluding the PM, and the area and mean fluorescence intensity of cytoplasmic CAV-1::GFP signals were measured. For PM CAV-1 signals, an ROI was drawn around the entire embryo to determine total CAV-1 levels. Cytoplasmic CAV-1::GFP signals were then subtracted from the total CAV-1::GFP signals to obtain PM CAV-1::GFP signals. The total fluorescence intensity of CAV-1::GFP (product of area and mean fluorescence intensity) was then plotted.
Quantification was performed as described previously [131]. In brief, the arbitrary threshold was applied to a single frame (beginning of cleavage furrow ingression) of the images of control and mutant embryos.
Linear fits for the lipid transfer assay and nonlinear fits (Gaussian curves) for the size distribution of cytoplasmic PIP2 structures were performed using Prism 8.0.1. Unless p\u2009<\u20090.0001, exact P values are shown within the figure legends for each figure. p\u2009>\u20090.05 was considered not significant. All data are presented as mean\u2009\u00b1\u2009SEM unless otherwise noted.
No statistical method was used to predetermine sample size, and the experiments were not randomized for live-cell imaging. Sample size and information about replicates are described in the figure legends. For obtaining representative micrographs, experiments were independently conducted at least eight times to confirm reproducibility.
An exception is an experiment in Supplementary Fig.\u00a0."} +{"text": "Acinetobacter baumannii is a multiantibiotic-resistant pathogen of global health care importance. Understanding Acinetobacter virulence gene regulation could aid the development of novel anti-infective strategies. Acinetobacter baumannii possesses a single divergent luxR/luxRI-type quorum-sensing (QS) locus named abaR/abaI. This locus also contains a third gene located between abaR and abaI, which we term abaM, that codes for an uncharacterized member of the RsaM protein family known to regulate N-acylhomoserine lactone (AHL)-dependent QS in other beta- and gammaproteobacteria. Here, we show that disruption of abaM via a T26 insertion in A. baumannii strain AB5075 resulted in increased production of N-(3-hydroxydodecanoyl)-l-homoserine lactone and enhanced surface motility and biofilm formation. In contrast to the wild type and the abaI::T26 mutant, the virulence of the abaM::T26 mutant was completely attenuated in a Galleria mellonella infection model. Transcriptomic analysis of the abaM::T26 mutant revealed that AbaM differentially regulates at least 76 genes, including the csu pilus operon and the acinetin 505 lipopeptide biosynthetic operon, that are involved in surface adherence, biofilm formation and virulence. A comparison of the wild type, abaM::T26 and abaI::T26 transcriptomes, indicates that AbaM regulates \u223c21% of the QS regulon including the csu operon. Moreover, the QS genes (abaI and abaR) were among the most upregulated in the abaM::T26 mutant. A. 
baumannii lux-based abaM reporter gene fusions revealed that abaM expression is positively regulated by QS but negatively autoregulated. Overall, the data presented in this work demonstrate that AbaM plays a central role in regulating A. baumannii QS, virulence, surface motility, and biofilm formation.
IMPORTANCE Acinetobacter baumannii is a multiantibiotic-resistant pathogen of global health care importance. Understanding Acinetobacter virulence gene regulation could aid the development of novel anti-infective strategies. In A. baumannii, the abaR and abaI genes that code for the receptor and synthase components of an N-acylhomoserine lactone (AHL)-dependent quorum sensing (QS) system are separated by abaM. Here, we show that although mutation of abaM increased AHL production, surface motility, and biofilm development, it resulted in the attenuation of virulence. AbaM was found to control both QS-dependent and QS-independent genes. The significance of this work lies in the identification of AbaM, an RsaM ortholog known to control virulence in plant pathogens, as a modulator of virulence in a human pathogen.
Acinetobacter baumannii is a Gram-negative opportunistic nosocomial pathogen that causes a wide range of infections in humans, most commonly pneumonia but also bacteremia, skin, soft tissue, and urinary tract infections, meningitis, and endocarditis. Its virulence factors include pili, capsular polysaccharide, iron acquisition systems, outer membrane vesicles, secretion systems, and phospholipases. A. baumannii and related pathogenic Acinetobacter spp. possess a LuxR/LuxRI QS system consisting of an AHL synthase (AbaI) and a transcriptional regulator (AbaR) that is activated on binding an AHL. Most pathogenic Acinetobacter spp. produce AHLs with acyl side chains of 10 to 12 carbons in length, with N-(3-hydroxydodecanoyl)-l-homoserine lactone (OHC12) being the most commonly encountered. Many strains are, however, capable of producing other AHLs, although knowledge of QS in Acinetobacter spp. 
is limited. One well-established mechanism of virulence gene regulation in diverse pathogens is quorum sensing (QS), which in many Gram-negative bacteria depends on the synthesis and perception of AHLs. Adjacent to the abaI gene in Acinetobacter there is an ortholog of the RsaM protein family. These are found in diverse beta- and gammaproteobacteria, including Burkholderia spp., Pseudomonas fuscovaginae, Halothiobacillus neapolitanus, and Acidithiobacillus ferrooxidans. The genome of AB5075 contains an AHL synthase gene (abaI) and a transcriptional regulator gene (abaR/ABUW_3774/ABUW_RS18375). Between abaR and abaI, a third gene is located, which we term here abaM (ABUW_3775). Despite the location of abaM adjacent to abaI and transcribed in the same direction, AbaM has only low (ca. 20 to 30%) sequence identity to orthologs present in Pseudomonas fuscovaginae and Burkholderia spp.
AHL production in the opaque variants of the AB5075 wild-type, abaM::T26, and abaI::T26 strains was quantified via liquid chromatography-tandem mass spectrometry (LC-MS/MS) during growth under static conditions, since AHL production appears to be enhanced by surface attachment in other A. baumannii strains. The abaM::T26 mutant produced significantly greater amounts of OHC12 than the wild type at each time point sampled (a difference between 100- and 875-fold). N-(3-Hydroxydecanoyl)-l-homoserine lactone (OHC10) was also detected at much lower concentrations in the abaM::T26 mutant throughout growth, but only in the 24-h sample in the wild type. No AHLs were detected in the abaI::T26 samples.
The surface motility of all three strains on 0.3% Eiken agar LS-LB plates was examined. Compared to the wild type (59.6\u2009\u00b1\u20090.7\u2009mm), the abaI::T26 mutant exhibited significantly reduced surface motility (36.5\u2009\u00b1\u20091.4\u2009mm), whereas the abaM::T26 mutant was significantly more motile (76.7\u2009\u00b1\u20092.4\u2009mm). 
The propensity of the wild-type, abaI::T26, and abaM::T26 strains to attach to abiotic surfaces was evaluated on polypropylene tubes. The abaM mutant formed \u223c3-fold more biofilm than did the wild type, whereas biofilm formation by the abaI mutant (opaque variant) was not significantly altered. The contribution of abaM and QS to AB5075 virulence was assessed using a G. mellonella larvae infection model. No differences in larval survival were observed between the wild type and the abaI::T26 mutant, whereas the virulence of the abaM::T26 mutant was completely attenuated. Overall, these results suggest that abaM is a negative regulator of surface motility and biofilm formation and is required for full virulence in G. mellonella.
Complementation of the abaM::T26 mutant with the abaM gene in trans (pMQ_abaM) restored surface motility and biofilm formation and reduced both OHC12 and OHC10 production by approximately 50%. However, complementation of the abaM mutation did not restore abaM::T26 virulence to wild-type levels.
To investigate the role of AbaM in A. baumannii AB5075, we performed transcriptomic profiling of AB5075 in comparison with the abaM::T26 and abaI::T26 mutants using RNA sequencing (RNA-seq), which was then validated for two key target genes via quantitative real-time PCR. For these analyses, we used total RNA extracted from cells grown for 18 h in static conditions, when maximum OHC12 levels are produced by the abaM mutant.
Compared to the wild-type strain, 88 genes were upregulated and 9 were downregulated in the abaM::T26 mutant (log2 fold change \u2265\u20091), including genes located immediately upstream of the csu operon, as well as genes coding for a flavohemoprotein, an uncharacterized transcriptional regulator, a thermonuclease, a sulfate permease, a toxic anion resistance protein, and seven hypothetical proteins (see Tables S3 and S4). The QS genes, coding for the transcriptional regulator AbaR and the AHL synthase AbaI, were both upregulated in the abaM::T26 mutant, which also differentially expressed genes encoding proteins involved in the stress response, iron acquisition, diverse metabolism and energy production, chaperones, protein folding, and antibiotic resistance, e.g., the class D beta-lactamase OXA-23 involved in resistance to carbapenems (see Table S4). 
Similarly, differentially regulated genes in the abaI mutant included diverse metabolic and energy production-related genes, as well as diverse genes coding for transcriptional regulators, stress response-related proteins, and membrane transport proteins (see Table S3). Moreover, some of the genes of the biosynthetic operon involved in the synthesis of acinetin 505 were differentially expressed. The RNA-seq results for the csuA/B and ABUW_3773 (the first gene of the acinetin 505 biosynthetic operon) genes were validated via quantitative real-time PCR.
To monitor abaM expression, an abaM promoter-luxCDABE operon fusion was constructed. This was introduced via a miniTn7 transposon into AB5075 and both abaI::T26 and abaM::T26 mutants, and the activity of the predicted promoter was measured by luminescence output in the presence or absence of OHC12. The activity of the abaM promoter varied significantly between the strains. In the abaM::T26 mutant, luminescence was approximately 40% higher than in the wild type, whereas the abaI::T26 mutant showed a 75% reduction compared to the parental strain, suggesting that, at least under the static growth conditions used in this study, AbaM exerts tight control over QS. This raises the question of when and under what conditions QS is active in AB5075, especially since the half-maximal responses for LuxR proteins activated by long-chain AHLs are in the 5 to 10\u2009nM range.
In many proteobacteria, a gene (X) located between the transcriptional regulator (R) and the synthase (I) genes is a negative regulator of QS. Examples of three different classes of X include rsaL, rsaM, and mupX. Promoter prediction tools, including BDGP (www.fruitfly.org), as well as our RNA-seq data, all predict the presence of putative \u221210 and \u221235 regions for abaI and abaM, respectively, suggesting that these genes are not cotranscribed. Similar findings have been reported for rsaM1 and rsaM2 from Burkholderia thailandensis. A putative lux box was identified 68\u2009bp upstream of the abaM start codon and 178\u2009bp downstream of the \u221235 and \u221210 promoter elements. This lux box is similar to those found upstream of abaI (CTGTAAATTCTTACAG) in both A. 
baumannii 5075 and A. nosocomialis M2. These results are consistent with an IFFL (incoherent feed-forward loop) circuit, although further work will be required to fully characterize its properties and control of the genes coregulated by QS and AbaM.
Characterization of the A. baumannii abaM mutant revealed enhanced surface motility and biofilm formation but reduced virulence compared to the wild type. Previous studies have shown that rsaM orthologues are required for full virulence in plants. AbaM controls both QS-dependent and QS-independent gene targets; a similar overlap has been noted for the RsaM regulon and its cognate QS system in P. fuscovaginae. Among the most differentially expressed genes in the abaI and abaM mutants compared to the wild type were those belonging to the csu operon (ABUW_1487-ABUW_1492/ABUW_RS07250-ABUW_RS07275). This operon encodes the proteins responsible for the synthesis of the Csu pilus, a type I chaperone-usher pilus involved in attachment and biofilm formation. In the abaI mutant, abaM was not differentially expressed at the late stationary time point chosen for the RNA-seq. Consequently, future work will be required to unravel these observations with respect to abaI and abaM regulation, particularly in the context of growth environment.
The consequences of the abaM and abaI mutations, which result in either increased or no AHL production, respectively, for the expression of genes such as the csu cluster can be explained as follows. In an abaI mutant (no AHLs), abaM expression is reduced, and hence csu expression is increased. In an abaM mutant, csu expression is also increased since AbaM is absent.
Strains were routinely grown in low-salt lysogeny broth (LS-LB). OHC12 was synthesized as described previously. For complementation, abaM and its flanking regions were amplified by PCR using the primers listed in Table S2. The PCR fragments were digested with BamHI and KpnI, ligated into the multiple cloning site (MCS) of pMQ557M, and introduced into abaM::T26 by electroporation. 
The stability of the vector pMQ557M and the abaM-complementing plasmid pMQ_abaM in both AB5075 and the abaM mutant was confirmed by repeated daily subculture and plating out on LB agar with or without hygromycin (125\u2009\u03bcg/ml) to determine viable counts as CFU/ml.
Plasmid pMQ557M (see Table S1) was obtained by digesting pMQ557 with PmlI (to remove the genes required for yeast replication) and religating the resulting large linear product. The abaR gene and the intergenic region between abaR and abaM (for the abaM fusion) or the region between abaR and abaI (for the abaI fusion) were amplified by PCR and ligated into pGEM-T Easy using the pGEM-T Easy Vector System (Promega). The resulting plasmids and the promoterless luxCDABE operon were digested with KpnI and BamHI and ligated in order to introduce the lux operon downstream of the predicted promoter of abaM or abaI. These constructs were transferred into the MCS of the miniTn7T in the pUC18T-miniTn7T-HygR plasmid (see Table S1) after digestion with NotI and PstI and ligation of the corresponding fragments.
The miniTn7T-based constructs were inserted into A. baumannii through four-parental conjugation. Briefly, phosphate-buffered saline-washed overnight cultures of the Escherichia coli DH5\u03b1 donor strain (containing pUC18T-mini-Tn7T_HygR_abaR_PabaM::lux or pUC18T-mini-Tn7T_HygR_abaR_PabaI::lux), the E. coli DH5\u03b1 helper strain (containing pUX-B13), the E. coli DH5\u03b1 mobilizable strain (containing pRK600), and the A. baumannii recipient strain were mixed in a 1:1:1:1 ratio and grown on LB agar prior to counterselection with hygromycin (125\u2009\u03bcg/ml for miniTn7 selection) and gentamicin (100\u2009\u03bcg/ml).
Strains carrying the miniTn7 transposon lux fusions were grown in a plate-reader over 24 h, and the optical density at 600 nm (OD600) and relative light units (RLUs) were recorded every 30\u2009min. 
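Plate-reader reporter data of this kind are commonly summarized as luminescence normalized to culture density (RLU/OD600). A minimal sketch with hypothetical readings; the `od_floor` guard and the synthetic growth and luminescence curves are illustrative assumptions, not values from the study:

```python
import numpy as np

def promoter_activity(rlu, od600, od_floor=0.01):
    """Normalize relative light units by OD600; od_floor guards against
    division by near-zero optical densities early in growth (arbitrary)."""
    rlu = np.asarray(rlu, dtype=float)
    od600 = np.asarray(od600, dtype=float)
    return rlu / np.maximum(od600, od_floor)

# hypothetical 24-h run sampled every 30 min (49 time points)
t = np.arange(0, 24.5, 0.5)
od = 0.05 + 0.95 / (1.0 + np.exp(-(t - 8.0)))   # logistic-like growth curve
rlu = 1e4 * od * (1.0 + 0.5 * np.sin(t / 3.0))  # made-up luminescence signal
activity = promoter_activity(rlu, od)
peak_time_h = t[np.argmax(activity)]            # time of maximal promoter activity
```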
When required, OHC12 was added at 200\u2009nM unless otherwise stated.
Strains to be tested were inoculated into 1.5-ml polypropylene microcentrifuge tubes in LS-LB with or without OHC12, followed by incubation under static conditions at 37\u00b0C for 24 h. Biofilms were quantified by staining with 0.25% crystal violet and extraction with ethanol, and the absorbance (A585) was recorded.
Surface motility was quantified as previously described on LS-LB plates solidified with 0.3% Eiken agar.
Cell-free supernatants from cultures grown in LS-LB under static conditions at 37\u00b0C were sterile filtered and extracted with acidified ethyl acetate. Extracts were evaporated to dryness and subjected to LC-MS/MS as previously described.
G. mellonella larvae (Trularv) were obtained from BioSystems Technology, Ltd., Devon, United Kingdom, and assays were performed as described previously.
Cells were grown for 18 h under static conditions prior to extracting total RNA using an RNeasy minikit (Qiagen). After treatment with DNA-free (Invitrogen), the absence of DNA contamination was confirmed using PCR, and the quality and quantity of the RNA samples were established using a 2100 Bioanalyzer (Agilent). Samples were sent to NovoGene for 150-bp paired-end sequencing on an Illumina platform and bioinformatic analysis.
Complementary DNA (cDNA) synthesis and qPCR were carried out using LunaScript RT Supermix and Luna Universal qPCR Master Mix (New England BioLabs), respectively. The oligonucleotides used for qPCR are listed in Table S2, and qPCR was carried out in triplicate using a 7500 real-time PCR system (Thermo Fisher). Negative controls lacking template or RNA incubated without reverse transcriptase were included. The housekeeping gene rpoB was used as the endogenous control for normalization.
cDNA was amplified using Q5 high-fidelity polymerase (New England Biolabs) with specific primers annealing in the coding region of each gene. Genomic DNA, extracted with a DNeasy blood and tissue kit (Qiagen), was used as a positive control. 
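Relative transcript levels from rpoB-normalized qPCR of this kind are conventionally computed with the 2^-\u0394\u0394Ct method; the text does not spell out the exact calculation used, so the following is a generic sketch with hypothetical triplicate Ct values:

```python
import numpy as np

def fold_change_ddct(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt relative expression: the target gene is normalized to a
    reference gene (e.g., rpoB), test condition vs. control condition."""
    dct_test = np.mean(ct_target_test) - np.mean(ct_ref_test)
    dct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    return 2.0 ** -(dct_test - dct_ctrl)

# hypothetical triplicate Ct values for a target gene, mutant vs. wild type
fc = fold_change_ddct([22.1, 22.0, 21.9], [18.0, 18.1, 17.9],
                      [24.0, 24.1, 23.9], [18.0, 18.0, 18.0])
# fc > 1 indicates higher expression in the test condition
```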
The PCR products were run in a 1.5% agarose electrophoresis gel before imaging under UV light using a Gel Doc XR+ Imager (Bio-Rad).
Bacterial sequencing data have been deposited in NCBI's Gene Expression Omnibus and are available under accession number GSE151925."} +{"text": "The Guangxi Zhuang Autonomous Region harbors some of the most abundant aquatic biodiversity in China, and it is a hotspot of global biodiversity research. In the present study, we explored the diversity, distribution, and biogeography of freshwater fishes in Guangxi. Our results showed that 380 species of freshwater fishes were recorded in Guangxi; the species diversity from northwest to southeast gradually decreased for most Sub\u2212basins; the spatial turnover component was the main contributor to beta diversity; and the freshwater fish system belonged to the South China division in the Southeast Asiatic subregion of the Oriental region.
The species composition and distribution of diversity were analyzed via the G-F index, taxonomic diversity index, and beta diversity index. Results showed that 380 species of freshwater fishes were recorded in this region, belonging to 158 genera in 43 families and 17 orders in 2 phyla, in which 128 species of endemic fishes and 83 species of cavefish accounted for 33.68% and 21.84%, respectively. The species diversity from northwest to southeast gradually decreased for most Sub\u2212basins. The G-F index has generally risen in recent years. The taxonomic diversity index showed that the freshwater fish taxonomic composition in Guangxi is uneven. The spatial turnover component was the main contributor to beta diversity. A cluster analysis showed that the 12 Sub\u2212basins in the study area could be divided into four groups, and the phylogenetic relationships of freshwater fishes in Guangxi generally reflect the connections between water systems and geological history. The freshwater fish system in Guangxi, which belonged to the South China division in the Southeast Asiatic subregion of the Oriental region, originated in the early Tertiary period. 
The results will provide the information needed for freshwater fish resource protection in Guangxi and a reference for promoting the normalization of fish diversity conservation in the Pearl River Basin and other basins.
The Guangxi Zhuang Autonomous Region has the largest number of cavefish species in the world and is a global biodiversity hotspot. In this study, a species list of freshwater fishes in 12 Sub\u2212basins of Guangxi was compiled systematically. Moreover, the species composition and distribution of the diversity were analyzed via the G-F index (generic family diversity index), the taxonomic diversity indices, and beta diversity.
Freshwater fishes are among the taxa most affected by human activities worldwide. The Guangxi Zhuang Autonomous Region (hereinafter referred to as Guangxi) is located at the southern border of China, and its rivers belong to four major basins spanning three climatic zones with well-developed karst topography. From the first report on a new genus and species (Sinohomaloptera kwangsiensis) of Guangxi in the 1930s to the publication of Freshwater Fishes of Guangxi (second edition) in 2006, a total of 290 species and subspecies of freshwater fishes in Guangxi were recorded. Subsequent compilations include Cave Fishes of Guangxi, China, Fishes of the Pearl River, and Investigation and Research on Main River Fish Resources in Guangxi of the Pearl River Basin.
The average taxonomic distinctness index (\u0394+) and the taxonomic difference variation index (\u039b+) were used to calculate freshwater fish diversity in Guangxi in 2006 and 2021. The freshwater basins in Guangxi were divided into 12 Sub\u2212basins by combining topography, geomorphology, and administrative division. 
ArcMap was used to map the 12 Sub\u2212basins. The Genus Family index (G-F index), derived from the Shannon diversity index, has been successfully used to assess bird and mammal biodiversity. The G-F index was calculated based on biodiversity values at the genus level (G index) and family level (F index), as follows: the F index (DF) is the sum, over all families, of the Shannon diversity of genera within each family; the G index (DG) is the Shannon diversity of genera across all species; and the G-F index = 1 \u2212 DG/DF.
Taxonomic distinctness was proposed by Warwick and Clarke, in which the average taxonomic distinctness (\u0394+) and the variation in taxonomic distinctness (\u039b+) are used to evaluate the taxonomic distance between species across the hierarchical taxonomic tree, as follows.
Average taxonomic distinctness: \u0394+ = [\u2211\u2211_{i<j} \u03c9ij]/[S(S \u2212 1)/2], where \u03c9ij is the distinctness weight given to the path length linking species i and j in the hierarchical classification, and S is the total number of fish species in the survey.
Variation in taxonomic distinctness: \u039b+ = [\u2211\u2211_{i<j} (\u03c9ij \u2212 \u0394+)^2]/[S(S \u2212 1)/2].
\u0394+ and \u039b+ were tested for departure from expectation using randomization tests. A randomization test with 10,000 random selections was used to detect the expected values of \u0394+ and \u039b+ derived from the species pool (master list), enabling us to test the significance of departure between the observed and expected values of the two indices. Such plots are described as confidence funnel plots, where degraded sites are assumed to fall below the lower 95% confidence limits, while reference sites should be located within the 95% confidence limits. The calculations of \u0394+ and \u039b+ and the randomization test were computed using the TAXDTEST procedure in PRIMER 5.0.
In this work, the classification of fish was divided into five levels: class, order, family, genus, and species. The weights of the path lengths for species belonging to the same phylum but not the same class; the same class but not the same order; the same order but not the same family; the same family but not the same genus; and the same genus but not the same species were 83.333, 66.667, 50.000, 33.333, and 16.667, respectively. 
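The G-F index described above can be computed directly from a species checklist. The sketch below assumes the standard Jiang and Ji formulation (F index as the summed within-family Shannon diversity of genera, G index as the genus-level Shannon diversity over all species, and G-F = 1 \u2212 DG/DF); the toy checklist is hypothetical, and genus names are assumed unique across families:

```python
import math
from collections import Counter

def gf_index(species):
    """species: list of (family, genus) tuples, one entry per species.
    Returns (DF, DG, G-F index)."""
    fam_gen = Counter(species)             # species count per (family, genus)
    fam = Counter(f for f, _ in species)   # species count per family
    gen = Counter(g for _, g in species)   # species count per genus
    n = len(species)

    # F index: sum over families of the Shannon diversity of their genera
    df = 0.0
    for f, nf in fam.items():
        for (f2, g), k in fam_gen.items():
            if f2 == f:
                p = k / nf
                df -= p * math.log(p)
    # G index: Shannon diversity of genera over all species
    dg = -sum((k / n) * math.log(k / n) for k in gen.values())
    gf = (1.0 - dg / df) if df > 0 else 0.0   # monogeneric-family edge case
    return df, dg, gf

# hypothetical checklist: 2 families, 2 genera each, 2 species per genus
checklist = ([("A", "a1")] * 2 + [("A", "a2")] * 2 +
             [("B", "b1")] * 2 + [("B", "b2")] * 2)
df, dg, gf = gf_index(checklist)
```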
Beta diversity represents the differences in species composition between communities, determined by species turnover (species replacement) and nestedness (richness difference). To quantify it, we used the S\u00f8rensen index (\u03b2sor), decomposed into a species spatial turnover component (\u03b2sim) and a nestedness component (\u03b2sne), as well as the Jaccard index (\u03b2jac), decomposed into a species spatial turnover component (\u03b2jtu) and a nestedness component (\u03b2jne).
S\u00f8rensen index: \u03b2sor = (b + c)/(2a + b + c), \u03b2sim = min(b, c)/[a + min(b, c)], \u03b2sne = \u03b2sor \u2212 \u03b2sim. Jaccard index: \u03b2jac = (b + c)/(a + b + c), \u03b2jtu = 2min(b, c)/[a + 2min(b, c)], \u03b2jne = \u03b2jac \u2212 \u03b2jtu, where a is the number of species shared by two Sub\u2212basins and b and c are the numbers of species unique to each.
A dataset covering all species recorded in each Sub\u2212basin was then constructed, and similarity analyses were carried out based on a logarithmic transformation of the number of each species in each Sub\u2212basin. Pairwise similarities among Sub\u2212basins were then computed to create a similarity coefficient matrix. The hierarchical cluster and the furthest-neighbor method with the Bray\u2013Curtis similarity were then used for the cluster analysis based on the matrix. The above analyses were calculated using PRIMER 5.0.
We performed Mantel tests and partial Mantel tests with 999 permutations (significance at p < 0.05). These analyses were performed in R 4.1.0 using the BETAPART and VEGAN packages.
A total of 380 freshwater fish species, belonging to 17 orders, 43 families, and 158 genera, have been recorded in the freshwater and estuarine areas of Guangxi. Among these, 128 endemic species, 83 species of cavefish, 49 threatened species, and 18 alien species were recorded, accounting for 33.68%, 21.84%, 12.89%, and 4.74% of the total freshwater fishes in Guangxi, respectively. The updated list added 94 newly recorded species, including 13 newly recorded alien species and 81 newly described species. The natural distribution of native freshwater fish (estuarine fish and migratory fish were excluded) in Guangxi is 342 species, 129 genera, 19 families, and 5 orders. 
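The pairwise partition of beta diversity described in the methods (Baselga's decomposition, as implemented in the betapart package) can be computed from presence\u2013absence data with a few set operations; a minimal sketch with hypothetical species lists:

```python
def beta_partition(site1, site2):
    """Pairwise Baselga partition of beta diversity from two presence
    lists. Returns Sorensen- and Jaccard-family dissimilarities and their
    turnover/nestedness components."""
    s1, s2 = set(site1), set(site2)
    a = len(s1 & s2)                  # shared species
    b = len(s1 - s2)                  # unique to site 1
    c = len(s2 - s1)                  # unique to site 2
    m = min(b, c)
    beta_sor = (b + c) / (2 * a + b + c)
    beta_sim = m / (a + m)            # spatial turnover (Simpson dissimilarity)
    beta_sne = beta_sor - beta_sim    # nestedness-resultant component
    beta_jac = (b + c) / (a + b + c)
    beta_jtu = 2 * m / (a + 2 * m)
    beta_jne = beta_jac - beta_jtu
    return {"sor": beta_sor, "sim": beta_sim, "sne": beta_sne,
            "jac": beta_jac, "jtu": beta_jtu, "jne": beta_jne}

# hypothetical nested example: the second fauna is a subset of the first,
# so all dissimilarity comes from nestedness and turnover is zero
res = beta_partition(["sp%d" % i for i in range(10)],
                     ["sp%d" % i for i in range(6)])
```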
The LGR had the highest number of species (185), genera (96), and families (17), while BDR had the lowest. The number of freshwater fish species in each Sub\u2212basin decreased as the latitude decreased, generally showing a declining trend from northwest to southeast.
The top 8 genera ranked by the number of fish species are Sinocyclocheilus, Troglonectes, Triplophysa, Pseudobagrus, Microphysogobio, Acrossocheilus, Onychostoma, and Parabotia. In terms of frequency of distribution, Schistura fasciolatus, Opsariichthys bidens, and Hemibarbus maculatus occurred in all 12 Sub\u2212basins, followed by Traccatichthys pulcher, Hemiculter leucisculus, Pseudohemiculter dispar, Hemibarbus labeo, Pseudorasbora parva, Squalidus argentatus, Acheilognathus tonkinensis, Puntius semifasciolatus, Onychostoma gerlachi, Cyprinus rubrofuscus, Carassius auratus, Silurus asotus, Pseudobagrus crassilabris, Hemibagrus guttatus, Mastacembelus armatus, Rhinogobius giurinus, Channa maculata, and Channa asiatica, occurring in 11 Sub\u2212basins. The fewest, Paranemachilus genilepis, Sinocyclocheilus guilinensis, and Bagarius yarrelli, together with 143 other species, occurred in only one Sub\u2212basin.
There are 110 genera of fishes in Guangxi distributed in two or more water systems at the same time; the remaining 19 genera are distributed in only a single water system. The genera distributed only in HSR are Micronemacheilus, Yunnanilus, Lanlabeo, Pseudogyrinocheilus, Hongshuia, Discocheilus, and Paraprotomyzon; the genera distributed only in LGR are Stenorynchoacrum and Yaoshania; Tanichthys in XR only; Atrilinea in LR only; Zuojiangia, Sinigarra, and Prolixicheilus in ZR only; Parazacco, Pogobrama, Anabas, and Mystus in OR only; and lastly, Bagarius in BDR only. 
In addition, there are 83 species of cavefish in Guangxi, belonging to 14 genera, 4 families, and 2 orders. Sinocyclocheilus is the largest group in Cyprinidae, with 34 species, followed by 14 species of Troglonectes in Nemacheilidae, 11 species of Triplophysa, 5 species of Heminoemacheilus, 5 species of Oreonectes, 3 species of Paranemachilus, 3 species of Protocobitis, 2 species of Schistura, and 1 species each of Traccatichthys, Micronemacheilus, Yunnanilus, Bibarba, Parasinilabeo, and Xiurenbagrus. These cavefish are distributed in only 6 Sub\u2212basins, of which 39 species in HSR account for 50.65% of the total; 25 species in LR account for 32.47%; 7 species in LGR and 7 species in YR each account for 9.09%; five species in HR account for 6.49%; and four species in ZR account for 5.19%.
The F index, G index, and G-F index in each Sub\u2212basin were calculated. In general, the F index (14.74) and G-F index (0.70) of freshwater fishes in 2021 were higher than those in 2006 (13.37 and 0.67), while the G index (4.43) in 2021 was lower than that in 2006 (4.47). The average taxonomic distinctness (\u0394+) for 2006 and 2021 ranged from 40.2 to 48.0 and from 40.6 to 48.3, with theoretical mean values of 42.8 and 43.2, respectively. The variation in taxonomic distinctness (\u039b+) for 2006 and 2021 ranged from 333.1 to 484.2 and from 381.2 to 480.9, with theoretical mean values of 431.3 and 434.5, respectively.
The spatial turnover and replacement components of beta diversity were greater than the nestedness and richness difference components. BDR and HR had high \u03b2sor and \u03b2jac. Moreover, high spatial turnover and replacement components (0.40 \u00b1 0.10 and 0.57 \u00b1 0.10) in XZR and high nestedness and richness difference components (0.28 \u00b1 0.18 and 0.31 \u00b1 0.18) in LGR were observed. Traccatichthys taeniatus, Garra imberba, and Bagarius yarrelli were distributed only in BDR among the Sub\u2212basins of Guangxi. 
The results of the cluster analysis of species similarity in the 12 Sub\u2212basins, based on the Jaccard similarity coefficient, showed that they could be divided into four groups when the similarity coefficient was 50%. Group 1 was BDR of the Red River system. Group 2 was HJ, and there was no endemic genus in this Sub\u2212basin. Group 3 included the OR, NLR, XR, YYR, YR, and ZR, among which the OR and NLR, southern rivers flowing into the Beibu Gulf, were clustered together; ZR and YR were clustered together, with Paranemachilus distributed only in ZR; and YR, XR, and YYR, on the main streams of the Xijiang River basin, were clustered together. Lastly, Group 4 included HSR, LR, LGR, and XZR; Acrossocheilus, Onychostoma, Microphysogobio, Sinocyclocheilus, Parasinilabeo, Heminoemacheilus, Oreonectes, Troglonectes, and Triplophysa were the main reasons for the difference in fish composition between this group and the other groups.
Some of the variables examined by the Mantel tests showed no significant correlation (p > 0.05) across the 12 Sub\u2212basins. The correlation between \u03b2sor (\u03b2jac) and the difference in average altitude was significant (p < 0.05), and the correlation between \u03b2sim and the difference in average precipitation was also significant (p < 0.05).
The updated list of freshwater fishes in Guangxi contains 380 fish species, accounting for 23.53% of the freshwater fish species in China (www.fishbase.org, accessed on 11 May 2022), which is the second-highest among all provinces in China. Sinocyclocheilus, Hongshuia, Oreonectes, Troglonectes, and Triplophysa are the most strongly differentiated genera of fish species in the Yunnan\u2013Guizhou Plateau. The movement of fishes between countries or river systems is making freshwater fish invasion increasingly serious.
The karst habitats within the territory of Guangxi are diverse and complex. With many underground rivers and lakes, karstic waters harbor endemic species, providing special external conditions for fish. Poor light is a good example of such conditions, to which cavefish are specially adapted. 
About 85% of cavefish are distributed in the Sub−basins of northwest Guangxi, such as HSR, LR, and LGR, and genetic exchange between these cave populations is practically difficult. The G-F index of freshwater fishes in Guangxi in 2021 (0.70) was higher than that in 2006 (0.67), indicating that the diversity of freshwater fishes in Guangxi at the family and genus levels is becoming greater over time. Some studies have suggested that the G-F index is related to the number of species. The DG index (4.43) was lower than that of 2006 (4.47); the decrease was related to the increasing number of new species described in existing genera of freshwater fishes in Guangxi in recent years, such as Sinocyclocheilus (17 new species), Triplophysa (8 new species), and Troglonectes (12 new species). In contrast, the DF index (14.74) was higher than that of 2006 (13.37); the rise was related to the addition of newly recorded and newly established genera of freshwater fishes, such as Tanichthys, Lanlabeo, Zuojiangia, Sinigarra, and Stenorynchoacrum. Three Sub−basins had high G-F indices (0.64), consistent with their high numbers of taxonomic orders and species at the family and genus levels. The Δ+ and Λ+ of freshwater fishes in 2021 (43.2 and 434.5) were higher than those in 2006 (42.8 and 431.5); the upward trend of Δ+ indicates that fish species have become more distantly related to each other and that taxonomic diversity is increasing, while the increase of Λ+ indicates that the distribution of fish species across classification elements has become more uneven. These changes were related to the establishment of new genera and to newly recorded exotic fishes among the freshwater fishes of Guangxi in recent years. It is generally accepted that, in communities of equal species composition, the biodiversity of communities whose species belong to multiple genera is higher than that of communities whose species belong to a single genus.
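The DG, DF, and G-F indices discussed above can be computed from a genus-within-family species tally. The sketch below uses one common formulation of these indices (Shannon-type entropies at the genus and family levels, with G-F = 1 − DG/DF); the checklist is a toy example, not the Guangxi data:

```python
# Toy sketch of the G, F, and G-F diversity indices (one common formulation).
import math

# {family: {genus: species_count}} -- hypothetical tallies
taxa = {
    "family_A": {"genus_a1": 2, "genus_a2": 2},
    "family_B": {"genus_b1": 2, "genus_b2": 2},
    "family_C": {"genus_c1": 2, "genus_c2": 2},
}

def g_index(taxa):
    """DG: Shannon entropy of species proportions across all genera."""
    total = sum(n for genera in taxa.values() for n in genera.values())
    return -sum((n / total) * math.log(n / total)
                for genera in taxa.values() for n in genera.values())

def f_index(taxa):
    """DF: sum over families of the within-family genus-level entropy."""
    df = 0.0
    for genera in taxa.values():
        s_k = sum(genera.values())
        df += -sum((n / s_k) * math.log(n / s_k) for n in genera.values())
    return df

def gf_index(taxa):
    """G-F index: 1 - DG/DF (higher = richer genus/family-level structure)."""
    return 1 - g_index(taxa) / f_index(taxa)
```

With the toy tallies, DG = ln 6 ≈ 1.79, DF = 3 ln 2 ≈ 2.08, and G-F ≈ 0.14. The relationship matches the values reported in the text, since 1 − 4.43/14.74 ≈ 0.70.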
Meanwhile, OR and XZR showed higher Δ+ values in 2021, which means the classification levels of species in these Sub−basins are relatively high. The Δ+ values of YR and NLJ were below the average of the funnel in 2021, indicating that the fish population structure in these Sub−basins is relatively simple; this may be because the homogeneous habitat of these Sub−basins, where no karst landform was formed, is unable to provide conditions for freshwater fish differentiation, which greatly affects fish species composition and taxonomic diversity. The 12 Sub−basins were divided into four groups based on Jaccard's fish similarity coefficient: BDR of the Red River system; HJ, without endemic genera; OR, NLR, XR, YYR, YR, and ZR; and lastly HSR, LR, LGR, and XZR. The occurrence of these groupings may be associated with geological events, such as plate tectonic movements. Species such as Parabotia parva, Parazacco fasciatus, Pogobrama barbatula, Xenocyprioides parvulus, Squalidus atromaculatus, Anabas testudineus, and Channa nox occur in the rivers that flow into the Beibu Gulf; therefore, these rivers (OR and NLR) are clustered into one subgroup. With the continuation of neotectonic movements, the Guangxi basin first rose extensively; however, it was still influenced by the rising of the Yunnan–Guizhou Plateau, so the basin in the northwestern part of Guangxi rose more. As a result, the horizontal plane slowly inclined from northwest to southeast and gradually formed a continuous river system. Under these conditions, Sinocyclocheilus, Heminoemacheilus, Oreonectes, Troglonectes, and Triplophysa were adaptively evolving, and they became the main genera responsible for the differences in fish composition between this group and the other groups.
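The Δ+ funnel comparison above rests on the average taxonomic distinctness (Δ+) and its variation (Λ+). The sketch below is an illustrative Clarke–Warwick-style calculation with equal, arbitrary step lengths between taxonomic levels and a toy species list, not the study's weighting or data:

```python
# Toy sketch of average taxonomic distinctness (Delta+) and its variation (Lambda+).
from itertools import combinations

# species -> (genus, family, order); hypothetical assignments
taxonomy = {
    "sp1": ("genus_x", "family_p", "order_m"),
    "sp2": ("genus_x", "family_p", "order_m"),
    "sp3": ("genus_y", "family_q", "order_m"),
    "sp4": ("genus_z", "family_r", "order_n"),
}

def omega(a, b):
    """Path weight between two species (1 = same genus ... 4 = different order)."""
    ga, fa, oa = taxonomy[a]
    gb, fb, ob = taxonomy[b]
    if ga == gb:
        return 1
    if fa == fb:
        return 2
    if oa == ob:
        return 3
    return 4

weights = [omega(a, b) for a, b in combinations(taxonomy, 2)]
delta_plus = sum(weights) / len(weights)                                   # Delta+
lambda_plus = sum((w - delta_plus) ** 2 for w in weights) / len(weights)   # Lambda+
```

A lower Δ+ means the species present are, on average, closer relatives; Λ+ measures how unevenly those pairwise taxonomic distances are spread across the assemblage.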
Because of the low base and deep depression of the Xijiang Valley, the HSR, LR, and LGR were attracted toward converging into the Xijiang River. The main section of the Xijiang River was developed by the Yanshan movement in the late Mesozoic, when it was not connected with the water system of the Guangxi Basin. By the Tertiary Himalayan movement, the Yunnan–Guizhou Plateau rose and the Xijiang Valley (now YYR) sank. Genera such as Coreius, Ancherythroculter, and Pseudobrama, and the endemic species of the Yangtze River, including Chanodichthys oxycephaloides, Microphysogobio tungtingensis, Rectoris luxiensis, Sinilabeo tungting, Leptobotia tchangi, Leptobotia tientaiensis hansuiensis, Parabotia banarescui, Lepturichthys fimbriata, and Sinogastromyzon hsiashiensis, do not have distributions in the Xiangjiang River and Zishui River in Guangxi. The freshwater fish system of China belongs to the Oriental region and the Holarctic region, while Guangxi belongs entirely to the Oriental region. Therefore, the fish system of the Xiangjiang River and Zishui River in Guangxi should be assigned to the South China division under the Southeast Asiatic subregion. Hence, the fish system of the Guangxi water system is classified as the South China division under the Southeast Asiatic subregion of the Oriental region. Knowledge of beta diversity patterns can go beyond the systematic conservation planning method, which considers only the location of a protected area relative to natural, physical, and biological patterns.
Furthermore, due to the lack of awareness about Guangxi's freshwater ecosystem, the freshwater ecosystem in Guangxi is facing ecological and environmental problems, such as habitat fragmentation, habitat loss, barriers to fish migration channels, and biological invasion, driven by the construction of water conservancy projects and overfishing in recent years. Therefore, attention should be paid to the restoration of the aquatic ecosystem in northwestern Guangxi, and the monitoring and protection of freshwater fish in this area should be strengthened. Obviously, a great deal of basic research work urgently needs to be done to establish the status and evaluation grade of the cavefishes in Guangxi. To determine a protected species, the first step is to complete its biological survey and to determine its population number, age composition, population structure, growth, fecundity, breeding period, etc., and then to design, organize, and implement conservation plans. The following general findings were generated from the present study: (1) There are 380 species of freshwater fish in Guangxi, including 342 native freshwater fish, 128 endemic species, 83 species of cavefish, 49 threatened species, and 18 alien species. The number of freshwater fish species in each Sub−basin generally showed a declining trend from northwest to southeast. (2) Turnover and replacement components contributed more to beta diversity in Guangxi. (3) The freshwater fish system in Guangxi belongs to the South China division in the Southeast Asiatic subregion of the Oriental region.
The 12 Sub\u2212basins in Guangxi could be divided into four groups, and the phylogenetic relationships of freshwater fishes in this region generally reflect the connections between water systems and geological history."} +{"text": "The field of science communication has grown considerably over the past decade, and so have the number of scientific writings on what science communication is and how it should be practiced. The multitude of theoretisations and models has led to a lack of clarity in defining science communication, and to a highly popularised\u2014and theorised\u2014rhetorical shift from deficit to dialogue and participation. With this study, we aim to remediate the absence of research into what science communication is, for scientists themselves. We also investigate whether the transition towards dialogue and participation is reflected in the goals scientists identify as important to their science communication efforts, both in a general and a social media context. For this, we analyse survey data collected from scientists in the Netherlands using thematic qualitative analysis and statistical analysis. Our results reveal six main dimensions of science communication as defined by our respondents. The 584 definitions we analyse demonstrate a focus on a one-way process of transmission and translation of scientific results and their impacts towards a lay audience, via mostly traditional media channels, with the goals of making science more accessible, of educating audiences, and of raising awareness about science. In terms of the goals identified as most important by scientists in the Netherlands, we find goals aligned with the deficit and dialogue models of science communication to be the most important. Overall, our findings suggest we should be cautious in the face of recent claims that we live in a new era of dialogue, transparency, and participation in the realm of science communication. 
In recent years, science communication has received increasing attention from politics and society, and it has developed into a vital part of academic activity. As a growing area of practice and research, science communication over the last 50 years has developed into a field primarily concerned with answering questions related to science and society, science in the media, and the role of science journalists. One influential definition states that science communication "is defined as the use of appropriate skills, media, activities, and dialogue to produce one or more of the following personal responses to science: Awareness, Enjoyment, Interest, Opinion-forming, and Understanding." The definitions exemplified above are by no means the only ones, nor do they include all involved actors beyond scientists and the lay public, or all societal contexts. However, an exhaustive review of all the science communication definitions is beyond the scope of this article. Rather than attempt to further portray the breadth of science communication definitions, we would like to highlight the importance of exploring scientists' personal science communication definitions.
While some studies have attempted to understand scientists' choices and objectives using the Theory of Planned Behavior, a survey conducted in the UK by Illingworth and colleagues explored these questions further. Empirical assessment departing from the scientists' personal science communication definitions can foster a broader understanding and deeper insights into the field. The scientific community has often been primarily focused on disseminating information and building public knowledge about science. Over the last decades, a rhetorical shift has taken place in the area of science communication, in that calls for dialogue between scientists and non-scientists, as well as calls for more participatory approaches to science communication, have taken precedence over the scientific literacy deficits rhetoric. Together with this rhetorical shift from 'deficit' towards 'dialogue' and 'participation', scholars have developed a number of science communication models ranging from transmission of information to dialogue and public engagement. Such models are said to be frameworks for understanding what the "problem" is, how to measure the problem, and how to address the problem (p. 13). Historically, science communication had been described as a process of information transmission, which assumed "public deficiency, but scientific sufficiency". The perspective of the deficit model is perfectly exemplified by the UK Chief Scientific Adviser, in a report to the House of Lords published in 2000, which purports that "difficulties in the relationship between science and society are due entirely to ignorance on the part of the public" and that "with enough public-understanding activity, the public can be brought to greater knowledge, whereupon all will be well." (p. 25).
Scholars have theorised that the deficit model rests on condescending claims about the public's ignorance, and such criticism contributed to the emergence of the dialogue model. The dialogue model is characterised by three main features, including (1) engagement in a dialogue with the public to help explain the science and (2) listening to the public. The emergence of the dialogue model, together with the increasing demand for scientists' involvement in public discussions and with policy documents shifting their language from 'communication' to 'dialogue,' has created a powerful narrative in which this type of science communication is portrayed as inherently superior. However, in recent years, the narrative shift has evolved even further to include public participation or public engagement. Public participation (or public engagement) models have emerged as a direct attempt to enhance social trust in science policy. These models focus on a series of activities—consensus conferences, citizen juries, deliberative technology assessments, science shops, deliberative polling, etc.—driven by the involvement of the public. Participation models signal a more obvious shift in power than the dialogue model and emphasise the role of the public and other societal stakeholders in reflecting upon, sharing knowledge about, creating new knowledge, and making decisions about science that affects society. Scholars have posed a number of criticisms of participation models due to their focus on the process of science over any substantive content, their limitations in terms of the numbers of people they serve, and their sometimes exhibiting an 'anti-science' bias given their focus on lay/local knowledge over scientific knowledge. The three science communication models discussed above, driven by different assumptions, provide only schematic tools for theorising about science communication activities.
While the deficit model centres around filling a knowledge vacuum and the dialogue model on an exchange of perspectives between the sciences and the public, the participatory model centres on the 'democratisation of science' through some form of empowerment and political engagement of the public. The emergence of these models (and many others), together with the strong rhetorical shift from deficit to dialogue and participation—both in scholarly and policy discourses—seems to indicate a unidirectional transition process. However, Bucchi and Trench suggest otherwise. While the radical transition—from deficit to dialogue and participation—and the concurrent rhetorical shift have been deemed highly implausible over such a short period of time, very little empirical evidence exists on the matter. In this article, we aim to address this research gap by investigating the goals scientists focus on in their science communication practices and the alignment of these goals with the three models discussed thus far. To do so, we build on the work of Metcalfe, which lists goals aligned with each of the three models. When discussing contemporary science communication practices, one must not omit to highlight the radical and important changes brought about by the emergence of social media channels. On a daily basis, millions of people all over the world are constantly consuming and creating content through social media platforms. Considering the popularity of such platforms, it is easy to see that information disseminated through these channels can reach millions in a matter of minutes, as Van Eperen and Marincola acknowledge. Beyond reaching engaged and diverse publics—researchers, the general public, government, and all other stakeholders—social media platforms have the potential to enable multi-vocal, multi-way communication.
When it comes to science communication, although social media seems to be the ideal environment for two-way and multi-vocal communication models, the few studies that have investigated online science communication have found that this is only one part of the story. Science communication practitioners and science organisations use social media for one-way message dissemination more often than they truly engage with their publics. Just as in the case of the proposed radical transition of science communication—from deficit to dialogue and participation—the belief that social media affordances foster unprecedented opportunities for two-way and multi-vocal science communication is becoming more or less an accepted fact. But, as the few studies that have investigated social media use for science communication have shown, one-way dissemination remains prevalent. Considering the promises of social media as a participatory environment where already-engaged audiences can easily be reached, we also aim to investigate whether those actively using social media for their science communication efforts may approach science communication from a more participatory oriented perspective. To do so, we compare social media users and non-users on their general science communication goals. In sum, the aims of our work are fourfold. First, we aim to contribute to a broader understanding and deeper insights into what science communication is, beyond conceptual definitions proposed by the literature. Moving past the unnecessarily narrow and rigid science communication definitions proposed by various scholars, we aim to grasp the perspectives of some of the core actors in the field, scientists themselves. Second, we set out to investigate whether the highly popularised—and theorised—rhetorical shift from deficit to participation is reflected in the goals scientists identify as important to their science communication efforts.
In other words, we aim to understand scientists' goals and which of the models are at work in their science communication activities. Third, we aim to provide a stepping stone towards a better understanding of why a deficit-model way of thinking remains dominant in social media science communication efforts. Lastly, we aim to uncover whether those actively using social media for their science communication efforts may approach science communication from a more dialogue- or participatory-oriented perspective, rather than the deficit perspective shown by previous studies. The survey data for this study were collected in accordance with the guidelines of the Association of Universities in The Netherlands and with ethical guidelines. Participants in our survey gave informed consent before their answers were recorded. The first page of our survey provided a full explanation of the scope and aims of our study, and it informed respondents that participation is anonymous and voluntary. Furthermore, at the end of the survey, respondents were informed that by clicking the submit button they consent for their answers to be included in our study and that they could withdraw from the study at any time. To ensure the anonymity of the respondents, personal identifiers such as name, e-mail address, physical address, and organization name were not collected. Our respondents had the opportunity to provide us with their social media information for a later stage of this project and to leave an email address for a chance at winning a gift card. Their social media information and email addresses are stored separately from their survey responses and will not be included in any reports using these survey data. The data included in this study come from a large-scale survey we conducted in the Netherlands between April 1st and May 31st, 2021.
The survey was specifically addressed to researchers, at any career level, working in any public research or technical university or research institute in the Netherlands. Our survey was disseminated via multiple channels in an attempt to collect a representative sample of researchers in the Netherlands. Invitations to participate, which included an anonymous link or a QR code, were placed in several university-based magazines (online and in print), newsletters, and intranet pages. We also posted similar invitations on Twitter, Facebook, and LinkedIn. Lastly, for every university in the Netherlands, the survey was disseminated via email, based on an email address list generated using the public online directories provided on each university's website, for each of their faculties. Alongside the universities, five publicly funded research institutes were also targeted. While our email address list included over 20,000 emails, it did not capture those scientists whose contact information may not have been provided online or was not updated in the university contact directories. In order to limit inconveniencing our potential respondents, no reminder emails were sent after the first invitation email. Of the total of 584 respondents who completed our survey, 2 accessed the survey via the QR code, 10 via social media links, and 572 via email invitation links. While a precise response rate cannot be calculated, the completion rate for those who started the survey was 59.65%. The first 17 questions were answered by all participants (N = 584), while the remaining 15 focused on science communication in the context of social media. These 15 questions were answered only by those participants who indicated that they use social media for science communication purposes (N = 314). The full survey—comprised of 32 questions delivered with a mixed-method approach (i.e.
open-ended and closed)—asked participants a range of questions related to their understanding of and involvement in science communication activities. Among the general science communication questions was an open-ended question asking what science communication is. In answering this question, participants were instructed to use either sentences or keywords to tell us what their personal definition of science communication is. To analyze the ways in which our respondents define science communication, we adopted a semi-inductive, qualitative thematic analysis. A widely used method in qualitative research, thematic analysis is a method to identify, analyze, and report patterns found in data. Following the thematic analysis phases described by Braun and Clarke, initial codes and themes were generated from the data. Lastly, the two authors discussed and reviewed the initial themes to ensure clear and identifiable distinctions between them as well as meaningful coherence for each theme. To further confirm the reliability of the themes, Holsti's index was calculated for each of them: Type of communication (87.7%), Audience (93.2%), Content (87.0%), Media (96.4%), Goals (77.2%), and Impact (79.2%). After this review, all six themes initially identified were kept and included in our analysis. To assess the goals respondents find most important to their science communication efforts and how these goals align with the conceptualisations proposed by the deficit, dialogue, and participation models, we used a 28-item scale.
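The per-theme reliability check above uses Holsti's index, which is simply 2M/(N1 + N2), where M is the number of coding decisions on which the two coders agree. A minimal sketch with hypothetical codings:

```python
# Minimal sketch of Holsti's intercoder reliability index: 2M / (N1 + N2).
def holsti(coder1, coder2):
    m = sum(a == b for a, b in zip(coder1, coder2))  # number of agreements
    return 2 * m / (len(coder1) + len(coder2))

# Hypothetical theme assignments by two coders for five definitions
coder1 = ["Goals", "Audience", "Media", "Goals", "Content"]
coder2 = ["Goals", "Audience", "Goals", "Goals", "Content"]

reliability = holsti(coder1, coder2)  # 4 agreements over 5 decisions each
```

When both coders code the same units, the index reduces to simple percent agreement, which is why the themes' reliabilities are reported as percentages.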
Participants were asked to rate the importance of the 28 items to their science communication goals on a 5-point Likert-type scale, ranging from 1 = "Not important at all" to 5 = "Extremely important." This question appeared in the survey after respondents were asked to define science communication in their own words. The 28 items form a reliable scale overall (Cronbach's α = .925), and the items divided according to the three models also form reliable scales. To verify whether the 28 items in our scale align with the three science communication models, as proposed by the literature review conducted by Metcalfe, we used factor analysis. The factorability of the data was supported (χ2 = 7762.743, p < .001; Kaiser–Meyer–Olkin measure of sampling adequacy, KMO = .913). Next, factor loadings were assessed for each of the 28 items, and three items with factor loadings below the acceptable threshold of 0.30 were removed. Model fit was evaluated using several standard measurements, including the relative chi-square (χ2/df). Given our sample size (N = 584), it is within expectation that the model did not pass the χ2 test (p < .05). The remaining 25 items form a reliable scale (Cronbach's α = .918) as well as three separate reliable scales aligned with each of the three science communication models: Deficit model Cronbach's α = .830, Dialogue model Cronbach's α = .817, and Participation model Cronbach's α = .855. Our results confirm the unidimensionality of each construct in our model and indicate that the measurement structure of three factors and 25 items produced good fit statistics.
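The scale reliabilities reported above (Cronbach's α) follow the usual formula α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ), where σ²ᵢ are the item variances and σ²ₜ the variance of the summed scores. A small sketch on hypothetical Likert ratings (rows = respondents, columns = items), not the study's data:

```python
# Sketch of Cronbach's alpha for a small hypothetical Likert-item matrix.
import numpy as np

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)        # per-item sample variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

ratings = [
    [5, 4, 5],
    [4, 4, 4],
    [2, 3, 2],
    [3, 3, 3],
]
alpha = cronbach_alpha(ratings)
```

Items that rise and fall together across respondents inflate the total-score variance relative to the item variances, pushing α toward 1.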
For the latter, we computed two aggregate variables for each of the three latent variables/models: 1) the unit-weighted scale means and 2) factor scores using factor score weights produced by the CFA, normalized to allow for comparison across models. For social media science communication goals, we used a nine-item scale, which proved reliable (Cronbach's α = .846). Although the items in this scale were specifically worded for the social media context, the question asked respondents to rate the importance of these items for their social media science communication activities, just as in the case of their general science communication goals. The nine items aligned to the deficit model (1-3), the dialogue model (4-6), and the participation model (7-9), and they form reliable scales. The factorability of the data was supported (χ2 = 963.682, p < .001; Kaiser–Meyer–Olkin measure of sampling adequacy, KMO = .872). Next, and unlike in the case of the general science communication goals, no items were removed from the three-factor model due to strong factor loadings, ranging from .60 to .76. Using the same model-fit evaluation measurements, we confirmed the unidimensionality of each construct in our model. Hence, a three-factor model with 9 items produced good fit statistics. The data also show a statistically significant departure from normality, as shown by Shapiro-Wilk tests. Before describing our findings, let us first describe the demographics and characteristics of our survey respondents. 54.6% of our respondents identified as female, 43.2% as male, and 0.9% as non-binary/third gender, while 1.4% preferred not to say. Most of the respondents in our data set held a PhD (58.6%) or a Master's (38.9%) degree, with 77.1% being employed under a full-time contract.
In the following subsections, we discuss our findings regarding our respondents' personal science communication definitions, their general science communication goals, their social media science communication goals, and how these goals align with the three science communication models discussed in the introduction of this paper, namely the deficit, dialogue, and participation models. The thematic content analysis conducted on the 584 definitions provided by our survey respondents resulted in 30 codes and 6 themes. These codes and themes capture the various science communication aspects, dimensions, and perspectives our respondents highlighted in their definitions. Overall, our respondents highlighted six main dimensions of science communication, as shown by the themes that emerged from their definitions. These themes center on the type of communication (N = 620), the audience addressed (N = 511), the content of the communication (N = 389), the goals to be achieved (N = 148), the medium through which the communication takes place (N = 57), and the impact of science communication (N = 30). We will discuss each of these themes and their most prominent codes individually. While most definitions referred to non-academic audiences, a good proportion also included academic audiences (23.29%), and very few made reference to interested (1.20%) or uninformed audiences (0.86%). In sum, when defining science communication, our respondents identify academic as well as non-academic stakeholders as main audiences for their science communication efforts. 111 (19.01%) definitions included both academic and non-academic audiences. As we will discuss later, these definitions may be indicative of a dichotomy, a separation between scientists and other members of the public, which has further implications. The third most common theme emerging from these personal definitions of science communication was the content of the communication efforts.
Here, most respondents emphasized research findings or results (42.81%) as the most common content dimension of their definitions. Fewer respondents made mention of involving their audience in the process of science (11.30%) or of communicating the relevance (7.02%) and methods (5.48%) of scientific research. Considering our earlier finding—that most personal definitions we investigated focused on a one-way, information-transmission model of science communication—it then comes as no surprise that the most predominant content mentioned was research results. This would suggest that our respondents look at science communication as a process of transferring a ready-made product of science (results) to their audiences, whether academic or not. Our analysis also uncovered a number of goals highlighted in the science communication definitions provided by our respondents. The most prominent of these goals were making science more accessible (5.48%), educating their audience (4.11%), raising awareness about science and scientific topics (3.94%), and enhancing interest in science (3.60%). The other six goals emerging in our analysis were far less prominent. The fifth theme uncovered by our analysis was the medium through which science communication takes place from the perspective of our respondents. Here, 5.82% of definitions mentioned traditional media as the primary channel for science communication, while 3.94% made mention of digital and social media as science communication platforms. This suggests that among those who include a means or channel of communication in their definitions, the majority made reference to more traditional forms of media, and fewer see digital and social media as a science communication venue. The last and least prominent theme that emerged from our analysis was that of impact.
The definitions formulated by our respondents included communicating about the general (2.57%), societal (1.88%), and practical (0.68%) impact of their research. To summarize, the qualitative thematic analysis of our respondents' personal science communication definitions aimed at creating a broader understanding of what science communication is, from the perspective of those who are closest to the scene. While not trying to discount the very few personal definitions that included aspects of dialogue-based and participatory science communication, our results suggest that the majority of these definitions are centered around a deficit way of thinking by emphasizing a one-way, transmission model of communication that aims to transfer and translate research outcomes and their impacts to a non-academic audience, with the purpose of making science more accessible, of educating audiences, and/or of raising awareness about science. Having investigated how our participants define science communication, in the next sections of our paper we quantitatively analyze our respondents' general science communication goals, as well as the goals they find most important when specifically asked about social media science communication. Furthermore, we discuss how these goals align with the deficit, dialogue, and participation models of science communication. We start the description of our quantitative analysis results with the ways in which our respondents rated the importance of the 25 general science communication goals and the science communication models they represent. Our results indicate that, on average, our respondents rated deficit model goals (M = 3.81) and dialogue model goals (M = 3.66) as more important than participation model goals (M = 3.38).
A Wilcoxon signed-rank test of the factor scores further confirms that our participants rated Deficit model goals higher than Dialogue model goals and that this difference is statistically significant: T = 133157; z = −12.530; p < .001. We also find that, on average, our participants rated Deficit model goals and Dialogue model goals significantly higher than Participation model goals. The same significance levels are also found between pairs of models when testing the scale means rather than the factor scores (p < .001 for each pair). Thus, we can infer that in terms of science communication goals, our respondents focused more on a Deficit and Dialogue perspective, rather than a participatory approach. The means of each model—as already hinted at by our qualitative results and by looking at the highest-rated goals—indicate that our respondents did indeed rate deficit model goals highest. Of our 584 respondents, 314 reported that they use social media for science communication. These 314 participants reported that they used LinkedIn (38.9%), Twitter (33.4%), Facebook (15.8%), and YouTube (0.9%) for science communication in the past 12 months, along with other, less frequently used platforms. All of these most commonly used platforms among our respondents offer affordances that can be used to enter a dialogue with or involve already-engaged audiences in participatory science communication activities. Thus, in the next paragraphs, we assess what type of goals those participants who use social media for science communication rated as most important. For social media science communication goals, Deficit model goals were rated highest (M = 4.04), followed by Dialogue model goals (M = 3.90), while Participation model goals were rated lowest (M = 3.47). To further confirm the significance of this finding, we performed Wilcoxon signed-rank tests on the factor scores of our model variables.
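The pairwise model comparisons above use Wilcoxon signed-rank tests on paired per-respondent scores. The sketch below uses simulated data (the distributions are hypothetical, chosen only to mimic a small systematic difference between two scales), not the survey responses:

```python
# Sketch: paired Wilcoxon signed-rank test on simulated per-respondent scores.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
scale_a = rng.normal(3.8, 0.5, 200)              # e.g. one goal scale
scale_b = scale_a - rng.normal(0.15, 0.3, 200)   # a systematically lower scale

stat, p = wilcoxon(scale_a, scale_b)  # tests the median of the paired differences
```

A small p here supports the claim that one scale is rated systematically higher across respondents, which is the logic behind the T, z, and p values quoted in the text; the signed-rank test is a natural choice given the reported departure from normality.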
The results confirm that, on average, our participants rated Deficit model goals significantly higher than Dialogue model goals, T = 31957.00, z = \u22128.025, p < .001, and significantly higher than Participation model goals, T = 36779, z = \u221211.452, p < .001. The tests also show that, on average, Dialogue model goals were rated significantly higher than Participation model goals, T = 38018.50, z = \u221212.332, p < .001. Significance levels are similar when using means instead of factor scores. Thus, when specifically asked about their science communication goals in the context of social media, and even though social media promises to be an ideal environment for dialogue and participation, our respondents\u2019 focus remained mostly on the deficit-oriented perspective. When looking at how our respondents rated their agreement to the 9 statements related to social media science communication goals, we also compared how respondents who reported using social media for science communication (N = 314) rated their general science communication goals with those who reported not using social media for science communication (N = 270); the overall comparison of factor score means was not statistically significant (p = .124). Previous studies discuss a shift in the culture surrounding more participatory approaches to science communication as being driven by early-career scientists. However, the demographics collected in our survey did not include information about the affiliations of our respondents. Although this particular choice in the survey design offered respondents more anonymity, our results cannot speak to differences between scientists affiliated with different universities in the Netherlands, nor can they make distinctions between scientists from different disciplines. Previous studies have found that scientists\u2019 academic fields play a role in their approaches to science communication. 
For example, Burchell found that the use of participatory approaches to science communication was considerably higher in fields like the arts, humanities, and social sciences compared to the STEM fields. Our study builds further evidence that a deficit model approach to science communication is still very much present. Understanding that scientists are focused mostly on dissemination and translation of research results, with a willingness to make themselves and science more accessible, has important consequences for scientific organizations, the shaping of their long-term science communication strategies, the development of training programs for established scientists, and the inclusion of science communication curricula in graduate education programs, as Madden, Cacciatore, and Yeo also observe. Lastly, the abundance of science communication models proposed by many scholars over the years could not all be addressed in our work. For example, one established model we did not address is the contextual model. To conclude, science communication and public engagement can no longer be focused on persuading the public on the \u2018facts\u2019 of science, as we live in an era where most science debates are no longer small, localized events, controlled by scientists or science communicators, as Nisbet and Scheufele also propose."} +{"text": "Geoscience and geodiversity, two sides of the same coin, suffer from very poor social visibility and recognition. Ensuring the protection of geodiversity is not only in the geoscientists\u2019 hands; all of society needs to be involved. Therefore, public engagement with geodiversity demands new solutions and a change of paradigm in geoscience communication. Most of the science communication activities undertaken by geoscientists, even when they use modern approaches and technologies, are designed mainly on the basis of empirical experience, rely on didactical approaches, and assume the public\u2019s knowledge deficit. 
In order to engage society with geodiversity, it is not enough to focus on scientific literacy and deficit models in which lack of knowledge is considered to be the main obstacle between science and society. It is fundamental to establish a commitment between society and science based on dialogue, where the lay public is no longer seen as a single entity with a knowledge deficit. Non-experts must also become protagonists in scientific decisions with social impact and integrate their knowledge and concerns into public participation and decision-making. Engagement with geoscience and geodiversity would benefit from more effective and targeted communication strategies, with different approaches to engage with communities, local stakeholders, media, students and teachers, the scientific community, tourists, politicians or policy-makers, and groups with different concerns and distinct relations with science. In the last 20\u00a0years, science communication research has made many relevant contributions to promoting more participatory processes with which society is asked to engage. Geoscience communication as a discipline is a very recent Earth science branch that also incorporates social science, behavioral science, and science communication, but it still lacks a clear and formal definition. This study provides a comprehensive review of the literature in order to develop a conceptual framework for geoscience communication research, identifying the main challenges and opportunities. Despite the progressive recognition of the importance of science in solving the main sustainability challenges of the planet Earth, geosciences are most of the time neglected (Brilha). The impact of human activities and the vulnerability of humankind to natural hazards are increasing as the world population keeps growing. One worldwide initiative was promoted jointly by the International Union of Geological Sciences (IUGS) and UNESCO. 
This worldwide endeavour had as its ultimate goal to raise public and political awareness of the potential of Earth science for improving the quality of life and safeguarding the planet. The aims of this article are (i) to provide an overview of the conceptual framework and current state-of-the-art, (ii) to systematize the concepts and models, discussing opportunities for the geosciences, (iii) to identify communication agents and practices, and (iv) to reflect on geoscience communication challenges, main debates, and gaps in the research agenda. Beginning with an outline of science communication models and current understandings of the needed relationship with the public, the article then builds on the emergence of the geoscience communication field and demonstrates its role regarding geodiversity, nature conservation, environmental protection, and planetary sustainability. Examples of current geoconservation communication practices and of approaches engaging with geodiversity are presented. The article continues with the identification of existing challenges to science communication and finally discusses some pitfalls in the field and suggests guidelines for action. Communication between the scientific community and the public dates back to the seventeenth century. A debate about naming the Devonian geological period, in the 1830s and 1840s, occurred in the pages of the popular magazine \u201cThe Athenaeum,\u201d including scientific arguments with detailed stratigraphic sections (Rudwick). 
Even so, the term \u201cscience communication\u201d is often used as a synonym for different related concepts, such as \u201cscientific literacy,\u201d \u201cpopularization,\u201d \u201cinformal education,\u201d \u201cpublic understanding of science,\u201d or \u201cpublic awareness of science.\u201d They are not synonymous, and although they have compatible goals and fine boundaries, each one reflects specific contexts and approaches. Geoscience communication is more than the effective transfer of information from scientist to audience; it also requires knowing \u201chow to listen.\u201d In the same year, the conference \u201cCommunicating Geoscience: Building Public Interest and Promoting Inclusive Dialogue,\u201d held at the Geological Society of London, discussed some of the main concerns on the topic, illustrating the current challenges of geoscience communication. Much earlier, Darwin brought discussions to the public about the geological record and fossil evidence as testimonies of his theory of evolution. The book was written for a non-specialist public, and it is one of the most influential books throughout the history of science. In the eighteenth and nineteenth centuries, literature and art in general used geological landscapes not only as a source of inspiration, but also as a tool to provoke a sense of wonder and engage wider audiences. The Earth Science Literacy (ELS) initiative produced the Earth Science Literacy Principles (ESLPs), a document that outlines what citizens should know about Earth science. Under the scope of nature conservation, geodiversity has always occupied a secondary place when compared to biodiversity, and the promotion of geodiversity\u2019s benefits in delivering ecosystem services has also been neglected (Gray). Despite this very low public awareness, society benefits directly and indirectly from geodiversity. 
However, to reach effective sustainable development, geodiversity values need to be broadly understood and protected; communication with lay audiences is indeed a priority. Yet most of these communication activities, as we have demonstrated in the literature review, even when using modern approaches and technologies, are designed based on empirical experience and assume a public\u2019s knowledge deficit. On the one hand, geoscientists see the non-expert publics as students with insufficient knowledge; on the other hand, geoscience communication strategies are often developed as educational activities, using common strategies and tools, as shown, for example, by the approaches of Mariotto and Venturini, and Mansur. The communication paradigm for geoscience needs to be rethought and changed, moving away from a singular focus on public knowledge deficits as the cause of, and their correction as the solution for, conflicts over science in society. It needs new ways of conversation and engagement that recognize, respect, and incorporate differences in knowledge, values, perspectives, and goals, as has been foreseen by science communication scholars such as Nisbet and Scheufele. In re-purposing geoscience communication, Stewart and Hurth propose new directions. Geoscientists, stakeholders, social scientists, media, and society must establish the boundaries of their roles for an ethical communication (Solarino). In some specific contexts, such as geohazards, engagement strategies are being gradually adopted. Under the umbrella of the United Nations \u201cInternational Strategy for Disaster Reduction,\u201d several national and international organizations and initiatives have been working for risk reduction, focused on a global approach, engaging individuals and communities in how to manage hazard impacts (Eder et al.). Geoconservation practitioners, researchers, and the whole geoscience community need to reflect on and understand why geodiversity has so little public visibility and attention. 
Engagement with geodiversity and geoconservation forces us to define targets and address specific strategies to specific audiences, understanding that the \u201clay public\u201d is not a single entity. As established in the literature on science communication, there are several publics, not only with different levels of knowledge, but mainly different groups with their own needs, interests, and cultural, social, and economic contexts (Costa et al.). The establishment of geoscience communication as a research discipline can benefit from following the methodologies and research agendas used by science communication scholars. With some exceptions, the geoscientific community tends to develop communication activities without logistic, financial, and technical support, based on empirical experience. In order to overcome this widespread limitation, cooperation with social scientists could help in the training of communication skills, in planning strategies for specific goals and audiences, and in dealing with the media and with the challenging demands of a participatory model of science. Geoscience communication as a discipline needs better formalization, regarding both its interdisciplinary approach and its specific challenges. In addition, the diversity and ambiguity of the terms used in the literature to refer to science communication also reveal the immaturity of the topic. Communication strategies can put geoscience topics on the media and public agenda. To achieve this goal, communication practices must be targeted and must shift to an effective engagement paradigm, in which society recognizes the importance of geoscience, geodiversity, and ecosystem services for planet Earth\u2019s sustainability and participates in their protection and management. Geoscience students and professionals would benefit from specific training on communication skills as well as on public engagement. The comprehensive review of the literature contributed to identifying the main challenges of geoscience communication. 
Most of the research and literature focuses on specific activities and experiences, showing that geoscientists\u2019 practices are far from science communication research and trends. Thus, to better understand the obstacles to geoscience communication, it is necessary to analyze the agents of geoscience communication. A close insight into geoscientists\u2019 experiences and practices, as well as their motivations and perceptions, could better inform training policies, communication strategies, and impact expectations. This further research may give us clues on how to improve geoscience communication, making it more frequent and more effective."} +{"text": "Uterine scar dehiscence or rupture is a rare but potentially life-threatening complication of caesarean delivery. It is the opening of the uterine incision line and can lead to postpartum haemorrhage, pelvic hematoma, pelvic abscess, endomyometritis, generalised or localised peritonitis, and sepsis. Here we report a case of a 25-year-old female who presented with puerperal pyrexia. Investigations revealed uterine scar rupture with a uterovesical collection. The case was managed conservatively with intravenous antibiotics, an intracervical Foley\u2019s catheter, and a gonadotropin-releasing hormone agonist. Management options include a conservative approach, laparotomy with reapproximation of the scar margins, and hysterectomy. We hereby report a case of postcesarean uterine scar rupture which was managed conservatively with cervical catheter placement and intravenous antibiotics. The rate of caesarean sections is increasing worldwide. Due to this, infrequent complications of Lower Segment Caesarean Section (LSCS) have been encountered. Uterine scar dehiscence or rupture following caesarean section is one of these rare complications. 
Its reported frequency is between 0.06% and 3.8%. A 25-year-old woman with obstetric index P1L1 was referred to our hospital from the periphery of Eastern Nepal for the care of puerperal pyrexia. The patient had undergone emergency LSCS 10 days prior to admission at a hospital in Okhaldhunga and was admitted via the emergency department with the chief complaint of fever for 9 days. LSCS was done for non-progression of labour following 20 hours of labour. She had rupture of membranes hours prior to presentation at the hospital, and induction of labour was done after hours. Her temperature was recorded to be 102\u00b0F on her first Postoperative Day (POD). She was diagnosed with a urinary tract infection and was started on Ceftriaxone. The fever did not subside in a week and she was referred to a higher centre for diagnosis and further management. At the time of presentation, her general condition was good. She had a temperature of 100.9\u00b0F. Uterine involution was normal and the skin incision site was intact. Anterior abdominal wall oedema was present. The patient's investigations showed a raised white blood cell count, predominantly neutrophils, and routine microscopic examination of urine showed plenty of pus cells. Her biochemical parameters were within normal limits. Ultrasonography (USG) showed a defect in the lower anterior myometrial wall measuring 19.4 mm with heterogeneous collections measuring 28.5 ml within the uterine cavity, communicating anteriorly and forming a localised collection in the uterovesical pouch. A moderate intraperitoneal collection was noted. Magnetic Resonance Imaging (MRI) was done the next day, which showed a defect of 7 mm within the anterior uterine wall in direct continuity with a heterogeneous collection measuring 34 \u00d7 46 mm anterior to the uterus within the pelvic cavity. 
It also showed a moderate amount of oedema within the subcutaneous tissues of the abdominal wall. An intracervical Foley catheter was placed 3 days following admission to drain the intrauterine collection; around 5 ml of the collection was drained and sent for culture and sensitivity. Acinetobacter baumanii was isolated in the wound swab culture, sensitive to meropenem, and antibiotics were changed accordingly. Other cultures did not show growth of any organisms. The fever subsided, and a repeat USG done 9 days later showed a defect of 5 mm in the lower anterior myometrial wall and a minimal heterogeneous collection of 3.24 ml anterior to the uterus. She was started on a Gonadotropin-Releasing Hormone (GnRH) analogue (Injection Leuprolide) 3.75 mg intramuscular monthly for 3 months. USG done 17 days following admission revealed only inflammatory changes anterior to the uterus; no defects or collections were visualised. The patient was followed after 3 months and USG revealed closure of the defect. The patient was afebrile, and the intracervical Foley catheter was kept for 10 days and then removed; the total collection drained was 20 ml. The skin incision was resutured. A study reported a 25-year-old woman with uterine scar dehiscence 1 week following emergency caesarean section who presented with LSCS wound infection and was managed with exploratory laparotomy and reapproximation of the uterine defect with interrupted sutures. Postpartum uterine scar dehiscence or rupture is a rare but potentially life-threatening condition characterised by the opening of the uterine incision. The important risk factors include diabetes, emergency surgery, surgical technique, infection, haematoma on the uterine incision line, uterovesical hematoma, previous caesarean section, classical caesarean section, abnormal placentation, and inappropriate oxytocin administration. A study reported three cases of post-cesarean uterine scar dehiscence managed conservatively. 
All three cases presented 1-2 weeks following caesarean section with complaints of abdominal pain and purulent vaginal discharge. All were managed with intravenous antibiotics and were discharged within 2-4 weeks. A case of a 35-year-old female who presented on the fifth POD following LSCS with pain and abdominal distention has also been published; a Computed Tomography (CT) scan showed dehiscence of the lower uterine caesarean section incision, and the case was managed with emergency laparotomy with hysterectomy. A case of a 27-year-old with two previous LSCS, who had secondary Postpartum Haemorrhage (PPH) from uterine scar dehiscence following an elective caesarean section 10 weeks earlier and was managed with emergency laparotomy with a total abdominal hysterectomy and blood transfusion, has also been reported. Any imaging modality, such as ultrasonography, magnetic resonance imaging, or computed tomography, can be used for the diagnosis of post-cesarean scar rupture. A conservative approach with intravenous antibiotics and drainage of the pelvic collection can be considered in stable patients with no evidence of active haemorrhage or severe infection. Exploratory laparotomy must be considered the most important tool for diagnosis and treatment of uterine scar dehiscence or rupture. A conservative approach with reapproximation of healthy margins can be considered; however, in case of marked wound infection, endomyometritis, and/or intraabdominal abscess, a hysterectomy must be considered. This case report highlights a rare but important complication of caesarean section. As this condition is rarely encountered, there are no published guidelines regarding its management. Intracervical catheter placement and postponement of menstruation by a GnRH analogue were distinctive techniques employed in this case. 
A high degree of suspicion with appropriate investigations will help in identifying postpartum uterine wound dehiscence or rupture, and timely management will prevent maternal mortality and morbidity."} +{"text": "The COVID-19 pandemic has seen a considerable expansion in the way work settings are structured, with a continuum emerging between working fully in-person and fully from home. The pandemic has also exacerbated many risk factors for poor mental health in the workplace, especially in public-facing jobs. Therefore, we sought to test the potential relationship between work setting and self-rated mental health. To do so, we modeled the association of work setting with self-rated mental health (Excellent/Very Good/Good vs. Fair/Poor) in an online survey of Canadian workers during the third wave of COVID-19. The mediating effects of vaccination, masking, and distancing were explored due to the potential effect of COVID-19-related stress on mental health among those working in-person. Among 1576 workers, most reported hybrid work (77.2%). Most also reported good self-rated mental health (80.7%). Exclusive work from home and exclusive in-person work were associated with poorer self-rated mental health than hybrid work. Vaccine status mediated only a small proportion of this relationship (7%), while masking and physical distancing were not mediators. We conclude that hybrid work arrangements were associated with positive self-rated mental health. Compliance with vaccination, masking, and distancing recommendations did not meaningfully mediate this relationship. The COVID-19 pandemic has exacerbated many risk factors for poor mental health in the workplace. As this pandemic has intensified, with rising cases and deaths globally, so too have feelings of worry and fear in response to ongoing COVID-19 community transmission. 
Alongside healthcare workers, many low-wage service workers have been deemed essential workers in Canada, and like other front-facing workers at the start of the pandemic, these workers have not always had access to safe working environments. While mental health risks are well known among public-facing workers, it is less clear what the mental health impacts are on workers who have been able to transition to working from home. Workers at home may experience a more complex impact of their work settings on their mental health, despite having a generally lower-risk situation. The literature exploring differences in mental health outcomes between workers in public-facing occupations and those working from home in Canada has been sparse. Furthermore, the first doses of the vaccine rollout for the general population in Canada were underway during the third wave of the pandemic in 2021, bringing about another layer of nuance to consider when assessing mental health. Presently, the end of the sixth wave of the COVID-19 pandemic has seen jurisdictions move further away from public health orders, following roll-outs of third doses for the majority of working-age adults in response to the Omicron variant. This study used survey data collected during the third wave of the COVID-19 pandemic in Canada to examine the relationship between work setting and self-rated mental health. The study utilized the Canadian Social Connection Survey (CSCS) dataset, which collected data from 21 April to 1 June 2021. The survey was circulated on the internet using paid advertising on Facebook, Twitter, Instagram, and Google. Participants were eligible if they were Canadian residents and 16 years of age or older. Ethics approval was granted by the University of Victoria Research Ethics Board (Ethics Protocol Number 21-0115), and all participants provided informed consent. A total of 2286 eligible participants completed the survey. Of these, 1917 were working during the COVID-19 pandemic. 
We excluded participants with missing observations on the primary outcome and primary exposure variable; thus, the analytic sample size for this analysis was 1576. Respondents\u2019 self-rated mental health was the primary outcome variable for the study. This variable has previously shown a positive correlation with other mental health morbidity measures. Work setting (listed as work_from_home in the dataset) was the primary explanatory variable for the study. The variable measured how often participants worked from home. The levels \u201cVery little of the time\u201d, \u201cSome of the time\u201d, and \u201cMost of the time\u201d were collapsed into a single level, \u201cHybrid\u201d. \u201cNot at all\u201d was recoded as \u201cDo Not Work from Home\u201d and \u201cAll of the time\u201d was recoded as \u201cWork from Home Only\u201d. These levels allowed a continuum of working from home to be represented. Participants who reported not working during COVID-19 were removed from analyses, as our goal was to explore the effects among Canadian workers who were currently employed. Other explanatory variables related to employment, adherence to COVID-19 mitigation measures, income, and identity were controlled for in multivariable analysis. This allowed us to isolate the effects of demographic and socio-economic factors which may otherwise play an important role in self-rated mental health while also being correlated with work setting. The included variables were household income, age, gender, ethnicity, educational attainment, hours worked per week, and national occupation class. In addition to these conventional confounding variables, several additional variables were selected based on their potential to mediate the relationship between self-reported mental health and work setting. COVID-19 vaccine status and adherence to mask and/or physical distancing recommendations were identified as particularly important factors with mediation potential. 
These concepts were measured by asking to what extent participants wore masks in public, to what extent participants practiced physical distancing in public, and whether participants were vaccinated. All statistical analyses were performed using R Statistical Software version 4.1.1 with the DescTools package. Model fit and collinearity were assessed via pseudo-R2 and variance inflation factors, with variables exhibiting collinearity removed to arrive at a final multivariable model. Bivariable logistic regression models were constructed from the newly developed study sample between all explanatory variables and the outcome variable, followed by an initial multivariable binary logistic regression model. Mediation analysis proceeded firstly via Baron and Kenny's (1986) steps for determining mediation via logistic regression models and secondly by utilizing the mediate package in R with bootstrapping enabled. 2286 respondents were initially included. However, 370 indicated they were not currently employed, and of the remaining 1916 employed respondents, 340 were missing data on our primary measures. This resulted in 1576 participants eligible for analysis. Descriptive statistics, stratified by self-rated mental health, are presented. Bivariable associations were investigated between all explanatory variables and self-rated mental health. In the multivariable model, after controlling for potential confounders, negative self-rated mental health retained the association with not working from home and working from home exclusively versus hybrid work. Furthermore, negative self-rated mental health was significantly associated with increasing hours worked per week, being 40 years or older (vs. 18 to 29 years old), identifying as non-binary (vs. man), and Middle Eastern or Other ethnicity (vs. 
White). Conversely, positive self-rated mental health was associated with employment in business, health, management, natural and applied sciences, or trades, transport and equipment operations; and with having two doses of a COVID-19 vaccine (vs. not having received any). In the mediation analyses, vaccination status was a significant mediator (p = 0.02), mediating approximately 7% of the relationship between work setting and self-rated mental health; mask wearing (p = 0.76) and physical distancing (p = 0.20) were not found to significantly mediate the relationship. In the mediation analyses for vaccination status, the first part of the pathway between work setting and self-rated mental health, when adjusting for having received a COVID-19 vaccine, shows that not working from home is significantly associated with negative self-rated mental health. The next part of the pathway, between work setting and having received a COVID-19 vaccine, indicates that people not working from home had lower odds of having at least one dose of a COVID-19 vaccine. The last part of the pathway shows a significant association between having received a COVID-19 vaccine and positive self-rated mental health. This study represents a preliminary assessment of the relationship between work setting and self-rated mental health, controlling for relevant demographic factors, and providing several preliminary insights into the ways in which COVID-19 stressors and protections shape these relationships. In doing so, our findings show that mental health is adversely impacted for those working either exclusively from home or exclusively in person. This is in agreement with existing literature showing poor mental health among workers in public-facing workspaces across numerous international contexts. As such, these findings help to further research into the mental health outcomes of the Canadian workforce during the later phases of the ongoing COVID-19 pandemic and beyond. 
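The study's mediation analysis used logistic models and R's mediate package. As a language-agnostic illustration of the underlying Baron and Kenny logic only, the following Python sketch uses simple linear regressions on synthetic data (all numbers invented, not the survey data) to show how a "proportion mediated", such as the roughly 7% reported for vaccination, is computed as (total effect − direct effect) / total effect:

```python
import random
from math import fsum

def ols1(x, y):
    """Slope of y ~ x (with intercept)."""
    n = len(x)
    mx, my = fsum(x) / n, fsum(y) / n
    sxx = fsum((a - mx) ** 2 for a in x)
    sxy = fsum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / sxx

def ols2(x1, x2, y):
    """Slopes (b1, b2) of y ~ x1 + x2 (with intercept), via normal equations."""
    n = len(y)
    m1, m2, my = fsum(x1) / n, fsum(x2) / n, fsum(y) / n
    s11 = fsum((a - m1) ** 2 for a in x1)
    s22 = fsum((b - m2) ** 2 for b in x2)
    s12 = fsum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = fsum((a - m1) * (c - my) for a, c in zip(x1, y))
    s2y = fsum((b - m2) * (c - my) for b, c in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

random.seed(1)
n = 2000
x = [random.random() for _ in range(n)]            # exposure (e.g. work setting)
m = [0.5 * xi + random.gauss(0, 0.1) for xi in x]  # mediator (e.g. vaccination)
y = [0.3 * xi + 0.4 * mi + random.gauss(0, 0.1)    # outcome (e.g. mental health)
     for xi, mi in zip(x, m)]

c_total = ols1(x, y)               # step 1: total effect of x on y
a_path = ols1(x, m)                # step 2: x -> mediator path
c_direct, b_path = ols2(x, m, y)   # step 3: direct effect of x, controlling for m
prop_mediated = (c_total - c_direct) / c_total
print(round(prop_mediated, 2))     # near the simulated value of 0.4
```

This is a linear sketch, not the logistic-regression mediation with bootstrapped confidence intervals that the study actually ran; it only makes concrete how a direct effect shrinking relative to a total effect yields the "proportion mediated" interpretation.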
One Canadian study exploring the relationship between working from home and self-rated mental health during the first wave of the pandemic found that workers who transitioned to working from home did not differ in mental health when compared to those who remained working in-person. Conversely, another Canadian study from the first wave of the pandemic found a lower prevalence of depression and anxiety among respondents working from home or those working in person whose employers met all of their infection control needs. The mediation analysis found that, of the three variables tested, COVID-19 vaccination status was the only significant mediator of the effect of work setting on self-rated mental health. However, this variable mediated only approximately 7% of the effect of work setting on self-rated mental health. Both the lack of significance and the low impact of the mediation among the variables tested suggest that the prominent source of psychological stress may not arise from fear of COVID-19 infection. Although it is likely that these prevention measures may do less to mediate mental health among workers who are not continually facing risk of viral exposure, it is less clear why this would also be the case for public-facing workers. One possibility could be that, by the later phases of the COVID-19 pandemic, workplaces already tended to have high levels of COVID-19 control measures in place. This study also highlighted poor mental health among several groups. Though we did not specifically explore groups that are more likely to work from home, concerns have been raised about the well-being of ethnic minority groups who disproportionately work in public-facing occupations. The identity groups associated with negative self-rated mental health\u2014non-binary individuals and people over 40 years\u2014are harder to contextualize in terms of work setting. 
For non-binary individuals, it is unclear whether they are more likely to work from home; however, it does appear that pre-pandemic stressors have been compounded by COVID-19 for members of sexual and gender minorities . As for Despite COVID-19 prevention measures not emerging as a primary influencer of self-rated mental health, Canadian provinces such as British Columbia have routinely made it a priority to vaccinate frontline workers, a category of workers who typically cannot work from home . MoreoveThis exploratory study has limitations but provides rationale for more rigorous investigations of the potential benefits of hybrid work. Limitations include our use of secondary data that likely does not fully capture the nuanced associations between work setting and self-rated mental health. These relationships are further simplified by our analytic choices to collapse work setting to three levels and self-rated mental health to two levels. Future studies should explore more comprehensive measures of mental health, including using specific measures of anxiety and depression. Such analyses might be feasible in large surveys, such as ours, through the use of short scales developed for large surveys, such as the PHQ-2 and GAD-2. It is possible that these more specific measures would allow for greater granularity in understanding how working conditions during an ongoing public health crisis are related to mental health and well-being\u2014particularly in terms of the mediating effects of COVID-19 prevention on anxiety and stress (vs. depression). Qualitative research could also be used to better understand specific pathways of poor mental health for those working exclusively from home or in-person. Given limitations in measurement, the results of the current study must be interpreted with caution when considering specific psychological disorders. 
As well, the dataset over-represented (77.2%) individuals who work in hybrid arrangements, compared to the other two groups (exclusively working from home and exclusively working in-person). Caution should therefore be taken in interpretation, as this drastically departs from the range of Canadian workers working the majority of their hours from home\u201440.5% in April 2020 to 26.5% in June 2021 . Lastly,Recognizing these limitations, as well as several opportunities to establish new lines of inquiry, we recommend that future research on the COVID-19 pandemic and future communicable disease epidemics should aim to sample a more representative group of people working from home; determine interactions between ethnic, sexual and gender minorities, and older populations; and incorporate measures of self-assessed psychological distress around workplace safety. Furthermore, as noted above, the present study did not account for important and salient factors such as living conditions, household composition, sources of material, social, and emotional support, non-work-related labor, and other undoubtedly important factors. Future research will explore these factors in relation to working arrangements. Such analyses are critical for understanding the gendered dynamics of work from home. We hypothesize that this would be a critical moderator for exploration in future research. As well, family composition and income are critical moderators for understanding how people can best be supported in distance work environments. Therefore, future research should conduct narrower analyses or improve measurements of these key factors so that a more nuanced profile of working conditions can be assessed in relation to our research questions. 
Finally, it is critical for longitudinal within-person studies to continue examining the effect of work from home on individual health and wellbeing. Given the few studies that are available assessing the effect of work setting on mental health, this study provides important data demonstrating potential hazards to mental health associated with exclusively in-person or home-based work. Hybrid models of work may therefore provide promising opportunities to improve the mental health of workers. Of course, replication will further advance our understanding of telecommuting and in-person work, particularly in the context of an ongoing public health crisis that has disproportionately impacted low-wage and marginalized people."}
{"text": "N = 423; Mage = 19.96 years, SD = 3.09). The results showed that family incivility was significantly related to cyberbullying perpetration through the mediation of negative emotions, suggesting a strong link of stressful life events to online aggression. In addition, high levels of neuroticism moderated the relationship between family incivility and cyberbullying perpetration, as well as that between family incivility and negative emotions. The study revealed the chronic and potential impact of family incivility, underlined the interaction between stressful life events and online aggression, and put forward intervention strategies for cyberbullying among university students. Previous research has extended the stress literature by exploring the relationship between family incivility and cyberbullying perpetration, yet relatively less attention has been paid to the underlying psychological mechanisms of that relationship among university students. According to the Frustration-Aggression Theory, this study examined the relationships of family incivility, cyberbullying perpetration, negative emotions and neuroticism among Chinese university students. 
Data were collected from 814 university students (females, Family interaction plays an important role in the development of individual social emotion and cognition . RecentlPrevious studies examined the associations between family incivility and adults\u2019 work performance . Less isRecently, the boom of the Internet and social media has intensified cyberbullying in China . Early iThis study has three-fold contributions. Firstly, although previous studies have been devoted to the influence of family incivility on adolescents\u2019 cyberbullying perpetration , it is uThe frustration-aggression theory states that frustration can affect the inclination to act aggressively . IndividH1: Family incivility will be positively correlated with cyberbullying perpetration among university students. Negative emotions are fundamentally a subjective experience of unpleasant or depressed mood in the past week, including various annoying emotional states, e.g., depression, anxiety, and fear , which mIn addition, H2: Negative emotion will mediate the direct relationship between family incivility and cyberbullying perpetration among university students. Specifically, family incivility increases university students\u2019 negative emotions, leading to cyberbullying perpetration. Personality is a relatively stable individual trait, which has a long-term impact on individual behavioral style . The FivPrevious studies have considered personality as a moderator that influences the association between stressful life events and negative emotions. A cross-sectional study found highly neurotic individuals were more vulnerable to depression in the face of stressful life events . 
A case H3: Neuroticism will moderate the relationship between family incivility and negative emotions, so that the positive correlation between family incivility and negative emotions is stronger for highly neurotic university students, and vice versa. H4: Neuroticism will moderate the relationship between family incivility and cyberbullying perpetration, so that the positive correlation between family incivility and cyberbullying perpetration is stronger for highly neurotic university students, and vice versa. Taken together, the whole research model is presented in A total of 814 participants of this study were recruited from a university in Zhejiang Province, China. They were aged 17\u201326 years [mean (M) \u00b1 standard deviation (SD) = 19.96 \u00b1 3.090], with 391 (48%) being males and 423 (52%) being females. Among the participants, 403 (49.5%) were freshmen, 198 (24.3%) were sophomores, 92 (11.3%) were juniors and 121 (14.9%) were seniors. Average monthly household income ranged from less than 2000 yuan to more than 10000 yuan . Average daily smartphone usage time ranged from 1 h to more than 9 h . The family incivility scale originated from the Workplace Incivility scale modifiedThe Chinese version of Depression Anxiety and Stress Scale 21 (DASS-21) was usedThe Chinese version of Cyberbullying Scale (CVCS) was usedThe Chinese Big Five Personality Inventory Brief Version (CBF-PI-B) was usedThe variables of participants\u2019 age, gender, average monthly household income, and average daily smartphone usage time were controlled for, as former studies showed that they might affect negative emotions and cyberbullying significantly . Firstly, we calculated descriptive statistics and the correlation matrix. 
To facilitate result interpretation and avoid the multicollinearity problem , all theAs this study aimed at exploring whether negative emotions would mediate the association between family incivility and cyberbullying perpetration and whether this mediation effect would be moderated by neuroticism, the analysis included the following three steps. The means, standard deviations, and correlation coefficients for all variables of the current study are displayed in p < 0.001), and negative emotions were positively associated with cyberbullying perpetration . Finally, it was found that family incivility had an indirect effect on cyberbullying perpetration (\u03b2 = 0.18). Bootstrapping results confirmed the significance of the indirect effect, with a 95% confidence interval of . Therefore, H2 was supported. p < 0.001) and cyberbullying perpetration . The results for H3 and H4 are reported in high Neuroticism = 7.44, t = 9.84, p < 0.001). For participants with low levels of neuroticism, the moderating association between family incivility and negative emotions was not significant . Additionally, as shown in high Neuroticism = 7.44, t = 0.04, p < 0.001), whereas the moderating association between family incivility and cyberbullying perpetration was not significant for participants with low levels of neuroticism . Next, we plotted simple slopes predicting the relationship between family incivility and negative emotions as well as that between family incivility and cyberbullying perpetration, separately for high and low levels of neuroticism. As presented in The current study aims to investigate how family incivility affects university students\u2019 cyberbullying perpetration, as well as the role of negative emotions and neuroticism in the above relationship, which enriches previous research on family incivility and extends the frustration-aggression theory. 
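The analysis described above (standardized variables, an indirect effect of family incivility on cyberbullying perpetration through negative emotions, and a bootstrapped confidence interval for that indirect effect) can be sketched as follows. This is an illustrative reimplementation on synthetic data, not the study's actual analysis; the variable names, sample, and effect sizes are hypothetical.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation: a-path (x -> m) times
    b-path (m -> y, controlling for x)."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(0)
n = 800  # roughly comparable to the study's 814 participants

# Synthetic z-scored data with a built-in indirect path (coefficients invented)
x = rng.standard_normal(n)                       # "family incivility"
m = 0.4 * x + rng.standard_normal(n)             # "negative emotions"
y = 0.3 * m + 0.1 * x + rng.standard_normal(n)   # "cyberbullying perpetration"

# Percentile bootstrap of the indirect effect, mirroring the bootstrapping step
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A bootstrap confidence interval that excludes zero supports the mediation hypothesis (H2); dividing the indirect effect by the total effect gives the proportion mediated.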
Specifically, the results indicated that family incivility was positively related to cyberbullying perpetration among university students through negative emotions. The effects of family incivility on negative emotions and cyberbullying perpetration were stronger for university students with high levels of neuroticism. However, low levels of neuroticism hardly moderated the relationship between family incivility and negative emotions, as well as that between family incivility and cyberbullying perpetration, which is consistent with previous studies demonstrating that low levels of neuroticism hardly affected individual mental health problems and problematic behaviors . ContrarOur study contributes to the current literature in four respects. Firstly, although abundant research has been conducted in the field of family incivility, most of it focused on the effect of family incivility in the family work context . Little Secondly, we examined the mediating role of negative emotions and found that negative emotions played a mediating role between family incivility and cyberbullying perpetration, supporting Hypothesis 2. Previous studies have demonstrated that stress, anxiety, and depression were the most frequent mental illnesses among university students , which wThirdly, the results partially support the moderating role of neuroticism. Neuroticism reinforces a person\u2019s stress responses, and highly neurotic individuals are more vulnerable to stress . NeurotiFinally, this study enriches the applicability of the frustration-aggression theory among contemporary university students. It provides empirical support for this theory and explores the interactive interface between online and offline environments. Family incivility is a low-intensity and inconspicuous stressful life event . TherefoThis research also has some practical implications. 
According to the findings of this study, there was a positive correlation between family incivility, negative emotions, and cyberbullying perpetration, suggesting that parents should avoid negative family interactions, such as neglect, rejection and probing into privacy, and establish a respectful, harmonious and intimate family relationship to restrain individual negative emotions and cyberbullying perpetration. Moreover, university students\u2019 mental health is closely related to their experiences growing up. The research reveals that the lasting influence of negative family interaction on university students is hardly weakened even after they have left their families to live in a new environment. Therefore, psychological education should be united with students\u2019 families, which is beneficial to prevent university students\u2019 negative emotions at the source and intervene efficiently in the vicious cycle of cyberbullying at university. Finally, the findings suggest that neuroticism plays an important role in how individuals interact with stressful life events. Highly neurotic individuals are more vulnerable to pressure events and prone to cyberbullying. Therefore, university students with high levels of neuroticism should learn to manage their emotions and maintain emotional stability, to alleviate the negative emotions caused by family incivility. The current study still has several limitations. First, this study is a cross-sectional study rather than a longitudinal one, so we can hardly evaluate causal relationships among the variables. As reported by Secondly, to preliminarily reflect the impact of family incivility on the mental health of Chinese university students, this study takes negative emotions as an overall intermediary variable, but to some extent fails to reflect which of the three dimensions is most strongly affected by family incivility. However, anxiety and stress are phenomenologically different . 
Future Finally, to improve the theoretical construction and practical application of incivility in the frustration-aggression model , the infWe investigated the correlations between family incivility and cyberbullying perpetration among Chinese university students and examined the moderated mediation model of negative emotions and neuroticism. Overall, family incivility was positively correlated with negative emotions and cyberbullying perpetration among university students. Negative emotion played a mediating role in the influence of family incivility on cyberbullying perpetration. Neuroticism moderated the impact of family incivility on both negative emotions and cyberbullying perpetration. High levels of neuroticism can increase the impact of family incivility on cyberbullying perpetration and on negative emotions, while low levels of neuroticism had no such effect on these relationships. This study provides insight into how family incivility affects university students\u2019 negative emotions and aggression. It also constructs a theoretical model for how family incivility affects the development of university students. The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. The studies involving human participants were reviewed and approved by the Ethics Committee of the Faculty of Psychology, Ningbo University. Written informed consent to participate in this study was provided by the participants and the participants\u2019 legal guardian/next of kin. Written informed consent was obtained from the individual(s), and minor(s)\u2019 legal guardian/next of kin, for the publication of any potentially identifiable images or data included in this article. JG, JX, and FL designed the work. JM and JW analyzed the data. JX drafted the manuscript. JG, FL, and LW revised the manuscript. 
All authors contributed to the article and approved the submitted version."}
{"text": "However, Corynebacterium diphtheriae cells lacking mdbA are viable at 30 \u00b0C, suggesting the presence of alternative oxidoreductase(s) compensating for the loss of mdbA. Using genetic suppressor, structural, and biochemical analyses, we provide evidence to support that C. diphtheriae encodes TsdA as a compensatory thiol-disulfide oxidoreductase safeguarding oxidative protein folding in this actinobacterium against thermal stress. This study expands our understanding of oxidative protein folding mechanisms in the understudied Actinobacteria. Oxidative protein folding via disulfide bond formation is an important process in bacteria, although it can be dispensable in various organisms. In many gram-positive Actinobacteria, including Actinomyces oris and Corynebacterium matruchotii, the conserved thiol-disulfide oxidoreductase MdbA that catalyzes oxidative folding of exported proteins is essential for bacterial viability by an unidentified mechanism. Intriguingly, in Corynebacterium diphtheriae, the deletion of mdbA blocks cell growth only at 37 \u00b0C but not at 30 \u00b0C, suggesting the presence of alternative oxidoreductase enzyme(s). By isolating spontaneous thermotolerant revertants of the mdbA mutant at 37 \u00b0C, we obtained genetic suppressors, all mapped to a single T-to-G mutation within the promoter region of tsdA, causing its elevated expression. Strikingly, increased expression of tsdA\u2014via suppressor mutations or a constitutive promoter\u2014rescues the pilus assembly and toxin production defects of this mutant, hence compensating for the loss of mdbA. Structural, genetic, and biochemical analyses demonstrated TsdA is a membrane-tethered thiol-disulfide oxidoreductase with a conserved CxxC motif that can substitute for MdbA in mediating oxidative folding of pilin and toxin substrates. 
Together with our observation that tsdA expression is upregulated at nonpermissive temperature (40 \u00b0C) in wild-type cells, we posit that TsdA has evolved as a compensatory thiol-disulfide oxidoreductase that safeguards oxidative protein folding in C. diphtheriae against thermal stress.In many gram-positive Actinobacteria, including Escherichia coli DsbA and DsbB proteins constitute a Dsb-forming complex, with the periplasmic thiol-disulfide oxidoreductase DsbA catalyzing Dsb formation in cysteine-containing substrates formation , 2. In tbstrates while thbstrates , 7. DsbBbstrates . Like PDbstrates , 10. Intbstrates . In contresidues , 12, sugA. oris is a major protein folding machine that catalyzes Dsb formation in pilins, which form covalently linked polymers that are essential for polymicrobial interactions and biofilm formation , whose crystal structure harbors a thioredoxin-like fold, an extended \u03b1-helical domain, and a CxxC motif in its active site production, and virulence, MdbA is believed to have broad substrate specificity and constitute a major Dsb-forming machine in this organism , whose expression was upregulated in three independent suppressor mutants, as a thiol-disulfide oxidoreductase capable of replacing MdbA and mediating Dsb formation in both pilin and DT as model substrates. Given our finding that tsdA expression is upregulated under heat stress, we propose that TsdA is a stress-adaptive thiol-disulfide oxidoreductase that protects corynebacteria under stress conditions.To test our hypothesis, we adopted a classic suppressor screen and selected thermotolerant revertants of the mdbA mutant (\u0394mdbA) of C. diphtheriae is viable at 30 \u00b0C were recovered. In subsequent tests, these suppressors retained the thermotolerant phenotype and grew like wild-type cells at the nonpermissive temperature , a phenotype indicative of cell division defects as previously reported A. It is forming) E 15). F. 
FmdbA itsdA, we identified the transcriptional start site (TSS) of tsdA by implementing Rapid Amplification of cDNA Ends (5\u2032-RACE) with total RNA samples isolated from the wild-type and S1 strains. 5\u2032-RACE PCR products were analyzed by Sanger sequencing, revealing the same TSS of tsdA in both samples located at the 15th nucleotide upstream of the start codon Adenine-Uracil-Guanine (AUG) . This is consistent with our previous findings using RNA-seq analysis or a constitutive promoter (pTsdA*) and electroporated them into the \u0394mdbA mutant. By immunoblotting of protein samples isolated from culture medium (M) and cell wall (W) fractions of these strains, we observed that the \u0394mdbA mutant harboring pT\u2192G or pTsdA* produced SpaA polymers at a level similar to the wild-type strain or the tsdA promoter with T-to-G mutation (pT2G-tsdA). Each vector was electroporated into the tsdA mutant and promoter activity was determined by fluorescence microscopy. As expected, cells expressing GFP from the pT2G-tsdA promoter showed a significantly higher signal of GFP fluorescence, in contrast to cells expressing GFP from the wild-type tsdA promoter (tsdA promoter were exposed to heat (40 \u00b0C) for 30 min . These results suggest that TsdA may function as a compensatory thiol-disulfide oxidoreductase under heat stress conditions. Given that the suppressors S1, S2, and S3 are thermotolerant as opposed to the \u0394promoter F and G. r 30 min F and G. reatment H, and wereatment I. At thitsdA can rescue the pleiotropic defect of C. diphtheriae cells devoid of the primary thiol-disulfide oxidoreductase MdbA suggested that TsdA has Dsb-forming capability. To reveal the structural basis for the potential thiol-disulfide oxidoreductase activity of TsdA, we determined the C. diphtheriae wild-type TsdA structure by X-ray crystallography, diffracted to 1.45 \u00c5 atomic resolution with R-work and R-free factors equal to 11.4 and 15.3%, respectively . 
The structure is a single continuous amino acid chain covering residues 38 to 282 of the protein (full-length MW of 31 kDa). Despite using a recombinant protein lacking its transmembrane domain . The thioredoxin-like domain is separated by the amino acid segment 161 to 229 of the \u03b1-helical domain. The conserved catalytic CPFC motif, residues 126 to 129, forms the active site together with a conserved cis-Pro loop (residues T248 and P249) D. The C1nd P249) . The cysnd P249) . In addiSI\u00a0Appendix, Tables\u00a0S1 and S2) , with RMSD equal to 0.760 \u00c5 for 234 aligned residues. The active site configuration is exactly the same as the one in our wild-type TsdA protein structure. The main difference between our structures and 4PWO is the positioning of the N-terminus, which is moved farther away from the protein main body by 2 to 4 \u00c5 . There are also slight changes in conformation of the 66 to 77 and 245 to 247 loops; probably resulting from slightly different crystallization conditions and crystal packing, these differences are unlikely to have any biological impact. The TsdA structure very closely resembles the TsdA-C129S protein structure, obtained from the same crystallization conditions, and diffracted to 1.10 \u00c5 atomic resolution with R-work and R-free factors equal to 11.6 and 13.4%, respectively ( and S2) E. Ser129 and S2) D. The re and S2) . According to DALI analysis (Bacillus subtilis oxidoreductase BdbD despite low amino acid identity of 15.4% (PDB: 3EU3) . While BdbD contains a novel metal site . The alignment of those structures has a Z score and RMSD equal to 20.8 and 2.0, respectively, for 186 equivalent residues. The next closest structural homologs are Silicibacter pomeroyi DSS-3 protein (PDB: 3GYK) and Wolbachia pipientis thiol-disulfide exchange protein DsbA2. 
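Structural comparisons like the DALI alignments above are scored by RMSD after optimal rigid superposition of matched residues. A minimal sketch of that computation (the Kabsch algorithm) on paired coordinate arrays is shown below; the coordinates are randomly generated placeholders, not actual TsdA atoms.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two Nx3 coordinate sets after optimal rigid superposition
    (Kabsch algorithm): center both, find the best rotation via SVD, compare."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P)))

# Example: a rotated, translated copy of a toy structure superposes with ~0 RMSD
rng = np.random.default_rng(1)
P = rng.standard_normal((234, 3))            # 234 aligned residues, as in the text
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([5.0, -2.0, 1.0])    # rigid-body transform of P
print(round(kabsch_rmsd(P, Q), 6))
```

Real structure-comparison tools add residue matching and outlier rejection on top of this core superposition step.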
As expected, a TsdA mutant with both cysteines mutated to alanine failed to reduce insulin is similar to the extended sequence, but it lacks the TG dinucleotide at the proper position. Thus, the T-to-G substitution within the promoter might create this properly placed TG element (TTTGGTATTCT), resulting in higher levels of tsdA transcription than the wild-type. The conditional lethality of the isolates \u00a0and\u00a02. SA mutant . Though bacteria \u201334. The complex . The -10tsdA in C. diphtheriae containing mdbA, demonstrating dispensability of TsdA function, our multiple attempts to construct a \u0394tsdA \u0394mdbA double mutant were unsuccessful. Thus, the basal expression of tsdA is required to maintain the viability of the mdbA mutant at 30 \u00b0C. The failure to generate the double tsdA/mdbA mutant also suggests that TsdA may be the only factor that can substitute for MdbA. Consistent with this scenario, overexpression of tsdA in the mdbA mutant by reconstituting the vector-borne T-to-G mutation or using an arabinose-inducible promoter in the mdbA mutant rescues the defects in pilus assembly and DT production or heart infusion agar (HIA) plates at 30 \u00b0C or 37 \u00b0C. E. coli DH5\u03b1 and S17-1, used for molecular cloning and gene deletions, respectively, were grown in Luria Broth (LB) at 37 \u00b0C. Kanamycin (Kan) or ampicillin (Amp) was added at 35 \u03bcg/mL or 100 \u03bcg/mL when required. Polyclonal antibodies were raised against TsdA in rabbits as previously described , and the resulting product was digested with HindIII and 5\u2032\u00a0phosphorylated. An arabinose-inducible promoter was PCR-amplified from pBad33 using primers araC_F_PstI and araC_R , and the PCR product was digested with PstI. The two fragments were then ligated into pCGL0243 precut with PstI and HindIII to form pTsdA*. To generate pT\u2192G, a fragment encompassing the promoter region of tsdA and its coding sequence was PCR-amplified from genomic DNA isolated from the C. 
diphtheriae suppressor strain S1, using Phusion DNA polymerase . The fragments were cloned into the pCGL0243 vector. The generated plasmids were individually electroporated into the C.\u00a0diphtheriae mdbA mutant strain. pTsdA* and pT\u2192G. A fragment encompassing the ribosome binding site and coding sequence of tsdA-sfGFP and pPT2G-sfGFP. The primers PtsdA-HindIII-F and PtsdA-GFP-R were used to PCR-amplify the tsdA promoter region from genomic DNA obtained from the wild-type C. diphtheriae NCTC13129 or suppressor S1, using Phusion DNA polymerase . Similarly, the sfGFP coding sequence was PCR-amplified with pBsk-sfGFP as template using primers sfGFP-F and sfGFP-BamHI-R . Overlapping PCR was employed to fuse the two generated fragments. The joined fragment was cloned into pCGL0243, and the generated plasmid was electroporated into appropriate C. diphtheriae strains. pPSISI\u00a0Appendix, Table\u00a0S4) were used to PCR-amplify from the genomic DNA of the C. diphtheriae NCTC13129 a fragment coding for residues 33 to 289. The resulting PCR products were cloned into pMCSG7 using ligation-independent cloning as previously reported (E. coli BL21 (DE3) for protein expression. This plasmid was used to generate recombinant vectors expressing TsdA variants with mutations in the CxxC motif by site-directed mutagenesis (see below). Recombinant vectors using pMCSG7. To generate a vector expressing a recombinant TsdA protein with a His-tag replacing the N-terminal transmembrane domain, primers H6-TsdA-F and H6-TsdA-R and linked together using overlapping PCR. The 2-kb fragment was then cloned into pK19mobsacB , and the resulting plasmid was introduced into E. coli S17-1 for conjugation with C. diphtheriae. Co-integrates resulting from a single crossover event were selected for growth on kanamycin (50 \u03bcg/mL) and nalidixic acid (35 \u03bcg/mL) plates. 
Loss of the recombinant plasmid by a second cross-over event resulting in wild-type and mutant alleles was selected for growth on HIA plates containing 10% sucrose. Deletion mutants were identified by PCR and immunoblotting with \u03b1-TsdA.A nonpolar, in-frame deletion mutant of escribed , 41. BriSI\u00a0Appendix, Table\u00a0S4) carrying the target mutations were 5\u2032-phosphorylated and used in PCR-amplification using pMCGS7-TsdA or pTsdA* as templates and Phusion DNA polymerase (NEB). The resulting linear plasmids were purified, ligated, and transformed into E. coli DH5\u03b1. The targeted mutations were confirmed by DNA sequencing.To construct Cys to Ala or Ser mutations at position 126 and 129 of TsdA, overlapping primers . With Nextera technology, the genomic DNA was simultaneously fragmented and tagged with sequencing adapters in a single experimental step. The DNA libraries were sequenced in a 2 \u00d7\u00a0250 nucleotide paired-end run using a MiSeq reagent kit v2 (500 cycles) and a MiSeq desktop sequencer (Illumina). This shotgun genome sequencing resulted in the following numbers of paired reads: 1,781,258 (control), 1,903,102 (S1), 1,278,266 (S2), and 988,758 (S3). The resulting reads were mapped to the C. diphtheriae NCTC 13129 reference genome (GenBank accession number NC_002935.2) using the exact alignment program SARUMAN (Sequencing-ready libraries were constructed with purified genomic DNA of -Wunsch) . Single tsdA TSS was performed using the Invitrogen 5\u2032 RACE system for rapid amplification of cDNA ends. Briefly, first-strand cDNA was PCR amplified from total mRNA obtained from WT and \u0394mdbA strains, using primer GSP1-tsdA , which annealed at the 3\u2032 end of tsdA mRNA, and SuperScriptTM II RT, and generated cDNA was purified. Subsequently, a homopolymeric tail was added to the cDNA 3\u2032 end using dCTP and terminal deoxynucleotidyl transferase (TdT). 
dC-tailed cDNA was PCR-amplified using Taq DNA polymerase (Fisher Scientific), with GSP2-tsdA primer and abridged anchor primer (AAP) provided by the manufacturer. This PCR product was diluted (0.1%) and used in a nested PCR reaction using GSP3-tsdA primer and Abridged Universal Amplification Primer (AUAP) provided by the manufacturer, to enrich for specific PCR products. The obtained 5\u2032 RACE PCR products were characterized by DNA sequencing to identify specific tsdA TSS(s). Identification of the C. diphtheriae grown at 30 \u00b0C were normalized to an OD600 of 1.0, two volumes of RNA Protect\u00ae Bacteria Reagent (Qiagen) were added, and cells were incubated at room temperature for 5 min. Then cells were collected by centrifugation, washed once with PBS, resuspended in RLT buffer containing \u03b2-mercaptoethanol (BME) and lysed by mechanical disruption with 0.1-mm silica spheres (MP Bio) in a ribolyser (Hybaid). Total RNA from cell lysates was extracted using the RNeasy Mini Kit (Qiagen). Purified total RNA was treated with DNase I to digest remaining DNA. After the enzymatic reaction, RNA was cleaned using the RNeasy MinElute Cleanup Kit (Qiagen). cDNA was synthesized with the SuperScript\u2122 II RT First-Strand Synthesis System (Invitrogen). For qRT-PCRs, cDNA was mixed with iTaq SYBR green supermix (Bio-Rad), along with appropriate primer sets . Cycle threshold (CT) values were determined, and the 16S rRNA gene was used as a control to calculate relative mRNA expression levels by the 2\u2212\u0394\u0394CT\u00a0method acid (EDDA) was added to the cultures at the concentration of 0.05 mg/mL. After 5 h of incubation at 30 \u00b0C, the culture supernatant was collected for TCA precipitation and acetone wash. For DT production, overnight cultures of 600 of 1.0, and subjected to cell fractionation as previously reported cells harboring recombinant plasmids were cultured in LB medium supplemented with ampicillin (100 \u00b5g/mL) at 37 \u00b0C. 
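The relative-expression calculation mentioned above (the 2^-ΔΔCT method, normalizing target CT values to the 16S rRNA control and then to the control condition) is simple to sketch. The CT values below are hypothetical and purely illustrative.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCT method: normalize the target gene's CT to a reference
    gene (16S rRNA in the study), then to the control condition."""
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-dd_ct)

# Hypothetical example: the target crosses the detection threshold 2 cycles
# earlier (relative to 16S) in treated cells than in control cells,
# i.e. a 4-fold upregulation
print(relative_expression(ct_target=22.0, ct_ref=10.0,
                          ct_target_ctrl=24.0, ct_ref_ctrl=10.0))  # 4.0
```

Because each PCR cycle roughly doubles the product, a ΔΔCT of -2 corresponds to 2^2 = 4-fold higher relative expression.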
When cells reached an OD600 of 0.8, isopropyl \u03b2-D-1-thiogalactopyranoside (IPTG) was added to a final concentration of 0.5 mM for overnight induction at 18 \u00b0C. Cells were harvested by centrifugation and lysed by sonication, and cleared lysates were obtained by centrifugation. Recombinant proteins were purified by affinity chromatography using a Ni-NTA Sepharose column (Qiagen), followed by an Econo-Pac 10DG column (Bio-Rad) in 100 mM potassium acetate buffer, pH 7.5, and stored at \u221220 \u00b0C for further experimentation.His-tagged MdbA or TsdA proteins were purified according to a published procedure with some modification . BrieflyFor crystallization, purified TsdA proteins were digested with 0.15 mg TEV protease per 20 mg of purified protein for 16 h at 4 \u00b0C, and then passed through a Ni-NTA column to remove both the TEV protease and the cleaved N-terminal tags. The final step of purification was gel filtration on a HiLoad 16/60 Superdex 200 pg column in 10 mM HEPES buffer, pH 7.5, 200 mM NaCl, and 1 mM DTT. The protein was concentrated on Amicon Ultracel 10K centrifugal filters (Millipore) up to a 60 mg/mL concentration.The initial crystallization condition was determined with a sparse crystallization matrix at 4 \u00b0C and 16 \u00b0C using the sitting-drop vapor-diffusion technique with the MCSG crystallization suite (Microlytic) and the Pi-minimal and Pi-PEG screens .Single-wavelength X-ray diffraction data were collected at 100 K at the 19-ID and 19BM beamlines of the Structural Biology Center at the A. To characterize tsdA transcriptional regulation, expression of tsdA was measured using an sfGFP transcriptional reporter. Overnight cultures of C. diphtheriae strains harboring individual sfGFP transcriptional reporters were diluted and grown at 37 \u00b0C until reaching mid-log phase; at this point a subset of bacterial cultures was incubated at 40 \u00b0C for 30 min.
To quantify fluorescence signal, cells harvested by centrifugation were washed with PBS and normalized to an OD600 of 0.5. Cell suspension aliquots were dispensed into 96-well, high-binding, clear F-bottom (chimney well), black microplates (Greiner Bio-One). Fluorescence was measured using excitation/emission wavelengths of 485 nm/507 nm with a Tecan M1000 plate reader. Purified sfGFP was used as a gain reference, and the fluorescence of the wild-type strain carrying no plasmid was used as the background signal. In a parallel experiment, cells were placed on agarose (1.5%) pads and viewed by an Olympus IX81-ZDC inverted fluorescence microscope using excitation/emission wavelengths of 504 nm/510 nm.The reductase activity assay was adapted from a previously established protocol . BrieflyDsb formation of FimA was performed as previously described with some modification. Briefly Results are presented as averages from three independent experiments and SD. Statistical analysis was performed by unpaired t test with Welch\u2019s correction using GraphPad Prism 5.0 . Appendix 01 (PDF)"}
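The background correction in the fluorescence assay above (subtracting the signal of the wild-type strain carrying no plasmid from OD600-normalized readings) can be sketched as follows; all numeric values are hypothetical, not data from the study:

```python
# Hedged sketch of the background correction described in the protocol:
# subtract the fluorescence of the wild-type, no-plasmid strain from each
# reporter reading. All numbers are hypothetical placeholders.

def corrected_fluorescence(raw_rfu, background_rfu):
    """Background-subtracted reporter signal for cultures already
    normalized to the same OD600 (0.5 in the protocol above);
    clamped at zero so noise cannot produce negative signal."""
    return max(raw_rfu - background_rfu, 0.0)

background = 120.0                 # WT strain, no plasmid
readings = [1500.0, 130.0, 90.0]   # hypothetical reporter strains
print([corrected_fluorescence(r, background) for r in readings])
# [1380.0, 10.0, 0.0]
```

Clamping at zero is a design choice for readability of downstream ratios; some workflows instead keep negative values to preserve the noise distribution.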
Traditional It has also been used to assess treatment response, which can distinguish responders from non-responders before morphological alterations occur [18F]FDG is met with limited success in acquiring images with adequate contrast in brain tumors fluoride, a leaving group and an activating group in the ortho or para position, with respect to the leaving group, is usually necessary in the precursor, which limits the substrate scope. Moreover, amino, hydroxyl, and other active groups in AAs will interfere with the SNAr reaction; thus, protection and deprotection steps are usually required, which may prolong synthesis time. Therefore, new synthesis strategies are always needed for the 18F-labeling of AAs to expand substrate scope and shorten synthesis time.The advantageous nuclear-decay properties of fluorine-18 methods and copper-mediated strategies using pinacol boronate (BPin) ester or boronic acid precursors are currently the most commonly used methods. Light-mediated radiofluorination strategies allow for the formation of 18F-labeled compounds at room temperature and are known to have good functional group tolerance. Fluoride bond formation in heteroatoms such as B-18F, Si-18F, and P-18F is possible at room temperature using 18F-labeling in aqueous media F-DOPA ([18F]1), an AA analog PET tracer used to image the presynaptic dopaminergic system in the brain, was first radiosynthesized with [18F]F2 in 1984; however, its low radiochemical yield limits its use in routine production [18F]1 via an electrophilic reaction , is a difficult task when traditional Sreported ,29,30,31reported ,33. The 18F bonds, and BPin esters and boronic acids were the most popular substrates to label aromatic AAs using CMRF [18F]1 was successfully synthesized with high RCY from an arylBPin precursor , particularly, has emerged as a powerful strategy for constructing C\u2013ing CMRF . In 2014recursor . 
In a tycted RCY .18F]3a and [18F]3b using a similar CMRF reaction to monitor IDO1 activity in vivo , an enzyme that maintains normal tryptophan homeostasis ,37. Cons in vivo a. [18F]3ma model b; howeveiotracer . Howeveriotracer .18F]KF with diverse aromatic substrates. Two different AA-derived substrates ([18F]4a and [18F]4b) were synthesized in mild conditions with moderate yields. Additionally, [18F]4b was radiosynthesized using an automated synthesis module with high Am (4 \u00b1 2 Ci/\u00b5mol) (The CMRF of aryl(mesityl)iodonium salts is also highly attractive for some merits, including mild reaction conditions, high regioselectivity, etc. . In 2014Ci/\u00b5mol) 32]..18F]KF w18F]4c (2-[18F]FPhe) was also synthesized using this protocol using \u201clow base\u201d or \u201cminimalist\u201d conditions and [18F]5c (protected [18F]1) were obtained with high radiochemical conversion (RCC) using this approach (Trimethyl(phenyl)tin is another attractive precursor for C\u2013[approach 42]..18F]F bo18F]1 and [18F]5d\u20135f were synthesized in two steps using automation module on a preparative scale with isolated RCYs of 32\u201354% tin as a model substrate to label aromatic AAs. [f 32\u201354% 43]..18F]1 anIn 2020, Craig and coworkers reported an alcohol-enhanced CMRF reaction of BPin-substituted Ni\u2013BPX\u2013AAA complexes for the synthesis of diverse radiolabeled AAs . Conveni\u03c0-complex, and [18F]4a was auto-synthesized within 80 min in a 24% isolated RCY (18F]7a ([18F]fluorophenylglutamine) and [18F]7b ([18F]fluorobiphenylglutamine), were also labeled and evaluated using the same method; however, both tracers had a low affinity toward the rat ASCT-2 transporter in vitro and low uptake in the F98 rat xenograft in vivo were 18F-labeled in one step using this method, with the RCC ranging from 12% to 67% for PET imaging . 
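Yields and molar activities in this review appear variously as RCY/RCC percentages and as Ci/\u00b5mol or GBq/\u00b5mol. Converting between these units and decay-correcting an activity is routine arithmetic given the ~109.8-min half-life of fluorine-18; the sketch below uses illustrative numbers only (the 60-min synthesis time is an assumption, not a value from the text):

```python
# Routine 18F bookkeeping: unit conversion and decay correction.
# Half-life of fluorine-18 is ~109.77 min; 1 Ci = 37 GBq exactly.
import math

T_HALF_F18 = 109.77  # minutes
GBQ_PER_CI = 37.0

def ci_to_gbq(ci):
    """Convert curies to gigabecquerels."""
    return ci * GBQ_PER_CI

def remaining_fraction(minutes):
    """Fraction of 18F activity left after `minutes` of decay."""
    return math.exp(-math.log(2) * minutes / T_HALF_F18)

print(ci_to_gbq(4))                      # 148.0  (4 Ci/umol = 148 GBq/umol)
print(round(remaining_fraction(60), 3))  # fraction left after a 60-min synthesis
```

Dividing an end-of-synthesis yield by `remaining_fraction(synthesis_time)` gives the decay-corrected RCY, which is why decay-corrected and non-decay-corrected figures for the same synthesis can differ substantially.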
Under illumination of a 3.5-W laser (450 nm) in a mixed system of 4:1 MeCN:t-BuOH with O2 as an oxidant, TEMPO as a redox cocatalyst, and [18F]F\u2212/TBAF as a fluoride source, [18F]1 and [18F]11a,11b were successfully labeled and obtained with good RCY -bearing mice compared with compound [18F]11b that exhibited nonspecific binding was also synthesized using this method with a 22.8% isolated RCY source and by using an organic oxidant rination a. This pated RCY b. This sotracers .3 was used as a phase-transfer agent and an acridinium-based photo oxidant (14) was used under 34-W blue LED irradiation to introduce fluorine-18 into electron-rich arenes. N-Boc-O-methyltyrosines and -phenylalanine ([18F]15a\u201315c) were also successfully radiofluorinated is a common functional group during the synthesis of pharmaceuticals. An increasing number of [18F]trifluoromethylation labeling methods have been reported and used for the radiosynthesis of AA analogs [18F]CuCF3 as the radiofluorination agent and boronic acids or iodides as a leaving group to successfully synthesize Boc/OMe-protected 4-[18F]trifluoromethylphenylalanine ([18F]16a) in high RCY up to 89% using two different strategies is a 18F]trifluoromethylation method to synthesize [18F]trifluoromethyl-L-tryptophan ([18F]15b); however, the radiochemical yield of this tracer was 6%, and molar activity was 0.76 GBq/\u03bcmol -mediated [GBq/\u03bcmol 57]..18F]trif18F-trifluoromethylated cysteine enantiomers as \u201cstructure-mimetic\u201d AA tracers for glioma PET imaging exhibited much lower uptake in most organs, especially in brain tissue, and a 3.81 \u00b1 0.23% ID/g tumor uptake was reported 45 min after injection and histopathology. The results showed that, compared with [18F]FDG and [18F]1, [18F]17 had the highest TNRs in the same orthotopic C6 glioma models, suggesting that the tracer may serve as a valuable tool in the diagnosis of gliomas . I. 
I18F-tr gliomas .18F-labeling with high stability in aqueous media, the development of the B\u2013F bond gained popularity [18F-19F isotope exchange reaction to label [18F]-BAA is quite simple, does not require HPLC purification, and provides good radiochemical yields and molar activity (>37 GBq/mmol). Furthermore, the biological similarity between the trifluoroborate (-BF3\u2212) and carboxylate groups (COO\u2212) was demonstrated. Cellular assays revealed that the uptake of [18F]-BAAs was AAT-mediated cell uptake, whereas in vivo studies showed high tumor-specific accumulation. Almost all AAs can be 18F-labeled similarly for imaging AA transporter activity. [18F]-Phe-BF3 ([18F]18a), as an analog of Phe, shows specific accumulation in U87MG xenografts. Unlike [18F]FDG, its uptake is low in the normal brain and inflamed regions was radiosynthesized and evaluated in a tumor-bearing animal model inoculated with 4T1 xenografts. Small-animal PET imaging showed visible tumor uptake and rapid renal clearance; however, high uptake by the bone indicates the unsuitability of [18F]18b for clinical translation [18F]FBQ-C2 ([18F]18c) was synthesized to improve the stability of [18F]18b. Stability studies indicated it to be more stable than [18F]18b both in vitro and in vivo, and no obvious bone uptake was determined (<1%ID/g). A competitive inhibition assay revealed that [18F]18c was taken up by cancer cells through the system ASC and N, which is similar to that used by Gln. Moreover, [18F]18c shows better accumulation in tumors than [18F]18b and [18F]FGln during PET imaging, thereby making it a promising PET tracer for tumor diagnosis . Z. Z18F-larference . A recenrference ,66. Thernslation . [18F]FBiagnosis .18F]-B-MET ([18F]18d) was the first 18F-labeled, methionine-based tracer to be synthesized and evaluated in three glioma tumor models using PET imaging. 
The results revealed that LAT-1 is responsible for tracer uptake in brain-tumor models, and the glioma tumor was rapidly detected by the tracer. Moreover, higher [18F]18d uptake was also found to colocalize with the enhancement in T1-enhanced MRI in an orthotopic U87 human glioma model ([ma model c. Its fa18F]18e ([18F]-FBY) was initially used as a theranostic for imaging-guided boron neutron capture therapy by evaluating the biodistribution of 18e using mouse tumor models [18F]18e to evaluate the safety and radiation dosimetry. The results showed that [18F]18e was cleared mainly through the renal system and was well tolerated by all healthy volunteers with no obvious adverse symptoms. Additionally, [18F]18e was capable of producing an obvious contrast in the glioma tumors of 13 patients with suspected primary or recurrent diffuse gliomas [18F]18e PET scan was performed on 35 patients with suspected malignant brain tumors for further diagnosis. The study group found that all primary glioblastoma, recurrent glioma, and metastatic brain tumor cells could significantly take up [18F]18e [18F]18e shows immense potential in the diagnosis, staging, and prognosis of patients with gliomas. In a follow-up study, an 18F-labeled alanine derivative ([18F]18f) was reported for cancer imaging. Good tumor contrast was achieved in xenograft tumor-bearing mice, and the tracer could distinguish tumors from inflammation in vivo ([r models . Six hea gliomas . In anot[18F]18e . Therefo in vivo d 73]..18F]18e \u2212 site-specific nucleophilic substitution reaction on phosphonates that can be used for the one-step 18F-labeling of biomolecules containing common active groups. 18F-labeled AA mimics such as [18F]19a ([18F]PFA-Phe) and [18F]19b ([18F]PFA-Leu) have also been radiosynthesized in this way with RCYs of 69% and 35%, respectively can be used as a potential prosthetic group to radiolabel AAs. 
AAs such as cysteine and tryptophan were successfully conjugated with [18F]20a to obtain yields between 39% and 73%. However, [18F]AA-ESFs showed low stability in rat serum over 2 h, and only [18F]20b had a purity of 12%. The other AA conjugates were completely defluorinated within 1 h (18F] fluorine bond . Th. Th18F]2ine bond .18F]fluorosulfates by sulfur fluoride exchange (SuFEx) between aryl fluorosulfate and [18F]fluoride. This method has been used for the 18F-radiolabeling of Tyr analogs ([18F]21) with 98% RCY indicated the high stability of [18F]21 against in vivo defluorination is not used for imaging AAT. It serves as a prosthetic group that incorporates into peptides and proteins for direct [18F]-fluoride labeling in the late stage -modified phenylalanine through te stage 81,82].,82.18F/118F labeling of AAs is still challenging using traditional SNAr or SN2 substitution reactions for the lack of reactive site. In addition, the complicated structure of the prosthetic group, which may impact the tracer\u2019s bioactivity, hampers its wide utility. Thus, novel radiofluorination methods with short synthesis time and safe agents are urgently needed ;Scale-up synthesis with automation module.As AA PET imaging plays a vital role in metabolic molecular imaging, fast and convenient labeling methods are crucial. However, the y needed . For exa18F radionuclide in a kit-like manner under mild conditions. Additional methods for the radiosynthesis of AA probes for clinical use are thus warranted.Given the continuous demand for novel AA PET radiopharmaceuticals in precision medicine, an ever-growing toolbox of radiofluorination methods is of importance to bridge the gap between the unmet clinical needs and the ongoing progress in modern fluorine-18 chemistry. Radiochemists are always searching for simplified radiochemical methods that are able to introduce the"} +{"text": "The depth of response to platinum in urothelial neoplasm tissues varies greatly. 
Biomarkers that have practical value in prognosis stratification are increasingly needed. Our study aimed to select a set of BC (bladder cancer)-related genes involved in both platinum resistance and survival, and then use these genes to establish a prognostic model.Platinum resistance-related DEGs and tumorigenesis-related DEGs were identified. The ten most predictive co-DEGs were acquired, followed by construction of a risk score model. Survival analysis and ROC (receiver operating characteristic) plots were used to evaluate the predictive accuracy. Combined with age and tumor stage, a nomogram was generated to create a graphical representation of survival rates at 1, 3, 5, and 8 years in BC patients. The prognostic performance was validated in three independent BC datasets with platinum-based chemotherapy. The potential mechanism was explored by enrichment analysis.PPP2R2B, TSPAN7, ATAD3C, SYT15, SAPCD1, AKR1B1, TCHH, AKAP12, AGLN3, and IGF2 were selected for our prognostic model. Patients in the high- and low-risk groups exhibited a significant survival difference, with HR (hazard ratio) = 2.7 (p < 0.0001). The prognostic nomogram for predicting 3-year OS in BC patients could yield an AUC (area under the curve) of 0.819. In the external validation dataset, the risk score also had robust predictive ability.A prognostic model derived from platinum resistance-related genes was constructed, and we confirmed its value in predicting platinum-based chemotherapy benefit and overall survival for BC patients. The model might assist in therapeutic decisions for bladder malignancy. There were 573,278 new bladder cancer cases and 212,536 deaths worldwide in 2020 (Bladder cancer (BC) is the 10 in 2020 . PlatinuRecent studies have discovered a series of biomarkers for platinum resistance, such as FOXC1 and CircIn this study, we aimed to identify essential bladder cancer-related genes involved in both platinum-based chemotherapy resistance and survival.
Based on these genes, we established a risk score model and stratified patients into different risk groups. The robust prognostic ability of this model was verified in three independent BC datasets with platinum-based chemotherapy. Additionally, by integrating clinical features and risk score, a nomogram with enhanced prediction power was built. Besides, in an attempt to have a deeper understanding of this model, we used multiple databases to investigate the expression, functional interaction, and mutation of these genes. Enrichment analyses were carried out to further explore the possible mechanisms. As far as we know, this is the first prognostic model for predicting outcomes and discriminating responses to platinum-based chemotherapy in BC patients. Our model would play an important role in prognosis stratification and assisting individualized treatment.https://portal.gdc.cancer.gov/). Genes with low expression were excluded. Ensemble ID was converted to gene symbol by annotation file downloaded from the GENCODE website (https://www.gencodegenes.org/). In validation data, \u201cbladder cancer\u201d and \u201cchemotherapy resistance\u201d were used as the keywords for searching gene chips from the Gene Expression Omnibus (GEO) (https://www.ncbi.nlm.nih.gov/geo/). The inclusion criteria were as follows: (1) the biospecimens were gained from human primary bladder cells or tissues; (2) containing transcriptomic data; (3) including at least 10 samples in each group; (4) the survival information was available; (5) enrolling patients that had undergone platinum-based chemotherapy; (6) no previous or concomitant immunotherapy. Two independent GEO datasets (GSE13507 and GSE31684) (Table-S1 (Click here). Patients with progressive disease were defined as platinum resistance, while patients with partial response and complete response were defined as platinum sensitive.The overall design of this study is shown in SE31684) \u201317 that SE31684) . 
Entrez The clinical information of patients enrolled was obtained by using the \u201cTCGAbiolinks\u201d package \u201322, by fThe expression data were log2 transformed to make the hazard ratio more significant. Of the 411 cases, 406 unique tumor biospecimens had associated survival information. Based on the criterion that the statistical significance threshold was a log-rank P value < 0.05, a set of DEGs significantly related to prognosis was derived from univariate Cox regression. \u201cGlmnet\u201d package , 25 was , where \u03b2i stands for the regression coefficient of gene i and Expi stands for the expression level of gene i. A forest plot outlined the hazard ratios (HR) and confidence intervals of the 10 genes, and a survival map of them was plotted by the GEPIA 2 website (http://gepia2.cancer-pku.cn). Multicollinearity among them was tested by variance inflation factors (VIF) and correlation coefficients. Univariable and multivariable Cox regression were performed to weigh the predictive strength of the risk score and other clinical parameters . Some characteristics with very small numbers, for example, stage i, T0, and T1, were merged with their connected groups, and the results were summarized in forest plots. All patients in each dataset were classified into high-risk and low-risk groups; the cut-off points were based on the median risk score (TCGA and GSE13507) or produced by X-tile software (GSE31684 and GSE14208). Survival risk differences between the high- and low-risk groups were demonstrated by Kaplan-Meier survival analysis and the log-rank test. Time-dependent receiver operating characteristic (ROC) curves were applied to evaluate the prognostic performance of the gene-based risk score with the \u201cTimeROC\u201d package . PatientBased on the \u201crms\u201d package, salient clinical parameters in multivariate Cox regression and the risk score were enrolled into a nomogram model to predict 1-, 3-, 5-, and 8-year overall survival (OS) of BC patients.
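The risk-score formula above (the sum over the ten genes of \u03b2i \u00d7 Expi) is plain arithmetic once the LASSO/stepwise coefficients are fixed. The analysis itself was done in R (\u201cglmnet\u201d and related packages); the Python sketch below only illustrates the computation, with hypothetical gene names, coefficients, and log2 expression values:

```python
# Illustration of the risk-score linear predictor: sum_i beta_i * Exp_i.
# Gene names, coefficients, and expression values are hypothetical,
# not the fitted values from the study.

def risk_score(coefficients, expression):
    """Linear predictor over genes: sum of beta_i * log2 expression_i."""
    return sum(coefficients[g] * expression[g] for g in coefficients)

betas = {"GENE_A": 0.42, "GENE_B": -0.31, "GENE_C": 0.15}
patient = {"GENE_A": 5.0, "GENE_B": 2.0, "GENE_C": 8.0}

score = risk_score(betas, patient)
print(round(score, 2))  # 2.68
```

Patients are then split into high- and low-risk groups by comparing each score to the cohort median (or an X-tile-derived cut-off), exactly as described for the four datasets above.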
The discriminatory capacity of the nomogram model was estimated by ROC curves and quantified by the area under the curve (AUC) and the concordance index (C-index). The sensitivity and specificity of different models were compared with the \u201cplotROC\u201d package. A calibration plot revealed the predictive accuracy of the nomogram by comparing predicted survival rates with observed survival rates at different time points; the resampling value was set to 1000 to reduce overfitting. Decision curve analysis (DCA) illustrated the clinical net benefit of the nomogram model and other prognostic indicators with the \u201cdcurves\u201d package, which demonstrated the clinical utility of the nomogram.http://www.gsea-msigdb.org/gsea/msigdb/index.jsp) (We used the \u201cclusterProfiler\u201d package to reveadex.jsp) \u201331. Sign The COREMINE Medical website (http://www.coremine.com/medical) is a domain-specific information platform that mainly focuses on biomedical research and drug discovery. Employing text mining, it allows users to navigate relationships among research contents from the latest published scientific literature. The keywords \u201cneoplasms\u201d, \u201cdrug resistance\u201d, and \u201ccisplatin\u201d were combined with the 10 genes as inputs into the search field for co-occurrence analysis, and then a graphic network of them was generated. The GeneMANIA website (http://genemania.org) is a resource-rich tool for generating hypotheses about co-expression and functional interactions among genes . We impohttps://www.proteinatlas.org) is a database focusing on genome-wide analysis of human proteins, which contains expression data and immunohistochemically (IHC) stained tissue images of each protein-coding gene, establishing a correlation between tumor development and specific gene expression of 17 major cancer types.
The expression levels of the 10 genes, as well as the subcellular localization of their products, were compared between urothelial cancer and normal bladder using IHC images downloaded from the \u201cPathology\u201d and \u201cTissue\u201d sections of the THPA website, respectively. For the same gene, in order to make the results more comparable, we chose images generated by the identical antibody from similar patients. We also obtained the 5-year survival rates of the high-expression and low-expression groups for each protein to validate their impact on cancer patient survival.The Human Protein Atlas (THPA) is a visualization tool for cancer genomics with large data sources to quantify the constitution of 22 human leukocyte types by using non-negative matrix factorization (NMF) algorithm . Differences among the GEO datasets and the TCGA dataset were assessed by one-way Analysis of Variance (ANOVA) and the Wald test, respectively. For continuous variables, differences between two groups were examined by the Wilcoxon-Mann-Whitney (WMW) test or Student's t-test. For categorical variables, the Chi-square test or Fisher's exact test was used, depending on the proportion of groups containing fewer than 5 patients. A P-value < 0.05 was the statistical significance threshold for all analyses, with p < 0.1, *p < 0.05, **p < 0.01, ***p < 0.001.Statistical analysis was carried out using R software . Venn diagram . LASSO Cox regression was used to reduce dimensions and prevent overfitting . These were all consistent with the HRs of each gene in stepwise regression.We consulted the COREMINE Medical website about these genes. The results exhibited that, except for SYT15, the remaining 9 genes participate in oncogenicity and platinum-based chemotherapy resistance directly or indirectly . Functio
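The categorical-variable rule above (Chi-square unless groups are too small, otherwise Fisher's exact test) is commonly operationalized via expected cell counts. The study itself performed these tests in R; the sketch below is an illustration of that convention with hypothetical counts, including a from-scratch two-sided Fisher exact p-value for a 2\u00d72 table:

```python
# Illustration of the chi-square vs. Fisher decision rule: fall back to
# Fisher's exact test when any expected cell count is below 5.
# All counts are hypothetical.
from math import comb

def expected_counts(table):
    """Expected counts under independence for a contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return [[r * c / n for c in cols] for r in rows]

def choose_test(table):
    exp = expected_counts(table)
    small = any(v < 5 for row in exp for v in row)
    return "fisher" if small else "chi-square"

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables (same margins)
    no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_of(x):  # P(cell (0,0) == x) under fixed margins
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_of(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p for p in (p_of(x) for x in range(lo, hi + 1))
               if p <= p_obs + 1e-12)

table = [[2, 8], [10, 5]]
print(choose_test(table))  # fisher (smallest expected count is 4.8)
```

With larger, balanced counts (e.g. all cells 20), `choose_test` returns `"chi-square"`, matching the rule of thumb stated in the methods.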
In order to investigate the prognostic effect of the 10 genes as a whole, we computed risk scores for every patient based on the regression coefficients as follows:All patients in the training dataset (TCGA-BLCA dataset) were divided into a high-risk group and a low-risk group based on the median risk score as the critical value. The Kaplan-Meier (K-M) plot showed significantly better overall survival in the low-risk group than in the high-risk group . In oncogenic signatures, angiogenesis factors , the mTOR signaling pathway, the E2F1 transcription factor, polycomb repressive complex 2 (PRC2), cAMP, and KRAS were enriched by a high risk score, while p53 was downregulated. This implies a close association of the gene-based risk score with tumor occurrence, development, and metastasis.One thousand two hundred ninety-nine DEGs between the high-risk and low-risk groups were defined, including 847 up-regulated genes and 452 down-regulated genes and B. TThe relationship between risk group and immune cell infiltration was analyzed using the CIBERSORTx database. The results showed that the proportions of M0 macrophages (p < 0.001) and M2 macrophages (p < 0.001) were elevated in the high-risk group . Linear BC is a biologically diverse disease and progresses to multiple clinical outcomes. This is especially true for MIBC, which is more aggressive, with a higher acquisition of genomic instability and mutation rate . The preIn the current study, we combined a variety of regression analyses; ten essential genes participating in both platinum-based chemotherapy resistance and survival were selected, and a prognostic model enrolling these genes was established. The risk score did well in stratifying patients into different risk groups; patients in the low-risk group experienced a considerable survival advantage compared with those with a higher risk score. Cox regression showed a negative correlation between risk score and OS .
Based on the risk score and salient clinical features, a nomogram was generated to present the survival rate graphically. The nomogram model could yield a C-index of 0.727 and an AUC value of up to 0.819. DCA and calibration curves also proved its promising prognostic ability. As shown by DCA, incorporating the risk score strengthened the predictive power of the clinical characteristics. The stability and reproducibility of the risk score were examined in three other independent BC datasets including cisplatin-treated patients. Moreover, a high risk score was significantly associated with platinum resistance in BC patients. Altogether, these results indicated the potential value of our model in clinical decisions about platinum sensitivity and overall survival.Among those ten genes, TSPAN7 and IGF2 have previously been reported to participate in bladder cancer progression and might be potential therapeutic targets. TSPAN7 exerts an anti-tumor effect via the PTEN/PI3K/AKT pathway in urothelial carcinoma . IGF2 paIn addition, most genes in our model are involved in signaling pathway regulation in other kinds of cancer. In breast cancer, the expression level of PPP2R2B is significantly correlated with longer distant metastasis-free survival and recurrence-free survival. Downregulation of PPP2R2B reduces the effect of trastuzumab or lapatinib on mTOR signaling, thus weakening anti-HER2 sensitivity . What is
For example, RAS and p53 promote or inhibit cisplatin resistance by regulating cellular apoptosis and autophagy in opposite directions . FurtherHowever, our study also has some limitations. First, owing to the insufficiency of data resources, the platinum resistance-related DEGs were picked from one RNA-seq dataset (TCGA-BLCA dataset); it would be better if we could integrate transcriptomic data from more datasets. Second, we only know that those patients had undergone platinum-based chemotherapy, but the exact therapeutic strategies, such as GC or ddMVAC, were inaccessible to us. The mechanisms of resistance might therefore involve factors other than platinum therapy. Third, other information, such as blood and urine composition analyses, dietary habits, and lifestyle, is unavailable. Fourth, the proportion of patients who have undergone platinum-based chemotherapy and have definitive records of therapeutic response is relatively small; the inclusion of more eligible patients would help enhance the reliability of our results. Fifth, as all conclusions in this study were derived through bioinformatics, additional biological experiments and multicenter clinical trials will assist in investigating the function of the 10 genes, as well as testing the prognostic ability in the real world. Despite the above limitations, this is the first prognostic model derived from platinum resistance and tumorigenesis, which can transform a gene expression matrix into a risk score and powerfully stratify patients into different prognostic groups. Bringing age and TNM stage into our model would further boost the predictive capacity. Moreover, clinical risk judgment based on an objective scoring system accompanied by a nomogram could also reduce the deviations arising from observers' subjective factors.
All of the above demonstrated that our gene-based risk score model has satisfactory potential for predicting the platinum therapeutic effect, and it could assist in conducting personalized treatment.In summary, we identified a gene-based risk score model for bladder cancer patients that has clinical prognostic value not only for survival but also for platinum-based chemotherapy sensitivity. This finding has reference value for clinical treatment decisions and deepens our understanding of platinum-based chemotherapy resistance."}
In the field of synthetic biology, rational design of proteins and promoters has gained extensive interest, especially for metabolic engineering efforts. The method described here uses a combination of overlap extension polymerase chain reaction (PCR) and saturation mutagenesis. For the successful application of this method, a variety of factors need to be taken into account when designing a library. The following sections outline important considerations when aiming to design, construct and screen a promoter or a protein library. For transcription, the regions approximately 35 and 10 bases upstream of the transcriptional initiation site (referred to as -35/-10 promoter sites) are particularly important for transcription initiation, and therefore a single nucleotide mutation here can have dramatic effects. Similar promoter engineering has been demonstrated in hosts such as Pseudomonas putida, Acinetobacter baylyi ADP1 and Corynebacterium species. Regions around or within the -35/-10 sites can also contain operator regions for transcription factors. These operator regions are usually palindromic or pseudo-palindromic sequences to which the DNA binding domains of the transcription regulator bind. There are usually anywhere from one to three such sequences around the promoter region. Modification of the operator region can result in modulated binding affinity of the transcription regulator to the operator region and hence altered function. Randomization of only a few nucleotides in the operator region produced a wide range of LacI repression levels, including increased amplitude of response, very tight repression, or very weak repression resulting in constitutive activity of the promoter. Likewise, cis-acting elements on the mRNA can have a profound effect on translation rates. This approach was demonstrated in P.
putida using the IclR-family transcription factor PcaU. Approaches such as site-saturation mutagenesis (SSM) have proven to be extremely useful; however, the relatively low library size that can be achieved through these methods has restricted their application in directed evolution approaches. After mutations have been generated using one of the above methods, a variety of cloning methods can be used for insertion into a replicating plasmid or genome. Traditional restriction/ligation methods have largely given way to PCR-based, recombination-based and CRISPR-based methods. The method described here uses Gibson assembly, which is versatile and familiar to many synthetic biology labs, but the protocol could easily be adapted to megaprimer-based insertion methods such as MEGAWHOP. The MEGAWHOP method traditionally consists of two PCR steps. The first step uses error-prone PCR to generate a set of megaprimers with random mutations in the target gene. The second PCR step uses the megaprimers and the original plasmid as the template, resulting in a large random mutagenesis library. Regarding chromosomally-targeted mutagenesis, homologous recombination (recombineering) is the most established in vivo method; an in vitro CRISPR/Cas9-mediated mutagenic (ICM) system for construction of designer mutants in a PCR-free approach has also been reported. The method we present here is capable of quickly generating large libraries of genetic variants and quickly screening them for desired phenotypes. However, the construction of the library relies on degenerate primers. These are produced in such a way as to control the ratio of nucleobases at a given location, but the different sequences may show differences in annealing during PCR, resulting in a degree of bias.
Ensuring sufficient 3′ complementarity after the mutation site mitigates this effect, but the exact ratio in the final library is not always known. Thus, this protocol includes a verification step of either sequencing a random selection of clones isolated from the library, or sequencing the pooled library. This method is additionally limited by the availability of knowledge about the targeted protein or promoter. In order to successfully apply this protocol, the prospective user will need experience with basic molecular biology methodologies, including polymerase chain reaction, agarose gel electrophoresis, gel extraction, bacterial transformation, and bacterial culture, as well as a fundamental understanding of molar ratios and calculations. Experience using FACS is required. Additionally, familiarity with primer design and gene editing software is necessary. Sequences for the targeted gene or promoter need to be known or obtainable. When targeting a gene for diversification, it is helpful to have an understanding of the protein encoded therein, and its structure-function relationship. Familiarity with Rosetta or AlphaFold is also helpful.
• Oligonucleotides, which can be purchased from Eurofins or other vendors, dissolved in ultrapure water to a concentration of 50 µM
• UltraPure™ DNase/RNase-Free Distilled Water
• Deoxynucleotide (dNTP) solution mix
• High-fidelity DNA polymerase (e.g., Q5® High-Fidelity 2X Master Mix, cat. no. M0492L)
• Agarose (Fisher BioReagents, cat. no. BP231-100)
• Gel Loading Dye, Purple (6X), no SDS, cat. no.
B7025S
• DNA ladder
• GelRed® Nucleic Acid Stain (10,000X in water)
• TAE Buffer (Tris-acetate-EDTA) (50X)
• QIAquick Gel Extraction Kit
• QIAquick PCR Purification Kit
• MinElute Reaction Cleanup Kit
• Restriction enzymes and 10X reaction buffer
• T4 DNA ligase
• Antarctic Phosphatase
• High-efficiency bacterial competent cells
• Assembled plasmid DNA library, user supplied
• SOC Outgrowth Medium
• Growth medium, e.g., Luria Broth Base
• Phosphate buffered saline (PBS)
• Incubators at appropriate temperature and agitation
• Thermocycler
• Gel electrophoresis system
• ChemiDoc Imaging System (Bio-Rad)
• Cell scrapers
• Tube rotator
• Flow cytometer capable of cell sorting based on fluorescence (e.g., FACSAria III)
• PCR tubes
• 1.5 and 2 mL Eppendorf tubes
• 14 mL culture tubes
• Petri dishes
• SnapGene or similar
• Protein modeling software such as ChimeraX or PyMOL
• Rosetta or AlphaFold
This protocol can be divided into three main components: Library Design, Library Construction, and Library Screening, analogous to the Design, Build, Test framework of engineering principles for synthetic biology. In total, this method will take approximately 3 weeks from initial library design to isolating and characterizing individual clones, for a reasonable library size of 10^5–10^6, with the workload for each day itemized below. Depending on the library size, a smaller or larger workload can be expected for smaller or larger libraries respectively.
Day 1: PCR round 1 to generate fragments.
Agarose gel electrophoresis and gel extraction.
Day 2: PCR round 2 to assemble, agarose gel electrophoresis and gel extraction.
Day 3: Transformation (include main plates and transformant estimation plates).
Day 4: Plate scraping (and colony counting), direct use or glycerol stock, extractions for sequencing, inoculation of liquid culture.
Day 5: Re-inoculation and induction.
Day 6: First round of analysis by flow cytometry and sorting, followed by outgrowth.
Day 6: Glycerol stocks, liquid culture of round 1 populations.
Day 7: Re-inoculation of round 1 populations, induction.
Day 8: Second round of analysis and sorting, outgrowth.
Day 9: Glycerol stocks and liquid culture of round 2 populations.
Day 10: Re-inoculation and induction.
Day 11: Third round of screening and sorting, outgrowth.
Day 12: Glycerol stocks and liquid culture of round 3 populations.
Day 13: Re-inoculation and induction.
Day 14: Fourth round of screening and sorting, plating with appropriate antibiotic selection (NOTE: stringently collecting only the top 1% of performers is recommended).
Day 15: Picking 24-48 colonies of individual clones.
Day 16–19: Test individual clones at different conditions in order to characterize properties.
The following instructions detail the production of a hypothetical mutation library, wherein two distant regions of a promoter or protein are diversified. Mutation sites can range in size from a single base pair up to any stretch of the sequence reasonably covered by a single PCR primer, after factoring in the 3′ overlap needed for the initial PCR and the 5′ overlap needed for overlap extension PCR described below. If the mutation sites are far apart, it may not be appropriate to use a single primer containing the degenerate bases (e.g., S, M, R).
5. PCR fragments are then combined via overlap extension PCR. In the first stage of overlap PCR, the fragments (at equimolar ratio) and PCR reagents/enzymes are allowed to react for 8 cycles with an extension time sufficient to copy the largest fragment.
Then, primers flanking the entire promoter region are introduced and 25 more cycles are performed with an extension time sufficient for the entire region. Subsequent purification of the PCR product with commercial kits assists the next step.
6. The mutated variant library is then assembled into a linearized vector using Gibson assembly or restriction/ligation cloning.
7. The assembled plasmid library is transformed into a suitable competent bacterial strain. Transformation may be performed by heat shock or electroporation. Depending on the known or expected transformation efficiency, care must be taken to perform sufficient transformations to achieve appropriate coverage of the library. Commonly one would aim to obtain at least 4-fold coverage, e.g., generating ≥1 million transformants for a library with a theoretical diversity of 250,000. This is to ensure that the maximum number of variants is represented in the bacteria. After transformation and recovery, it is advisable to plate approximately 20 µL of the recovered bacteria on an agar plate containing the appropriate medium and selective antibiotic. This is to estimate transformation efficiency and therefore the final degree of coverage of the library that was achieved. The remaining recovered bacteria are gently spun down, and the majority of the supernatant recovery medium is removed to allow for plating of the entire volume of transformants. Once plated, the transformants are incubated overnight at a suitable temperature to form colonies.
8. Using the quantification plate from step 7, the number of transformants achieved is estimated. If the desired coverage is achieved, the whole library can be pooled by adding a small amount of liquid medium to the plates and then gently scraping the colonies to collect, using a cell scraper.
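The theoretical diversity quoted in step 7 follows from the degeneracy of the primers, and the 4-fold transformant rule of thumb can be checked with a simple Poisson sampling model. The sketch below is not part of the published protocol; it assumes standard IUPAC degeneracy codes and that transformants sample library variants uniformly:

```python
import math

# Degeneracy of each standard IUPAC nucleotide symbol.
IUPAC_DEGENERACY = {
    "A": 1, "C": 1, "G": 1, "T": 1,
    "R": 2, "Y": 2, "S": 2, "W": 2, "K": 2, "M": 2,
    "B": 3, "D": 3, "H": 3, "V": 3, "N": 4,
}

def theoretical_diversity(degenerate_oligo):
    """Distinct DNA sequences encoded by a degenerate oligonucleotide."""
    total = 1
    for symbol in degenerate_oligo.upper():
        total *= IUPAC_DEGENERACY[symbol]
    return total

def expected_fraction_covered(n_transformants, diversity):
    """Expected fraction of distinct variants represented (Poisson model)."""
    return 1.0 - math.exp(-n_transformants / diversity)

def transformants_needed(diversity, target=0.99):
    """Transformants required so each variant appears with probability target."""
    return math.ceil(-diversity * math.log(1.0 - target))

# Two fully random NNK codons encode 32 * 32 = 1024 DNA variants, and
# 4-fold oversampling (e.g., 1e6 transformants for 250,000 variants) is
# expected to represent ~98% of the library.
```

Under this model the 4-fold rule leaves roughly `exp(-4) ≈ 1.8%` of variants unsampled, which is why stricter coverage targets require proportionally more transformants.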
Repeat the addition of liquid media and scraping, then pool the resulting cell suspensions in a polypropylene tube, sealed tightly, and rotate for at least 1 h to ensure proper mixing of the collected library.
9. Glycerol stocks of the collected library are prepared by adding glycerol to aliquots of the library to a final concentration of 20% glycerol, taking note of the final OD of the resulting stock. These stocks are suitable for long-term storage in ultracold freezers (−80 °C). Note that when reviving culture from stocks it is essential to use sufficient inoculum to achieve full coverage of the library, i.e., inoculate fresh culture with a number of cells at least 10-fold greater than the total number of variants in the library.
10. In order to verify the integrity and diversity of the library, plasmid DNA is isolated from individual clones or from a small volume of the collected library, using any desired plasmid DNA extraction method. The isolated DNA, along with an appropriate primer, is sent for sequencing by one's preferred provider. Depending on usual shipping and processing times, it may take several days for the sequencing data to become available.
11. The library should be screened using media and other growth conditions mirroring the application and desired effect of the mutations. This method assumes testing of cells at mid-log growth phase. Prepare an overnight culture in 3 mL of selective liquid media, using either the scraped cell suspension from step 8 (if available) or the glycerol stock of the library. Use a sufficiently large volume of inoculum to achieve library coverage (see step 9).
12. Use the overnight culture to inoculate a fresh 3 mL culture to an initial OD that is about 1/10th of the strain's stationary phase OD in that media. Incubate the cells with frequent OD monitoring until they reach approximately 50% of stationary OD (mid-log phase).
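The 10-fold inoculum rule in step 9 can be turned into a pipetting volume using an assumed OD-to-cell-density conversion. The 8 × 10^8 cells/mL per OD600 figure below is a common E. coli rule of thumb, not a value from this protocol, and should be calibrated for the strain in use:

```python
# Assumed E. coli conversion; strain- and instrument-dependent.
CELLS_PER_ML_PER_OD600 = 8e8

def inoculum_volume_ml(library_diversity, stock_od600, fold=10):
    """Stock volume carrying at least `fold` x library_diversity cells."""
    cells_needed = fold * library_diversity
    return cells_needed / (stock_od600 * CELLS_PER_ML_PER_OD600)

# Reviving a 1e6-variant library from an OD600 = 20 glycerol stock:
# 1e7 cells are needed, i.e., roughly 0.6 uL of stock.
volume = inoculum_volume_ml(1e6, 20)
```

Because the required volumes are often sub-microliter, in practice one pipettes a comfortable excess; the calculation mainly guards against under-inoculating very large libraries.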
If induction is required, induce at mid-log phase and allow more time for the induced process to proceed.
13. Dilute a small portion of the cells in 1X phosphate buffered saline to achieve a cell density of approximately 10^7 cells/mL for flow cytometry. The required dilution may vary depending on the requirements of the instrument used, since efficient sorting requires an event rate well below the recommended maximum event rate for any given flow cytometer.
14. Select the top five percent best-performing cells based on biosensor response and sort into fresh (selective) media. Grow the sorted cells overnight to recover.
15. Repeat steps 12-14, this time collecting the top two percent only.
16. If induction was used, or the library is for a new biosensor or promoter to be optimized, it may be necessary to screen for negative fluorescence in the uninduced state, i.e., select a non-fluorescent subset of the uninduced population. This will eliminate constitutively active variants, facilitating isolation of true inducible variants. If deemed necessary, repeat steps 12-14 without induction and collect the bottom 80% in terms of fluorescence.
17. Alternate between induced selection and uninduced selection until the desired phenotype is reached or no further improvement is observed between rounds of sorting. An example screening flowchart for an inducible process is shown in the accompanying figure.
18. Plate cells from the last sort onto selective media and pick individual colonies for characterization.
During construction of the library, there will be multiple tests performed at intermediate phases to ensure the components are being generated and assembled correctly. During the initial PCR steps, amplification products are observed on agarose gel and can be checked for the correct size.
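The dilution in step 13 and the resulting sorter event rate can be estimated as follows. This is a sketch; the 10^7 cells/mL target and the flow rate used are illustrative and instrument-dependent:

```python
def dilution_factor(culture_density_per_ml, target_density_per_ml=1e7):
    """Fold dilution needed to reach the target cell density."""
    return culture_density_per_ml / target_density_per_ml

def event_rate_per_s(cell_density_per_ml, flow_rate_ul_per_s):
    """Approximate events/s: density (cells/mL) x flow (uL/s) / 1000 uL/mL."""
    return cell_density_per_ml * flow_rate_ul_per_s / 1000.0

# A mid-log culture at 1e9 cells/mL needs a 100x dilution; at an assumed
# 1 uL/s sample flow the diluted suspension yields ~10,000 events/s.
```

Comparing the estimated event rate with the cytometer's recommended maximum, as the protocol advises, indicates whether a further dilution is needed before sorting.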
After the gene is inserted into a vector by Gibson assembly and transformed into a host, a subset of the transformation culture is diluted and used for counting plates (quantification plates), which have the additional benefit of providing isolated colonies from which to test the diversity of the library at each mutation site prior to employing high-throughput screening or selection. Before screening, a check may be included to test for mutation diversity. Based on the goals set forth for the library design, the library is expected to exhibit a broad distribution of phenotype in addition to genotype. In the case of libraries whose performance can be tested and screened by fluorescence, the phenotypic range can be measured on a flow cytometer. In the example histogram provided, this distribution can be seen. Current methods designed to identify gain-of-function mutations for proteins or promoters in microbes are limited by workload, time, and costs. The methodology described here eliminates the bottleneck of testing individual variants of genes and promoters by presenting a protocol for assaying the combinatorial effect of several mutations at once, drastically reducing the time and money required to obtain genetic variants with desirable qualities. The efficacy of the method is illustrated by several of our original research papers. Due to the high-throughput nature of the method detailed here, the number of genetic variants that can be screened for desired gain-of-function behavior is not limited by the time or manpower available. Instead, millions of variants can conveniently be screened for desirable phenotypes in a single sample tube. Variants that perform well are isolated by FACS and screened further in subsequent rounds, while poor performers are easily discarded."} +{"text": "N-termini of substrates. To study the role of DPP8 and DPP9 in breast cancer, MCF-7 cells and MDA.MB-231 cells were used.
The inhibition of DPP8/9 by 1G244 increased the number of lysosomes in both cell lines. This phenotype was more pronounced in MCF-7 cells, in which we observed a separation of autophagosomes and lysosomes in the cytosol upon DPP8/9 inhibition. Likewise, the shRNA-mediated knockdown of either DPP8 or DPP9 induced autophagy and increased lysosomes. DPP8/9 inhibition as well as the knockdown of the DPPs reduced the cell survival and proliferation of MCF-7 cells. Additional treatment of MCF-7 cells with tamoxifen, a selective estrogen receptor modulator (SERM) used to treat patients with luminal breast tumors, further decreased survival and proliferation, as well as increased cell death. In summary, both DPP8 and DPP9 activities confine macroautophagy in breast cancer cells. Thus, their inhibition or knockdown reduces cell viability and sensitizes luminal breast cancer cells to tamoxifen treatment. The cytosolic dipeptidyl-aminopeptidases 8 (DPP8) and 9 (DPP9) belong to the DPPIV serine proteases, with the unique characteristic of cleaving off a dipeptide from the N-termini of substrates, preferentially after proline. Proteases are the largest enzyme family in vertebrates, representing about 3% of the human genome. Luminal A tumors (ER+, PR+, HER2−, Ki-67 low) are usually less proliferative than luminal B tumors. The prognosis of patients with luminal tumors is, in general, good, especially due to the use of endocrine therapy.
This type of treatment targets estrogen-dependent cancers via selective ER modulators (SERMs, e.g., tamoxifen) or ER degraders (SERDs, e.g., fulvestrant), or interferes with the production of estrogen via aromatase inhibitors. HER2-positive tumors (ER−, PR−, HER2+, Ki-67 high) are associated with an intermediate prognosis, especially due to the use of HER2-targeting antibodies blocking pro-oncogenic signaling. Triple-negative tumors (ER−, PR−, HER2−, Ki-67 high) are strongly aggressive, and therapeutic options are limited to surgery, chemotherapy, and/or radiation. Since 2020, breast cancer represents the most frequent cancer entity worldwide and is still the leading cause of cancer-related deaths among women. DPP8 mRNA and protein levels were detected in MCF-7, MDA.MB-231, and MDA.MB-453 cells. DPP8 and DPP9 have been shown to influence tumor growth in various entities, like Ewing sarcoma and cervical cancer. In our study, we investigated the impact of DPP8 and DPP9 on breast cancer cells representing different molecular subtypes. For this purpose, we treated MCF-7 and MDA.MB-231 cells with the DPP8/9 inhibitor 1G244. DPP8/9 inhibition led to the accumulation of acidic vesicles, which was more pronounced in MCF-7 cells compared to MDA.MB-231 cells. Further analysis of MCF-7 cells revealed that DPP8/9 inhibition interferes with the formation of LC3B/LAMP1-double-positive autolysosomes and results in the spatial separation of autophagosomes and lysosomes, identified via staining for LC3B and LAMP1. Therefore, DPP8/9 inhibition impairs autolysosome formation as an essential step in the process of macroautophagy. In addition, DPP8/9 inhibition reduced cell survival and proliferation, which was further decreased upon a combinatory treatment with tamoxifen. The knockdown of either DPP8 or DPP9 revealed that both proteases contribute to this phenotype. For starvation, the FCS concentration was reduced to 1% FCS.
To inhibit DPP8/9, MCF-7 cells were treated with 10 µM 1G244 and MDA.MB-231 cells with 5 µM 1G244 (Y0432) or the solvent control DMSO. Cell culture was generally performed under aseptic conditions in a laminar-flow hood. MCF-7 and MDA.MB-231 cells were obtained from the American Type Culture Collection (ATCC). Before performing the experiments, the identities of the cell lines were confirmed with PCR-based genetic-marker testing by Eurofins Genomics. Cells were cultured in DMEM high glucose supplemented with 10% FCS, 1% L-glutamine, and 1% Penicillin/Streptomycin, and were incubated under controlled conditions at 37 °C and 5% CO2. MCF-7 cells were transfected with either the pTCEBAC or the pTREBAV vector. For RT-PCR, 1 µg RNA was used and transcribed into cDNA using the iScript™ cDNA Synthesis Kit, according to the protocol. For qPCR, cDNA was mixed with the SYBR™ Select Master Mix (Thermo Fisher Scientific) and forward/reverse primers. Samples were cycled in CFX96 Real-Time machines (Bio-Rad). The following primers were used: ACTB forward 5′-AGCACTGTGTTGGCGTACAG-3′ and reverse 5′-CTCTTCCAGCCTTCCTTCCT-3′; DPP8 forward 5′-GGCCACAAGGATTTACGCAACAAC-3′ and reverse 5′-AAGGTAGCGACTCCAGCTGATCT-3′; DPP9 forward 5′-TGCAGAAGACGGATGAGTCT-3′ and reverse 5′-GGAATCTCAGAGTAGAGGAG-3′; LAMP1 forward 5′-TCAGCAGGGGAGAGACACGC-3′ and reverse 5′-CGCTGGCCGAGGTCTTGTTG-3′; LC3B forward 5′-ACCATGCCGTCGGAGAAG-3′ and reverse 5′-ATCGTTCTATTATCACCGGGATTTT-3′; p62 (SQSTM1) forward 5′-CCTTTCTGGCCGCTGAGTGC-3′ and reverse 5′-GTCCCCGTCCTCATCCTTTCTCA-3′. Data analysis was performed via the CFX Manager (Bio-Rad). RNA was isolated from cells using a Total RNA Kit (peqGOLD) according to the protocol.
The RNA concentration was measured via NanoDrop. For DPP activity assays, cells were washed once with DPBS, incubated in 0.05% Trypsin–EDTA for a few minutes at 37 °C, and harvested. After two washing steps with DPBS, cells were resuspended in 150 µL hypotonic buffer (10 mM KCl; 0.05% Triton-X-100; and 1 mM DTT in ddH2O) and incubated for 10 min on ice before centrifugation at 600 rcf and 4 °C for 6 min. The protein concentration of the supernatant was measured via the Pierce™ BCA Protein Assay Kit (Thermo Fisher Scientific), according to the protocol. An amount of 5 µg protein was mixed with 95 µL hypotonic buffer and 5 µL of 250 µM fluorogenic DPP peptide substrate H-Gly-Pro-AMC. Enzyme activity was measured every minute for 1 h via the EnSpire at 480 nm. For lysosome staining, cells were resuspended in FACS buffer with LysoTracker™ Green DND-26 and incubated for 15 min at 37 °C. Analysis was performed via FACS using the CytoFLEX S and FlowJo 10.6.2 software (BD). For microscopy, cells were seeded on coverslips and stained with LysoTracker™ Green DND-26 diluted in culture medium instead of FACS buffer. In the last 2 min of incubation, Hoechst (1:1000) was added to the medium. Cells were washed once with DPBS, followed by fixation in 4% paraformaldehyde (PFA) for 20 min at RT in the dark. Next, cells were washed once with DPBS, mounted in PermaFluor™ (Thermo Fisher Scientific) on slides, and dried overnight. Analysis was performed with the AxioVert 40C fluorescence microscope. For immunoblot lysates, a RIPAplus lysis buffer was used (Na4P2O7; 1 mM β-glycerophosphate; 1% Triton-X-100; 0.001 g/mL SDS; 0.005 g/mL sodium deoxycholate; 1 mM sodium orthovanadate; a PhosSTOP™ tablet/10 mL; and a cOmplete™ ULTRA tablet/10 mL in ddH2O). The lysates were incubated for 15 min on ice, vortexed frequently, and mechanically disrupted via Dounce homogenization.
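The enzyme activity readout described above (fluorescence recorded every minute for 1 h) is usually reduced to an initial rate, i.e., the slope of signal versus time. A minimal least-squares sketch, not code from the paper:

```python
def initial_rate(times_min, fluorescence):
    """Ordinary least-squares slope of fluorescence vs. time (units/min)."""
    n = len(times_min)
    mean_t = sum(times_min) / n
    mean_f = sum(fluorescence) / n
    num = sum((t - mean_t) * (f - mean_f)
              for t, f in zip(times_min, fluorescence))
    den = sum((t - mean_t) ** 2 for t in times_min)
    return num / den

# A perfectly linear toy trace rising 50 units/min:
t = list(range(10))
f = [100 + 50 * x for x in t]
rate = initial_rate(t, f)  # 50.0
```

In practice only the early, linear portion of the trace would be fitted, before substrate depletion flattens the curve.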
After centrifugation at 800 rcf and 4 °C for 15 min, the supernatant was used to determine the protein concentration via the Pierce™ BCA Protein Assay Kit (Thermo Fisher Scientific). The lysates were mixed with 5× protein-loading buffer and incubated at 95 °C for 5 min. Cells were washed three times with pre-chilled DPBS on ice and harvested via scraping in RIPAplus buffer. For immunoblotting, primary antibodies targeting DPP8 (ab42075, 1:500), DPP9, LAMP1 (3243, 1:500), LC3B, p62, and TUBA were used. After washing three times with 0.1% PBST for 10 min, membranes were incubated with a secondary antibody for approximately 90 min, followed by three washing steps. As secondary antibodies, Goat-α-Mouse or Goat-α-Rabbit (111-035-003, 1:5000) were used. The membranes were incubated with ECL solution (Thermo Fisher Scientific) for about one minute and afterwards were measured via the VILBER Fusion SL and quantified with FusionCapt Advance software V17.03. Cells on coverslips were fixed in 4% PFA for 20 min and washed once with DPBS. After additional washing with PBST for 10 min, cells were permeabilized via −20 °C methanol for 10 min, followed by 0.2% Saponin in PBST for another 5 min. Next, cells were washed three times with PBST for 10 min and blocked with 0.1% Saponin in 1% BSA in PBST for 20 min. After a further washing step, primary antibodies were added and incubated overnight at 4 °C. Primary antibodies targeting LAMP1, LC3B, KIF5B, and Dynein-1 (MA1-070, 1:250) were used. Cells were washed three times with PBST for 10 min. The secondary antibodies were added and incubated in the dark at RT for two hours and, in the last two minutes of incubation, Hoechst (1:1000) was added. As secondary antibodies, Goat-α-Rat Alexa 555 or Donkey-α-Rabbit Alexa 488 were used. After a last washing step with PBST, coverslips were mounted in PermaFluor™ on slides and dried overnight.
Analysis was performed with the LSM 710 confocal laser scanning microscope (Zeiss). Cells on coverslips were stained for β-galactosidase using the Senescence β-Galactosidase Staining Kit, according to the protocol. After staining was completed, cells were mounted in PermaFluor™ on slides and dried overnight. Analysis was performed with the BZ-9000 microscope. For colony formation, 1400 cells/well were seeded in 6-well plates and incubated at 37 °C and 5% CO2. One day after seeding, 5 µM 4-hydroxytamoxifen (4-OHT) was added and refreshed every two days for 15 days. After 15 days of culture, cells were rinsed once with DPBS and stained with 1% crystal violet in 20% methanol for 10 min. Cells were washed thoroughly with tap water and air-dried overnight. Pictures were taken using a light desk and the Canon PowerShot G6 camera. Analysis was performed with the ImageJ plugin Colony Area according to the published protocol. Cells were treated with 10 µM or 15 µM 4-OHT for 48 h. Medium was collected, and cells were detached and transferred into the same Falcon tube. Cells were centrifuged at 300 rcf for 5 min, cell pellets were resuspended in 1 mL medium, and 10 µL was mixed 1:1 with trypan blue to stain dead cells. Stained cells were pipetted into a Neubauer counting chamber and analyzed using a microscope. To analyze the fluorescent reporters of pTCEBAC and pTREBAV, cells were detached from the culture dish after Doxycycline treatment and centrifuged at 300 rcf for 5 min. After centrifugation, cells were washed twice with DPBS and resuspended in FACS buffer. Measurement was performed with the LSR II (BD) and BD FACS Diva 6.1.2 software. Data analysis was performed with FlowJo 10.6.2 software. Unless stated otherwise, the data of independent experiments are presented as means + standard errors of the means (SEMs). Statistical analyses were carried out with OriginPro 2020 (OriginLab).
The statistical significance of the difference of the means between two groups was analyzed via paired-sample t-test. MCF-7 and MDA.MB-231 cells were characterized concerning their DPP8 and DPP9 mRNA expressions. To investigate whether the observed accumulation of vesicles originated from acidic cell compartments, like lysosomes or other organelles fused with lysosomes, cells were stained accordingly. Lysosomal storage disorders showing similar accumulations of lysosomes present mainly with a defect in autophagy. Thus, cells were co-stained for p62 and LAMP1 (lysosomes: red). Fluorescence microscopy revealed single-stained puncta of both proteins under nutrient-rich conditions, with a higher abundance in the DPP8/9-inhibited cells compared to the controls. The observed change in the localization of the lysosomes to the periphery of the cell and, as a consequence, the spatial separation from autophagosomes may be caused by defective vesicular transport interfering with autophagosome–lysosome fusion. Here, the Dynein-Dynactin motor complex is mainly responsible for vesicle transport to the perinuclear region (minus end) along microtubules, whereas kinesins facilitate the movement of vesicles to the cell periphery (plus end). To investigate the contribution of DPP8 and DPP9 to the regulation of autophagy, MCF-7 cells were transfected with Doxycycline (Dox)-inducible vectors containing an shRNA targeting either DPP8 or DPP9. The induction efficacy was measured via FACS analysis of the fluorescent vectors. Knockdown of either DPP resulted in an even stronger accumulation of lysosomes, especially in the peripheries of the cells, and less of the motor protein KIF5B in the cytoplasm.
Although kinesins like KIF5B are reported to transport vesicles to the cell periphery, the deficiency of KIF5B in HeLa cells results in the spatial separation of autophagosomes, being predominantly in the perinuclear region, and lysosomes distributed in the periphery of the cell. Our results demonstrate that DPP8 and DPP9 play a pivotal role in starvation-induced autophagy, especially during the transport of autophagosomes and lysosomes to different subcellular localizations, as shown by the immunofluorescence labeling of these compartments. This defective lysosomal positioning due to DPP8/9 inhibition seems to cause the measured increase in vesicles belonging to the endolysosomal compartment, especially upon serum starvation, when cells are highly dependent on autophagy. The impact of autophagy on tumor development and progression is ambiguous. DPP8 and DPP9 are reported to affect proliferation as well as cell death in several ways. The knockdown of DPP9 in oral squamous cell carcinoma cells increased cell growth via FAP-α. About 70% of all breast malignancies are ER-positive breast cancers, in which mainly the oncogenic ER-signaling pathway promotes malignant cell proliferation and tumor growth. However, resistance to endocrine therapy is a major clinical problem. Approximately one-third of ER-positive breast tumors develop resistance to endocrine therapy, worsening the prognoses of many breast cancer patients. In this study, we show that DPP8 and DPP9 play a pivotal role in maintaining autophagic flux and thereby contribute to the better survival of ER-/PR-positive breast tumor cells. This seems to be strongly dependent on vesicle transport, and especially the localization of the motor protein KIF5B, although the mechanism as to how DPP8 and/or DPP9 influence its localization within the cell remains unclear.
Furthermore, combinatory treatment using 4-OHT (SERM) and 1G244 (DPP8/9 inhibitor) reduced proliferation as well as enhanced cell death, demonstrating its potency in ER-positive breast cancer cells. Nevertheless, further investigation of breast cancer cell lines representing different molecular subtypes of breast cancer, as well as other cancer entities, is necessary to see whether this phenotype is cell-type-specific or transferable to other cell types."} +{"text": "One of the goals of the Islamic Republic of Iran is to reduce the prevalence of catastrophic health expenditures among Iranian households to 1% by the end of the sixth 5-year development plan (2016–2021). This study was conducted to evaluate the level of progress toward this goal in the final year of this program. A national cross-sectional study was conducted on 2000 Iranian households in five provinces of Iran in 2021. Data were collected through interviews using the World Health Survey questionnaire. Households whose health care costs were more than 40% of their capacity to pay were included in the group of households with catastrophic health expenditure (CHE). Determinants of CHE were identified using univariate and multivariate regression analysis. 8.3% of households had experienced CHE. The variables of being a female head of household (odds ratio [OR] = 2.7), use of inpatient (OR = 1.82), dental (OR = 3.09), and rehabilitation services (OR = 6.12), families with disabled members (OR = 2.03) and low economic status of the household (OR = 10.73) were significantly associated with increased odds of facing CHE (p < 0.05). In the final year of the sixth 5-year development plan, Iran has not yet achieved its goal of "reducing the percentage of households exposed to CHE to 1%." Policymakers should pay attention to factors increasing the odds of facing CHE when designing interventions.
The recall period for using outpatient, dental and rehabilitation services was 1 month (last 30 days) and the recall period for using inpatient services was 1 year (last 12 months).

3. Monthly household expenses by type of expense, including total household expenses, household food expenses and OOPs for health services. In this study, the recall period for total household expenses was 1 month (last 30 days).

The World Health Survey Questionnaire developed by the World Health Organization was used to collect the data.

2.3

The sample size was estimated to be 385 households based on the standard formula for estimating a proportion, assuming 50% of households exposed to CHE (p\u2009=\u20090.5), a 95% confidence level, and a 5% precision rate. Because the study was conducted in 5 provinces, the total sample size was set at 2000 households (400 households from each province). Within each province, counties were selected; then, within each county, the county center and its villages were selected. Next, 10 centers were randomly selected from the comprehensive health centers in each city and 10 centers from the rural health centers. Finally, from the list of households covered by each urban and rural center, 20 households were randomly selected and the questionnaire was completed through door-to-door household visits. The head of each cluster was considered the first household.

2.4

2.4.1

The economic status of households was determined using the method proposed by O'Donnell et al. In this study, households were divided into 5 categories in terms of socioeconomic status and then combined into 3 classes: categories 1 and 2 were combined as the rich class (Q1), category 3 formed the middle class of the society (Q2), and categories 4 and 5 were combined as the poor class (Q3). In some studies, the asset index has been used to determine SES in the Iranian population.

2.4.2

In this study, the method provided by WHO was used to estimate the incidence of catastrophic health expenditures.
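The sample-size calculation described above (an assumed 50% CHE prevalence, 95% confidence, 5% precision) follows the standard Cochran formula for a proportion, n = z²·p(1−p)/d². A minimal sketch (the function name is ours):

```python
import math

def cochran_sample_size(p=0.5, z=1.96, d=0.05):
    """Minimum sample size for estimating a proportion p with
    absolute precision d at the confidence level implied by z."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

# 50% assumed CHE prevalence, 95% confidence (z = 1.96), 5% precision
n = cochran_sample_size()  # 385, matching the study's estimate
```

The study then inflated this minimum to 400 households per province (2000 in total) to cover all five provinces.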
Accordingly, if the amount of OOP for health expenses by family members is more than 40% of the household's capacity to pay, the household is placed in the group of households facing catastrophic health expenditures. A household's capacity to pay means its effective income minus household living expenses. The method and details of the CHE calculation are given in other studies.

2.4.3

Univariate logistic regression analysis was initially used to examine the role of determining variables on the likelihood of experiencing CHE. In the adjusted model, first, by controlling for age, sex, and education, the odds ratio of experiencing CHE was examined for each variable. Then, in the final (stepwise) model, after removing the independent variables that had little effect on the dependent variable, the odds of experiencing CHE for each variable in the presence of the other variables were checked in the multivariate regression model. It should be noted that \u201cuse of outpatient services\u201d was not included in the multivariate logistic model due to its collinearity with socioeconomic status. Also, it is worth mentioning that in Iran, health insurance is generally divided into two classes: basic and supplementary. Basic insurance is governmental and supplementary insurance is private; supplementary insurance is purchased to cover the patient's cost share and services outside the benefit package of basic insurance. In Iran, some services, such as dental and rehabilitation services, are not covered by basic insurance. All tests were performed in Stata 12.0 software (Stata Corporation) and the significance level was set at p\u2009<\u20090.05.

3
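The 40% decision rule described above can be sketched as follows. This is a minimal illustration using non-food expenditure as the capacity-to-pay proxy (the Iranian study defines capacity to pay as effective income minus living expenses); the function and variable names are ours:

```python
def faces_che(oop_health, total_expenditure, food_expenditure, threshold=0.40):
    """Flag a household as facing catastrophic health expenditure (CHE)
    when out-of-pocket health spending exceeds the threshold share
    (default 40%) of its capacity to pay (non-food expenditure here)."""
    capacity_to_pay = total_expenditure - food_expenditure
    if capacity_to_pay <= 0:
        return oop_health > 0  # any OOP spending is then catastrophic
    return oop_health / capacity_to_pay > threshold

# Hypothetical monthly figures, all in the same currency units
faces_che(oop_health=500, total_expenditure=2000, food_expenditure=1200)  # True (500/800 = 62.5%)
faces_che(oop_health=100, total_expenditure=2000, food_expenditure=1200)  # False (12.5%)
```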
It should be noted that there was no difference in exposure to CHE between the studied provinces, educational groups, basic insurance coverage status, urban or rural status of the households, and household size (p\u2009>\u20090.05). Most household heads were male, most had primary or middle school education (72%) and were employed (75%), and only 5.8% (112 people) had supplementary insurance. In this study, the rate of exposure to CHE was reported to be 8.3%, which was significantly higher among users of rehabilitation services (35%), households with a socioeconomic status lower than average (27%), households with disabled members (22%), users of dental services (20%), households with a female head (17%) and households headed by a housewife (17%) (p\u2009<\u20090.05). The average household total monthly health expenditure was 1.970\u2009\u00b1\u20091.180 million Iranian Rial (7.89 US$).

Table\u00a0 shows the results of the univariate analysis. For instance, the odds of facing CHE for female heads of household were 2.6 times higher than for male heads (p\u2009<\u20090.001, 95% CI: 1.71\u22123.96); for households without supplementary insurance, 5.3 times; for users of outpatient services, 12.7 times; for users of rehabilitation services, 7.3 times; for users of inpatient services, 1.99 times; for households with lower socioeconomic status, 9 times; and for households with a disabled member, 3.8 times (p\u2009<\u20090.05).

In the first adjusted model, the odds of encountering CHE in households without supplementary insurance (OR\u2009=\u20094.35), in users of outpatient (OR\u2009=\u200912.73), hospitalization (OR\u2009=\u20091.68), rehabilitation (OR\u2009=\u20097.01), and dental services (OR\u2009=\u20093.17), in households with a disabled member (OR\u2009=\u20093.75), and in households with a low socioeconomic status (OR\u2009=\u200910.18) were reported to be significantly higher (p\u2009<\u20090.05).

In the final models, to obtain the desired model by removing the independent variables that had little effect on the
dependent variable, the gender of the head of the household (OR\u2009=\u20092.75), use of inpatient (OR\u2009=\u20091.82), dental (OR\u2009=\u20093.09), and rehabilitation (OR\u2009=\u20096.12) services, households with disabled members (OR\u2009=\u20092.03), and the socioeconomic status of households (OR\u2009=\u200910.73 and 7.12) remained as final variables significantly associated with household exposure to CHE. As in the two previous development plans, Iran not only did not achieve the goal of reducing the percentage of households exposed to CHE to 1%, but also, compared with the findings of previous studies, this gap seems to have deepened. According to the findings of another systematic review and meta\u2010analysis study in Iran (2017), the prevalence of households faced with CHE was reported to be 3.9%. According to our study, in the final models, the variables of having a female head of household, use of inpatient, outpatient, dental, and rehabilitation services, families with disabled members and households with low socioeconomic status were significantly associated with increased odds of facing CHE. In most similar studies conducted in Iran, these variables have been reported as factors affecting CHE. According to our study, having supplementary insurance did not lead to greater financial protection, nor did it reduce the odds of facing CHE.
This finding contradicts the findings of many previous studies.

4.1

This study was conducted at the national level and is the first study at this level to evaluate the goal related to the financial protection of citizens against CHE in the final year of the sixth 5\u2010year development plan of the Islamic Republic of Iran.

4.2

In this study, data were collected based on the self\u2010declaration of individuals, so recall bias may have occurred regarding household expenses, health care costs and the type of health services consumed, although we tried to reduce this type of bias by shortening the recall period.

5

The findings of this study indicated that in the last year of the sixth 5\u2010year development plan, Iran has not achieved the goal of \u201creducing the percentage of households exposed to CHE to 1%.\u201d Policymakers should pay attention to factors increasing the odds of facing CHE in designing interventions, and this can help achieve the goal of financial protection against health costs. Increasing the government's share of total health costs, expanding appropriate prepayment mechanisms, revising service packages under insurance coverage to cover services such as dental and rehabilitation services, increasing the depth of coverage of health costs by insurance companies, and payment exemptions for poor households and households with a disabled person can boost the financial protection of households against CHE.

Azad Shokri: Conceptualization; data curation; formal analysis; methodology; writing\u2014original draft. Amjad Mohamadi Bolbanabad: Data curation; supervision; writing\u2014original draft. Satar Rezaei: Formal analysis; methodology; writing\u2014original draft. Bakhtiar Piroozi: Conceptualization; formal analysis; methodology; supervision; writing\u2014original draft.

The authors declare no conflict of interest.

This study was approved by the ethics committee of Kurdistan University of Medical Sciences with the code IR.MUK.REC.1399.076.
Participation in the study was voluntary and written consent was obtained from the participants. The lead author Bakhtiar Piroozi affirms that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained."}
{"text": "Over 150 million people, mostly from low and middle-income countries (LMICs), suffer from catastrophic health expenditure (CHE) every year because of high out-of-pocket (OOP) payments. In Tanzania, OOP payments account for about a quarter of total health expenditure. This paper compares healthcare utilization and the incidence of CHE among improved Community Health Fund (iCHF) members and non-members in central Tanzania.

A survey was conducted in 722 households in Bahi and Chamwino districts in Dodoma region. CHE was defined as a household health expenditure exceeding 40% of total non-food expenditure (capacity to pay). The concentration index (CI) and logistic regression were used to assess the socioeconomic inequalities in the distribution of healthcare utilization and the association between CHE and iCHF enrollment status, respectively.

50% of the members and 29% of the non-members utilized outpatient care in the previous month, while 19% (members) and 15% (non-members) utilized inpatient care in the previous twelve months. The degree of inequality for utilization of inpatient care was higher than for outpatient care. Overall, 15% of the households experienced CHE; however, when disaggregated by enrollment status, the incidence of CHE was 13% and 15% among members and non-members, respectively. The odds of iCHF members incurring CHE were 0.4 times those of non-members.
The key determinants of CHE were iCHF enrollment status, health status, socioeconomic status, chronic illness, and the utilization of inpatient and outpatient care.

The utilization of healthcare services was higher while the incidence of CHE was lower among households enrolled in the iCHF insurance scheme relative to those not enrolled. More studies are needed to establish the reasons for the relatively high incidence of CHE among iCHF members and the low degree of healthcare utilization among households with low socioeconomic status.

The online version contains supplementary material available at 10.1186/s12889-023-16509-7.

Globally, total health expenditure amounts to less than 10% of Gross Domestic Product (GDP). Additionally, the proportion of out-of-pocket (OOP) health expenditure has remained above 40% of total health spending in low and middle-income countries (LMICs). The majority of people in some LMICs, particularly low-income earners, rely on public health facilities for affordable services. The challenge of raising sufficient funds to finance healthcare is one of the major reasons why LMICs are not able to meet the healthcare needs of their citizens. In Tanzania, the CBHI scheme, commonly referred to as the Community Health Fund (CHF), was introduced in 1996 to enhance access to primary healthcare services among rural and informal workers. The existing literature highlights a range of factors associated with CHE and the variation in its prevalence across countries. In Tanzania, three studies have assessed the incidence of CHE using the National Household Budget Surveys, and they found that about 0.4% and 2.7% of the population experienced CHE at the 40% threshold of non-food expenditure (capacity to pay). Previous studies on the determinants of CHE in various LMICs have primarily focused on demographic characteristics, disease patterns, and health-seeking behaviors.
Some studies refer to higher age, higher educational level, sex of the household head, and occupation, while a few studies have also explored the relationship between insurance status and CHE. Tanzania is currently considering implementing a mandatory health insurance scheme to raise additional funds for health.

A cross-sectional study was used to collect primary data from Bahi and Chamwino Districts in Dodoma region between June and August 2019. Dodoma contains seven districts with a total population of nearly 2.3 million, of which 330,543 and 221,645 live in Chamwino and Bahi, respectively, according to the 2012 census.

A multistage sampling method was used to identify study participants. First, the two study districts were selected out of the seven districts in Dodoma. Second, four and five divisions were selected from Bahi and Chamwino, respectively. Third, for each division, two wards were selected, thus making a total of eight wards for Bahi and ten wards for Chamwino. Finally, 16 and 20 villages were selected from the wards in Bahi and Chamwino, respectively. The probability-proportional-to-size sampling approach was employed to obtain the sample size for each district by dividing the number of households in each district by the total number of households in the two districts and multiplying by the estimated sample size (722).

Six research assistants were trained for three days, followed by pretesting of the tools. Data were collected by these trained research assistants between June and August 2019. The questionnaire for this study was adapted from different sources.

The outcome variable was catastrophic health expenditure (CHE), which was defined as any health expenditure (HE) that exceeds a 40% share of total non-food expenditure (capacity to pay).
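The probability-proportional-to-size allocation described above (district sample = district households / total households × 722) can be sketched as follows; the household counts here are hypothetical, while the study used the actual census figures:

```python
def pps_allocation(households_per_district, total_sample=722):
    """Allocate the total sample across districts in proportion to the
    number of households in each district (probability proportional to size)."""
    total_households = sum(households_per_district.values())
    return {district: round(n / total_households * total_sample)
            for district, n in households_per_district.items()}

# Hypothetical counts for illustration only
pps_allocation({"Bahi": 50000, "Chamwino": 75000})
# {'Bahi': 289, 'Chamwino': 433}
```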
To measure the socioeconomic inequality in the distribution of healthcare utilization among iCHF members and non-members, we plotted concentration curves and estimated the concentration index (CI), which ranges between -1 and 1. A positive value indicates a higher incidence among those in higher SES, while a negative value indicates a higher incidence among those in lower SES. Statistical significance was assessed at p\u2009<\u20090.05.

From Table , the insured utilized outpatient (OPD) care to a significantly higher degree than the noninsured (P\u2009<\u20090.001), while there was no statistical difference in the utilization of IPD care (P\u2009=\u20090.239). When healthcare utilization was categorized by enrollment status and type of care sought, the proportion of insured households which utilized outpatient services and paid through OOP was 14%, while 28% used more than one payment modality. This was not the case for the noninsured households, where 66.7% and 12.3% of the households used OOP and a combination of different payment modalities, respectively. Concerning inpatient care and the payment modality, we found that 17.1% and 41.5% of the insured households, and 68.8% and 18.2% of the noninsured households, incurred OOP expenditure alone or used more than one payment modality, respectively.

The dominance test was statistically significant at p\u2009<\u20090.001 for both OPD and IPD care, suggesting that the noninsured strongly dominate the insured with respect to the utilization of healthcare services; this was supported by a visual inspection of Fig.\u00a0.

The overall incidence of CHE was 15%; however, when disaggregated by enrollment status, the incidence was 15% among the noninsured and 13% among the insured.
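The concentration index used in the analysis above can be computed from the covariance between the health variable and the fractional socioeconomic rank, via the convenient-covariance formula CI = 2·cov(h, r)/μ. A minimal sketch under that formula (function and variable names are ours):

```python
def concentration_index(health, ses):
    """Concentration index in [-1, 1]: positive when the health variable
    (e.g., utilization) is concentrated among higher-SES households."""
    n = len(health)
    order = sorted(range(n), key=lambda i: ses[i])  # poorest first
    h = [health[i] for i in order]
    ranks = [(i + 0.5) / n for i in range(n)]       # fractional ranks
    mu = sum(h) / n
    cov = sum((hi - mu) * (ri - 0.5) for hi, ri in zip(h, ranks)) / n
    return 2 * cov / mu

# Utilization concentrated among the richer half -> positive CI
concentration_index(health=[0, 0, 1, 1], ses=[1, 2, 3, 4])  # 0.5
concentration_index(health=[1, 1, 1, 1], ses=[1, 2, 3, 4])  # 0.0 (perfect equality)
```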
The regression results are presented in Table . Households with at least one member with a chronic illness, and households with at least one member who had received IPD or OPD care, were more likely to experience CHE. Socioeconomic status (SES) was positively associated with CHE; however, the odds ratio first increased from the lowest to the average/middle class, then decreased when moving to high, and increased again when moving to the highest SES. Households that belonged to the low, average/middle, and highest SES were 2.45, 4.05, and 2.43 times more likely to incur CHE compared with those belonging to the lowest SES. Not surprisingly, the odds ratios for OPD and IPD care are very high: households that received inpatient care were 37.69 times more likely to incur CHE than their counterparts, while for those who received outpatient services the odds ratio was 9.18 relative to those that did not.

This paper compared healthcare utilization and the incidence of catastrophic health expenditure (CHE) among households enrolled in the improved Community Health Fund (iCHF) and those not enrolled. This topic is of considerable interest given the ongoing Tanzanian efforts to reach universal health insurance coverage. The incidence of CHE provides insights into the ability of a health system to provide financial risk protection for its citizens, as well as the financial burdens that are carried by households.

Our findings show that the insured households utilized healthcare services (both outpatient and inpatient) to a higher degree than the noninsured households. One of the advantages of voluntary health insurance is to provide financial risk protection and improve healthcare accessibility. A second observation is that households in the highest SES class utilized both outpatient and inpatient services more frequently than those in the lowest SES class and were also more likely to incur CHE.
These findings are in line with studies conducted in Nigeria and Mongolia.

In this study, we found that overall, 15% of the households experienced CHE at a 40% threshold of the capacity to pay (non-food expenditure). This incidence is lower than the 26.6% reported by Macha (2015) but quite similar to the 18% reported by Brinda et al. (2014), both in Tanzania.

Our results show that the incidence of CHE was higher among the noninsured households than the insured. This is not surprising, since health insurance per definition provides financial risk protection. However, quite a high share of insured households were also confronted with CHE. We can only speculate that these households purchased healthcare services that were not included in the iCHF benefit package, or that medicines were out of stock, forcing them to purchase from private pharmacies and drug shops. Furthermore, treatments for some common non-communicable diseases (NCDs) are not covered by the iCHF scheme, meaning that OOP remains the only option to finance such expenditures. Our findings are similar to the findings of other studies, which also found that CHE was more pronounced among noninsured households compared with insured households.

The study found that CHE was influenced by socioeconomic variables, healthcare variables, and health-related variables. For the socioeconomic variables, CHE was associated with age (60\u2009+\u2009groups), education (secondary education and above), marital status (married), and SES. For the healthcare variables, CHE was associated with a household having at least one member who received inpatient care in the last 12\u00a0months or outpatient care in the last month.
For the health-related variables, CHE was associated with households having at least one member suffering from chronic diseases and with a household head who reported having a good health status.

A negative relationship was observed between the age of the household head and CHE. This suggests that, as the age of the household head increases, the likelihood of experiencing CHE decreases. A possible explanation for this could be the exemption policy for the elderly, which excuses them from paying OOP at public health facilities. Similar findings were reported in a previous study that identified an inverse relationship between higher age and CHE.

Our results have also revealed that a higher educational level (secondary level and above) and being married were negatively associated with CHE. A study conducted in China likewise found that the incidence of CHE decreased with a higher educational level.

The results show that SES typically has a positive association with CHE, although the odds were not consistent across all classes. This provides a clear picture that the average household is more vulnerable to CHE due to a combination of income and spending, where those with low SES are less likely to access care, unlike the ones with high SES who are more likely to access care because they can afford it. Another possible explanation could be that as SES increases, so does the household capacity to pay for health care, which may translate into more OOP payment without exposure to CHE, compared with those with low SES whose budgets are more constrained, making it difficult to visit health facilities when sick. Our findings are in line with other studies, which also found that low SES increased the probability of households incurring CHE.

Self-reported health status and households having at least one member with chronic diseases were found to be associated with CHE, as were households having at least one member who sought IPD or OPD care.
These findings are in line with what has been reported by other studies.

This study faced some limitations; we therefore urge caution in the interpretation of its findings. First, this was a cross-sectional study conducted in two districts in one region, which limits the generalization of the results beyond the study districts. Second, the health expenditure data reported by the study participants may have been misrepresented due to recall bias. Respondents were asked to state the quantity of resources purchased or the expenditure on food, non-food items and health services in the past 4\u00a0weeks, or the past 12\u00a0months. We feel that it might have been difficult for the respondents to accurately remember the value and quantities of some consumed items. Another reason for underestimation is that we only took into consideration those who had visited health facilities within the last month for OPD care or the last year for IPD care. If the respondent had not visited a health facility, then the expenditure was not captured. Despite these limitations, our findings are robust in the sense that they are comparable to previous studies that used the same methodology. Furthermore, household expenditure, rather than household income, is considered in the literature to be the most reliable measure of wealth status for study settings like ours, because people in the informal sector often have no formal or reported income sources, which might result in measurement error.

The study found that the utilization of healthcare services was relatively higher and the incidence of CHE was lower among households enrolled in the iCHF insurance scheme compared with those not enrolled in the scheme. Despite the odds of an insured household incurring CHE being lower compared with noninsured households, we found that being insured did not eliminate the possibility of experiencing CHE.
Therefore, more studies are needed to establish the reasons behind the relatively high incidence of CHE among insured households. Our findings also show that healthcare utilization and the incidence of CHE were lower among households with low SES compared with those with higher SES. Therefore, researchers and policymakers must seek to identify other possible barriers, beyond enrollment into health insurance, that hinder the utilization of healthcare services among households with low SES when formulating policies for Universal Health Coverage in Tanzania.

Additional file 1.\u00a0Proposed household questionnaire on insurance status, health status, access to healthcare, expenditures, socioeconomic status, and demographic characteristics.
Additional file 2.\u00a0Model output for multivariate logistic regression.
Additional file 3.\u00a0Data collection and variable measures."}
{"text": "Objectives: This study was performed to develop a population pharmacokinetic model of pyrazinamide for Korean tuberculosis (TB) patients and to explore and identify the influence of demographic and clinical factors, especially geriatric diabetes mellitus (DM), on the pharmacokinetics (PK) of pyrazinamide (PZA).

Methods: PZA concentrations at random post-dose points, demographic characteristics, and clinical information were collected in a multicenter prospective TB cohort study from 18 hospitals in Korea. Data obtained from 610\u00a0TB patients were divided into training and test datasets at a 4:1 ratio. A population PK model was developed using a nonlinear mixed-effects method.

Results: A one-compartment model with allometric scaling for the body size effect adequately described the PK of PZA. Geriatric DM (age >70\u00a0years) was identified as a significant covariate, increasing the apparent clearance of PZA by 30%, thereby decreasing the area under the concentration\u2013time curve from 0 to 24\u00a0h by a similar degree compared with other patients.
Our model was externally evaluated using the test set and provided better predictive performance compared with the previously published model.Conclusion: The established population PK model sufficiently described the PK of PZA in Korean TB patients. Our model will be useful in therapeutic drug monitoring to provide dose optimization of PZA, particularly for geriatric patients with DM and TB. In the era of the coronavirus disease 19 (COVID-19), tuberculosis (TB) remains a deadly threat globally via single infection and potential coinfection with COVID-19 . AccordiMycobacterium tuberculosis during the early stages of treatment (0-24) of PZA is an important predictor of early culture conversion and good bactericidal activity (max) of PZA ranging from 20\u201460\u00a0\u03bcg/mL has been linked to the lower risk of treatment failure as the main components, resulting in rapid improvement of clinical symptoms . Among treatment . The areactivity . Additio failure . ConsideCurrently, the TB treatment guidelines follow body weight-based dosing. In Korea, the recommended daily dose of PZA for treatment of drug-susceptible TB is 20\u201330\u00a0mg/kg, with a maximum dose of 2000\u00a0mg . NonethePrevious studies have shown that the PZA concentration is influenced by many factors, including genetic polymorphisms, age, comorbidities, and body weight . VinnardTherapeutic drug monitoring (TDM) is useful for optimizing drug therapy by providing a patient-tailored dose according to their PK/pharmacodynamic (PD) results . The appclinicaltrial.gov with the clinical trial number NCT05280886. Ethical approval was obtained from the institutional review board of each clinical site involved in the study. All patients provide written informed consent to participate in the study.This study was performed in accordance with the tenets of the Declaration of Helsinki and the guidelines of our institution. 
The current study was part of a multicenter prospective observational cohort study to develop personalized pharmacotherapy for TB patients and was conducted in 18 hospitals in Korea. We provided a therapeutic drug monitoring procedure to the enrolled patients; therefore, we registered our study at clinicaltrial.gov.

Patients aged >18\u00a0years diagnosed with drug-susceptible TB and receiving a PZA-based regimen for at least 2\u00a0weeks were enrolled in the study. The enrolled patients received an oral daily dose of PZA in the range of 20\u201330\u00a0mg/kg, rounded to the closest tablet size, as prescribed by the physician. All patients underwent sputum testing for a bacteriologically confirmed diagnosis, which included the use of Xpert MTB/RIF testing capable of detecting M. tuberculosis and resistance to rifampin simultaneously, culture testing, and/or acid-fast bacilli (AFB) staining. The PZA dosing regimen followed the current Korean guidelines for TB treatment. Patients who were nonadherent or not in steady-state were excluded. The demographic characteristics of the enrolled patients, anti-TB drug treatments, comorbidities, TB diagnosis, co-medications, and laboratory testing results were collected.

Blood samples (5\u00a0mL) were randomly collected between 0 and 24\u00a0h after the last PZA administration and were stored in heparin tubes. Typically, one sample was drawn from outpatients, whereas at least two samples among pre-dose and 1, 2, and 5\u00a0h after the last dose were drawn from inpatients. A 3-mL portion of each blood sample was centrifuged at 2,000\u00a0g at 4\u00b0C for 10\u00a0min to obtain plasma. The plasma was harvested within 2\u00a0h after blood sampling and stored at a temperature below \u221270\u00b0C until used for drug concentration measurements.
The remaining 2\u00a0mL of each blood sample was stored for genotyping related to the PK of other anti-TB drugs used to treat the patients. The plasma concentration of PZA was measured using a validated high-performance liquid chromatography\u2013electrospray ionization\u2013tandem mass spectrometry method as described previously by our group.

Population PK analysis was performed using NONMEM software, and PK parameters were estimated using first-order conditional estimation with \u025b-\u03b7 interaction. R software was used to analyze the data and generate graphs. The PZA plasma concentrations below the LLOQ were imputed to half of the LLOQ (1\u00a0\u03bcg/mL) according to Beal\u2019s M5 method. Data from nonadherent patients were excluded. During covariate model building, covariates with p < 0.01 were added sequentially. Age, body weight, lean body weight, albumin, and total bilirubin were included as continuous covariates. Meanwhile, sex, fasting or food intake status, DM, liver disease, renal disease, and geriatric DM were investigated as categorical covariates of the PK parameters. The effect of each continuous covariate was explored using a power function of the covariate value normalized to its median, and the effect of each categorical covariate was modeled as a fractional change in the typical parameter value.

External validation was conducted using a test dataset that was not included in the model development. The predicted concentrations were compared with the observed concentrations using the population or the individual PK parameters estimated using the Bayesian method. The predictive performance of the final model was evaluated by comparing the model prediction errors, such as the mean prediction error and the absolute prediction error, with those of previously published population PK models. The criteria for selecting previously published population PK models were as follows: 1) a similar model structure, and 2) a model established in a different ethnic population from the study population.
The external validation aims to compare the final model's performance with that of other published models from different ethnic populations when implemented in the same ethnic population as the study population. The prediction errors were calculated as the mean difference, and the mean absolute difference, between the predicted and observed concentrations.

A total of 613 patients were enrolled, and their plasma PZA concentration measurements were used to establish the model. Each patient contributed one sampling point at a random post-dose time. The study population had a median age of 54\u00a0years (range: 19\u201396\u00a0years), body weight of 60.8\u00a0kg (range: 28.8\u201395.3\u00a0kg), and lean body weight of 48.1\u00a0kg (range: 23.1\u201363.79\u00a0kg), and the proportion of male patients was approximately 67%. In the total patient population, 55 patients had DM, 15 had liver disease, and 26 had renal disease. In addition, 110 patients were older than 70\u00a0years. The median body weight and lean body weight of this elderly population were 55.5\u00a0kg (range: 28.8\u201381.0\u00a0kg) and 44.5\u00a0kg (range: 23.14\u201358.77\u00a0kg), respectively. Among the elderly population, 23 patients had DM, 11 had renal disease, and 2 had liver disease. The baseline patient characteristics are presented in .

While the IIV in both CL/F and Ka was estimated, the IIV in Vd/F was fixed to stabilize the model and achieve successful minimization. The value used to fix the IIV of Vd/F was obtained from a model run prior to fixing it. Allometric scaling was included for both CL/F and Vd/F using lean body weight as the predictor of body size. The inclusion of allometric scaling with lean body weight in the base model was based on a greater reduction in OFV (\u2206OFV: 112.3) in comparison with using total body weight (\u2206OFV: 88.5). Several absorption models that were evaluated did not improve model performance and thus were not included in further analysis.
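The prediction-error metrics used for external validation are commonly computed as below; this is a sketch of one common definition (the paper's exact equations, e.g., percentage-scaled versions, may differ), with function and variable names of our own:

```python
def prediction_errors(observed, predicted):
    """Mean prediction error (bias) and mean absolute prediction error
    (precision) between observed and model-predicted concentrations."""
    n = len(observed)
    mpe = sum(p - o for o, p in zip(observed, predicted)) / n
    mape = sum(abs(p - o) for o, p in zip(observed, predicted)) / n
    return mpe, mape

# Hypothetical observed vs. predicted PZA concentrations (ug/mL)
prediction_errors([10.0, 20.0, 30.0], [12.0, 18.0, 30.0])  # (0.0, ~1.33)
```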
Given the aging of the Korean population, incorporating a covariate representing individuals aged 70\u00a0years or older with DM would be more relevant. The CL/F was estimated as 5.9\u00a0L/h for patients aged \u226570\u00a0years with DM and as 4.49\u00a0L/h for other patients. The estimated values of Vd/F and Ka were 44.2\u00a0L and 1.49 h\u22121, respectively. The estimated PK parameters of PZA and the NONMEM code of the final model are shown in We evaluated the covariates of age, height, sex, feeding status, AST, ALT, albumin, total bilirubin, DM, advanced age , renal disease, and liver disease. However, none of these covariates improved the OFV. Therefore, as most TB patients in Korea are of advanced age, an additional evaluation was performed using combinations of age \u226560, \u226565, and \u226570\u00a0years with comorbidities such as DM, renal disease, and liver disease. Among these covariate groups, only age \u226560, \u226565, and \u226570\u00a0years combined with DM had a significant effect on the CL/F of PZA, with the combination of age \u226570\u00a0years and DM showing the largest OFV reduction, by 151.3 points , 131.1\u00a0\u03bcg\u00a0h/mL (IQR: 122.3\u2013131.1\u00a0\u03bcg\u00a0h/mL), 99.87\u00a0\u03bcg\u00a0h/mL (IQR: 95.67\u2013116.27\u00a0\u03bcg\u00a0h/mL), and 138.4\u00a0\u03bcg\u00a0h/mL (IQR: 130.3\u2013159.6\u00a0\u03bcg\u00a0h/mL), respectively. The Cmax values were 21.86\u00a0\u03bcg/mL (IQR: 20.22\u201323.94\u00a0\u03bcg/mL), 23.88\u00a0\u03bcg/mL (IQR: 21.26\u201327.11\u00a0\u03bcg/mL), 21.28\u00a0\u03bcg/mL (IQR: 19.8\u201322.37\u00a0\u03bcg/mL), and 25.92\u00a0\u03bcg/mL (IQR: 22.62\u201330.6\u00a0\u03bcg/mL), respectively. These findings indicated that exposure to PZA in elderly patients with DM would be significantly decreased because of the higher CL/F. Furthermore, we found that DM increased the CL/F of PZA regardless of age.
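Using the final estimates quoted above (CL/F 4.49 L/h, or 5.9 L/h for patients \u226570 years with DM; Vd/F 44.2 L; Ka 1.49 h\u22121), a one-compartment model with first-order absorption can be simulated to see how the geriatric-DM covariate lowers exposure. The 1500 mg dose and the once-daily steady-state relation AUC0-24 = Dose / (CL/F) are illustrative assumptions here, not values taken from the study:

```python
import math

def conc(dose_mg, t_h, clf=4.49, vdf=44.2, ka=1.49):
    """Plasma concentration (mg/L) after a single oral dose for a
    one-compartment model with first-order absorption and elimination.
    Default parameters are the report's final estimates."""
    ke = clf / vdf  # first-order elimination rate constant (1/h)
    return dose_mg * ka / (vdf * (ka - ke)) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

def auc24_ss(dose_mg, clf):
    """Once-daily steady-state AUC over 24 h (ug*h/mL): Dose / (CL/F)."""
    return dose_mg / clf

# At the same dose, geriatric-DM patients (CL/F 5.9 L/h) receive
# 4.49/5.9 ~ 76% of the exposure of other patients (CL/F 4.49 L/h).
exposure_ratio = auc24_ss(1500, 5.9) / auc24_ss(1500, 4.49)
```

Under these assumptions, an illustrative 1500 mg daily dose gives an AUC0-24 of about 334 \u03bcg\u00a0h/mL at CL/F 4.49 L/h and about 254 \u03bcg\u00a0h/mL at 5.9 L/h, a roughly 24% loss of exposure attributable to the geriatric-DM covariate alone.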
In addition, in the absence of DM, the CL/F of PZA tended to be lower in the elderly population than in younger patients. When both covariates were considered together, advanced age combined with DM greatly increased the CL/F of PZA. Therefore, the higher CL/F of PZA may be driven by DM as a comorbidity. The estimated AUCTo our knowledge, few studies have examined the interaction between advanced age and comorbidities in TB. As half of the new cases of TB in Korea were identified in elderly patients, there were concerns that the interaction of comorbidities and age may alter the PK of anti-TB drugs, resulting in poor treatment outcomes or a risk of adverse drug reactions. In this study, a PZA population PK model was developed to investigate the effects of age and other crucial clinical characteristics of Korean TB patients. Our one-compartment structural model with first-order absorption\u2013elimination and additive residual error described the PK of PZA well and was consistent with previously reported models . AllometAlthough a direct comparison between our study and those studies was not feasible because of the different body size descriptors used in the models, the incorporation of allometric scaling marked a similarity in model structure. The median total body weight and lean body weight in our study were similar to In addition to lean body weight, geriatric DM contributed to the IIV in the PZA concentration. Using a mixed-effects model, we found that age, in terms of elderly patients (\u226570\u00a0years old), and DM had an explanatory effect on the IIV in the CL/F of PZA in Korean TB patients. This significant effect of advanced age with DM on the CL/F of PZA distinguishes our model from previous population PK studies . In the Despite that, several studies have linked DM to a poor TB outcome and an increased risk of TB infection . It has Controlling DM in elderly patients is challenging . The phyM.
tuberculosis may differ from region to region. Thus, the efficacy targets of AUC0-24 \u2265 363\u00a0\u03bcg\u00a0h/mL and/or Cmax \u2265 30\u00a0\u03bcg/mL of PZA were commonly used to adjust the dose due to its association with good treatment outcomes DM as an important covariate for the CL/F of PZA. We found that the geriatric DM population had a higher CL/F of PZA and lower exposure of PZA compared with other patients. The population PK model that we developed can be further used to optimize TB treatment via MIPD-based TDM implementation."} +{"text": "Community\u2010acquired pneumonia (CAP) is a serious clinical concern. A lack of accurate diagnosis could hinder pathogen\u2010directed therapeutic strategies. To solve this problem, we evaluated clinical application of nested multiplex polymerase chain reaction (PCR) in children with severe CAP. We prospectively enrolled 60 children with severe CAP requiring intensive care between December 2019 and November 2021 at a tertiary medical center. Nested multiplex PCR respiratory panel (RP) and pneumonia panel (PP) were performed on upper and lower respiratory tract specimens. We integrated standard\u2010of\u2010care tests and quantitative PCR for validation. The combination of RP, PP, and standard\u2010of\u2010care tests could detect at least one pathogen in 98% of cases and the mixed viral\u2010bacterial detection rate was 65%. The positive percent agreement (PPA), and negative percent agreement (NPA) for RP were 94% and 99%; the PPA and NPA for PP were 89% and 98%. The distribution of pathogens was similar in the upper and lower respiratory tracts, and the DNA or RNA copies of pathogens in the lower respiratory tract were equal to or higher than those in the upper respiratory tract. 
PP detected bacterial pathogens in 40 (67%) cases, and clinicians tended to increase bacterial diagnosis and escalate antimicrobial therapy for them.\u00a0RP and PP had satisfactory performance to help pediatricians make pathogenic diagnoses and establish therapy earlier. The pathogens in the upper respiratory tract had predictive diagnostic values for lower respiratory tract infections in children with severe CAP. For children under the age of five, it has been estimated that approximately 900\u2009000 children died of pneumonia in 2015 globally, accounting for 15% of child mortality.In clinical practice, diagnostic tests are mainly based on standard\u2010of\u2010care (SOC) tests, such as microbial cultures, serological detection, and fluorescent immunoassays, which are relatively time\u2010consuming and have low sensitivity. In addition, the diagnostic rate is limited due to prior antibiotic usage and the difficulty in collecting high\u2010quality lower respiratory tract specimens, particularly in young children who are often uncooperative. The lack of an accurate microbial diagnosis might hinder well\u2010established pathogen\u2010directed treatment plans. Presumably, lower respiratory tract infections in children usually result from the replication and spread of pathogenic viruses and bacteria from the upper respiratory tract, which might invade the mucosa and lower airways, resulting in clinical inflammation. 
Therefore, it is worthwhile to verify whether upper respiratory tract pathogens could predict lower respiratory tract infection in young children, among whom it is difficult to obtain high\u2010quality lower respiratory tract specimens. Currently, the development of nucleic acid amplification detection has rapidly advanced the identification of viral and microbial pathogens, which is of great significance for the treatment and control of infection. Between December 2019 and November 2021, we prospectively enrolled children (under 18 years of age) with severe CAP, defined as CAP requiring admission to pediatric ICUs at National Taiwan University Hospital (NTUH), a 2600\u2010bed tertiary medical center. The inclusion criteria were based on clinical evidence of acute lower respiratory tract infection and positive radiological evidence of pneumonia on chest X\u2010ray (CXR) or computed tomography (CT) within 48\u2009h after admission , or bronchoalveolar lavage (BAL), were collected by trained staff from all enrolled children within 48\u2009h after admission. All respiratory samples were transported to the clinical microbiology and virology laboratories of NTUH, which are accredited by the College of American Pathologists and the Taiwan Accreditation Foundation, for SOC diagnostic tests such as microbial cultures, serological tests, fluorescent immunoassays, and PCR. Clinical characteristics, results of SOC diagnostic tests, and electronic medical record (EMR) data were collected for enrolled children. The FilmArray\u00ae BioFire\u00ae Pneumonia Panel (PP) and Respiratory Panel 2.1 (RP) are syndrome\u2010specific, cartridge\u2010based, nested multiplex PCR panels performed in an automated manner with results available in approximately 1\u2009h\u00a0(bioM\u00e9rieux). To orthogonally validate the results of the PP and the RP, two strategies were employed.
First, respiratory specimens were sent for Gram staining and culture to detect common culturable pathogens according to standard protocols. Second, for difficult\u2010to\u2010cultivate microorganisms, real\u2010time quantitative PCR (qPCR)\u2010based nucleic acid detection was applied. The methods and primer sets for nine viruses and four bacteria are described in Supporting information: Table\u00a03.2\u03c72 analysis. The PP and RP were considered concordant, such as true positive (TP) or true negative (TN), when they were consistent with the results from the corresponding orthogonal validation tests. Microorganisms identified only by RP or PP and not by the orthogonal validation tests were considered false\u2010positives (FP), and vice versa were considered false negatives (FN). The diagnostic agreement between the multiplex PCR panels and orthogonal validation tests was measured for each pathogen in the form of positive percent agreement (PPA\u2009=\u2009TP/[TP\u2009+\u2009FN]), negative percent agreement (NPA\u2009=\u2009TN/[TN\u2009+\u2009FP]), and overall percent agreement (OPA\u2009=\u2009[TP\u2009+\u2009TN]/[TP\u2009+\u2009FP\u2009+\u2009TN\u2009+\u2009FN]).As the primary study endpoint, to evaluate the diagnostic performance of this novel diagnostic technology, pathogen detection rates, positive percent agreement (PPA), and negative percent agreement (NPA) were assessed. To compare the pathogen detection rates between the nested multiplex PCR panels and SOC diagnostic tests, the proportion of pathogen types was calculated by As the secondary endpoint, pathogens and their DNA or RNA copies were compared between the upper and lower respiratory tracts. The distribution and concordance rates of pathogens in the upper and lower respiratory tracts were assessed. To assess the efficiency of pathogen detection for upper and lower respiratory tract specimens, the McNemar test was used. 
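The agreement definitions above, and the McNemar test used to compare detection between specimen types, can be sketched directly from the formulas given in the text; the discordant-pair counts in the McNemar example are illustrative, not the study's data:

```python
import math

def agreement(tp, fp, tn, fn):
    """PPA = TP/(TP+FN), NPA = TN/(TN+FP), OPA = (TP+TN)/total,
    as defined in the text (returned as fractions)."""
    total = tp + fp + tn + fn
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / total

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test on the discordant pairs:
    b = pathogen detected only in upper-tract specimens,
    c = pathogen detected only in lower-tract specimens.
    p = 2 * P(X <= min(b, c)) with X ~ Binomial(b + c, 0.5), capped at 1."""
    n, k = b + c, min(b, c)
    p = 2 * sum(math.comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(1.0, p)
```

Concordant pairs (detected in both tracts or in neither) drop out of the McNemar statistic, which is why only the discordant counts appear; statistical packages such as statsmodels provide the same exact test ready-made.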
To compare the DNA or RNA copies of pathogens measured by qPCR between the upper and lower respiratory tracts, the Wilcoxon signed-rank test was used. \u03c72 tests or Fisher's exact test were used to measure differences. The Shapiro\u2013Wilk test was used to assess the distribution of continuous variables. Mean (standard deviation) and t\u2010tests were used for normally distributed data, and median (interquartile range) and Mann\u2013Whitney U tests were used for non\u2010normally distributed data. A multivariable analysis was performed to identify the most significant factors associated with clinical outcome or severity. p values less than 0.05 were considered statistically significant. SPSS (version 24) was used for statistical analysis. For the analysis of clinical outcome, the pediatric sequential organ failure assessment score (pSOFA score) was used to grade organ dysfunction in pediatric patients. p\u2009<\u20090.01] and pSOFA score , adjusted for age and sex. From December 2019 to November 2021, a total of 60 children with severe CAP admitted to the ICU were enrolled. The patients were divided into critical cases and severe cases . Their demographic data and characteristics are shown in Table\u00a0. p\u2009<\u20090.01). After integration of the SOC diagnostic tests, PP, and RP, the detection rate of at least one potential pathogen noticeably increased to 98%, and the mixed viral\u2010bacterial detection rate increased to 65%. As shown in Figure\u00a0, Staphylococcus aureus, Streptococcus pneumoniae, and Moraxella catarrhalis. Four cases with S. aureus, M.
catarrhalis, or Pseudomonas aeruginosa were not detected in the initial cultures but were positive in the subsequent microbial cultures several days after the PP test.As shown in Figure\u00a0Mycoplasma pneumoniae, except human metapneumovirus and influenza B virus .To elucidate the relationship between the upper and lower airways, we integrated PP, RP, SOC diagnostic tests and quantitative PCR to comprehensively present the pathogens detected in this study .Furthermore, we analyzed paired qPCR DNA or RNA copies of five viruses, three bacteria, and one atypical bacterium in upper and lower respiratory tract specimens Figure\u00a0. Overall4.4p\u2009=\u20090.01), viruses or mixed viral and bacterial pathogens were detected in induced sputum than in ETAs. In 40 cases with bacterial pathogens detected in PP, clinicians were more likely to make the diagnosis of bacterial infection , escalate antimicrobial treatment , and increase infection control policies than in 20 cases without bacteria detected in PP. Among all 60 enrolled cases, three cases required surgical intervention due to empyema, and two cases died.In an analysis of clinical outcomes, 27 of 30 critical cases (90%) required endotracheal ventilator support. Therefore, the specimen types were ETA in 27 cases and induced sputum in 33 cases. From the results of PP, more bacteria . In the real world, however, obtaining high\u2010quality lower respiratory tract specimens from children with lower respiratory tract infections is difficult. Therefore, the use of upper respiratory tract specimens combined with highly sensitive pathogenic molecular detection techniques might be a potential diagnostic solution.In children, lower respiratory tract infections might be induced by pathogenic viruses and bacteria in the upper respiratory tract. Teo S.M., et al. 
reported that the nasopharyngeal microbiome in infants could impact the severity of lower respiratory infection. We found that clinicians tended to adjust their clinical diagnosis and management strategies when bacterial pathogens were detected by PP. Some bacteria grew in subsequent microbial cultures several days after the PP test. These findings suggest that initial detection by nucleic acid techniques such as PP or qPCR might have significant clinical implications in children, especially those suffering from severe CAP. In our study, 53% (32/60) of children had human rhinovirus detected in their lower respiratory tract specimens, and six children had human rhinovirus (hRV) as the only detected pathogen. In addition, hRV qPCR RNA copies were significantly higher in lower respiratory tract specimens than in upper respiratory tract specimens. Human rhinovirus is the most common cause of respiratory diseases in children, accounting for more than half of acute upper respiratory tract infections. The strength of our study was the prospective combination of FilmArray RP and PP for the detection of upper and lower respiratory tract pathogens in pediatric ICU patients, but there were some limitations. First, the study period coincided with the COVID\u201019 pandemic. Because of social distancing and mask wearing, the prevalence of influenza and other respiratory viruses decreased, and the characteristics of infections during epidemics might differ from those during non\u2010epidemic periods. Second, we designed qPCR orthogonal validation experiments for most of the viruses and bacteria covered by PP, but some viruses and bacteria had a low prevalence, and these pathogens were not included in the analysis.
Third, this was a prospective single\u2010center observational cohort study, and a multicenter randomized controlled study is required to verify the potential value of its clinical application in the future. Our study demonstrated that a new molecular diagnostic technique, nested multiplex PCR RP and PP, had powerful diagnostic performance and could help clinicians make pathogenic diagnoses and start specific antimicrobial therapy in a timely manner. Since lower respiratory tract specimens are difficult to obtain, upper respiratory tract specimens combined with molecular diagnostics might break the diagnostic barrier in children. Ting\u2010Yu Yen and Luan\u2010Yin Chang developed the idea, designed the study, and were responsible for the accuracy of the data analysis. Yen\u2010Yu Yen and Jian\u2010Fu Chen conducted experiments, collected and analyzed data, and created graphs and tables. Chun\u2010Yi Lu, En\u2010Ting Wu, Ching\u2010Chia Wang, and Frank Leigh Lu enrolled study patients and collected clinical specimens. Ting\u2010Yu Yen, Jian\u2010Fu Chen, and Luan\u2010Yin Chang wrote the manuscript. Li\u2010Min Huang and Luan\u2010Yin Chang provided advice and suggestions for improving the manuscript. All authors reviewed and approved the final manuscript. The authors declare no conflict of interest."