diff --git "a/deduped/dedup_0178.jsonl" "b/deduped/dedup_0178.jsonl" new file mode 100644--- /dev/null +++ "b/deduped/dedup_0178.jsonl" @@ -0,0 +1,53 @@ +{"text": "As part of a long-term initiative to improve cancer surveillance in New York State, small area maps of relative risk, expressed as standardized incidence ratios (SIRs), were produced for the most common cancers. This includes prostate cancer, the focus of this paper, since it is the most common non-dermatologic malignancy diagnosed among men and the second leading cause of cancer deaths for men in the United States.ZIP codes were chosen as mapping units for several reasons, including the need to balance between protecting personal privacy and public demand for fine geographic resolution. Since the population size varies greatly among such small mapping units, hierarchical Bayes spatial modelling was applied in this paper to produce a map of smoothed SIRs. It is further demonstrated how other characteristics of the large sample from the stationary posterior distribution of SIRs can be mapped to investigate various aspects of the statewide spatial pattern of prostate cancer incidence.Thematic mapping of the median and 95 percentile range of SIRs provided, respectively, a map of spatially smoothed values and the uncertainty associated with these smoothed values. Maps were also produced to identify ZIP codes expressing a 95% probability, in the Bayesian paradigm, of being less than or greater than the null value of 1.The model behaved as expected since areas that were statistically elevated coincided with areas identified by the spatial scan statistic, plus the relative uncertainty increased as a ZIP code's population decreased, with an exaggerated effect for low population ZIP codes on the edge of the state border.The overall smoothed pattern, along with identified high and low areas, may reflect difference across the state with respect to socio-demographics and risk factors; however, this is confounded by potential differences in screening and diagnostic follow-up. Nevertheless, the Bayes modelling approach is shown to provide not only smoothed results, but also considerable other information from a large empirical distribution of outcomes associated with each mapping unit. Geographic surveillance of chronic disease is central to understanding spatial or spatial-temporal patterns that may help to identify discrepancies in disease burden among different regions or communities. As part of ongoing efforts in New York State to understand spatial patterns of cancer and to help implement cancer prevention and control programs, small area maps of cancer relative risk, expressed as standardized incidence ratios (SIRs), have been produced and shared with the public for the Prostate cancer, the focus of this paper, was included because it is the most common non-dermatologic malignancy diagnosed among men and the second leading cause of cancer deaths for men in the United States (US) . AlthougResults for prostate cancer are reproduced in Figure It is well recognized that the stability of population-based statistics like the SIRs in Figure y, as a random variable that has arisen from a probability distribution with expectation \u03b8. This expectation is modeled, via an appropriate link g(\u00b7), as a linear function g(\u03b8) = \u03b1 + \u03b2 x'+ \u03b5, for a common value \u03b1, explanatory covariates \u03b2 x'and a random effect \u03b5 that captures unexplained variation. 
If the random effect is associated with exchangeable spatial heterogeneity, estimates are smoothed towards a global mean, whereas if the random effect is associated with local spatial autocorrelation, estimates are smoothed towards a local neighborhood mean, which is typically more meaningful in geographic epidemiology. There are different approaches to modelling local spatial dependence, and section 6.3 of Cressie , where l is the mean maximum achievable log likelihood, obtained for a saturated model where a parameter is assigned to each datum, and l is the mean log likelihood obtained for the model in question. This takes the conventional assessment of deviance for generalized linear models [Variations of the model defined above were compared by evaluating the mean deviance of 1000 iterations chosen from the three independent Markov Chains after burn-in. This was done by obtaining the mean of -2(log likelihood) for each iteration, as provided by the r models and applr models .Incorporating a random effect associated with local spatial structure (CAR term) provides much stronger prior information than the exchangeable random effect alone Table , which a"} +{"text": "In vivo evaluation requires invasive imaging procedures that cannot be repeated serially.Endothelial function in hypercholesterolemic rabbits is usually evaluated ex vivo measurements.We evaluated a non-invasive ultrasound technique to assess early endothelial function in rabbits and compare data with in vivo by transcutaneous vascular ultrasound of the abdominal aorta. Ex vivo endothelial function was evaluated on isolated aortic rings and compared to in vivo data.Twenty-four rabbits (fed with a cholesterol diet (0.5%) for 2 to 8 weeks) were given progressive infusions of acetylcholine (0.05\u20130.5 \u03bcg/kg/min) and their endothelial function was assessed ex vivo results.Significant endothelial dysfunction was demonstrated in hypercholesterolemic animals as early as 2 weeks after beginning the cholesterol diet . Unexpectedly, response to acetylcholine at 8 weeks was more variable. Endothelial function improved in 5 rabbits while 2 rabbits regained a normal endothelial function. These data corroborated well with in vivo by transcutaneous vascular ultrasound of the abdominal aorta in the rabbit and results correlate well with ex vivo data.Endothelial function can be evaluated non-invasively Historically, evaluation of endothelial function in small animals has been performed on isolated vessel segments, or vessels exposed by surgical procedures. Very few attempts were made to develop a method of analysis of endothelium-dependent relaxation in vivo-5. In thNon-invasive methods to study endothelial function in humans have been used for many years and have yielded an important amount of data -12. Unfoin vivo in rabbits and to compare this non-invasive method with results obtained ex vivo on isolated aortic rings.The objective of the current study was to assess the reliability of transcutaneous vascular ultrasound in order to evaluate endothelial function Acetylcholine, nitroglycerin and sodium nitroprusside were from Sigma . Angiotensin II and endothelin-1 peptides were acquired from Peninsula Laboratories Inc. Guide to the care and use of experimental animals published by the Canadian Council on Animal Care and the protocol was approved by the Animal Protection Committee of the Universit\u00e9 Laval. 
Sixteen rabbits were divided in two groups (n = 8) and all animals were fed with standard rabbit chow supplemented with 0.5% cholesterol (w/w) for 2 or 8 weeks respectively. The other 8 animals received normal rabbit chow for eight weeks . After 2 weeks, 8 randomly chosen cholesterol-fed rabbits were killed; the others were kept alive for an additional 6 weeks as for the normal control group.Twenty-four male New Zealand White rabbits (3\u20134 kg body weight) were used in this study. Animals were treated in accordance to the ex vivo experiments. Plasma samples were drawn from the marginal ear vein every week and plasma cholesterol levels were determined using a commercially available spectrophotometric assay kit .When animals were sacrificed abdominal and thoracic aortas were excised and immediately rinsed in freshly prepared Krebs buffer in preparation for the Ultrasound evaluation of endothelial function of the abdominal aorta was performed at baseline, 2 weeks and 8 weeks. Rabbits were sedated using midazolam (0.5 mg/kg), butorphanol (0.5 mg/kg) and ketamine (30 mg/kg) IM. Marginal ear vein and artery were cannulated for drug infusions and arterial blood pressure monitoring, respectively Figure . Heart rOnce the imaging of the aorta was considered optimal, the animals received the following drug perfusions I.V sequentially for 2 minutes each: 1) saline at 1 ml/min; 2) acetylcholine (Ach) at 0.05 \u03bcg/ml/min and Ach at 0.5 \u03bcg/ml/min. Nitroglycerin (5 \u03bcg/ml/min) was used as positive control. Typical arterial blood pressure recordings are illustrated in Figure 2 where D is the diameter of the aorta. Area was expressed in percent of change from baseline. Inter and intra-observer variability was assessed on 10 randomly selected studies.Video sequences from the first 15 seconds (void volume) of drug infusion at baseline and between 40 to 60 seconds of drug infusion were digitized and stored on a computer for analysis Figure . Still fAt the end of the protocol rabbits were given a sub-lethal dose pentobarbital (25 mg/kg) and were sacrificed by exsanguination. The middle part of the descending thoracic aorta as well as the abdominal aorta were removed and dissected free of adhering fat and connective tissues. The aorta was placed in warm Krebs solution. Rings of 5 mm thickness were suspended in individual organ chambers filled with 5 ml of oxygenated Krebs (37C pH 7.4). The segments were connected to force transducers and any variations in force were recorded continuously .-9 to 10-5 M), angiotensin II (10-10 to 10-7 M) and endothelin-1 (10-9 to 10-6 M). Results were compared to the initial response obtained with KCL 80 mM.Baseline contractile response was evaluated by a 30 to 60 minutes exposition to KCl (80 mM) where the rings were gradually stretched to a resting tension of 2 g until steady state was reached. Following this initial experiment, contractile capacity was further evaluated by exposing the rings to other vasoconstrictors. Briefly, when the rings had recovered their resting tension after the initial KCL exposure, they were exposed sequentially to cumulative concentrations of L-phenylephrine . Cumulative concentrations of acetylcholine (10-9 to 3 \u00d7 10-6 M) or sodium nitroprusside (10-10 to 3 \u00d7 10-5 M) were used. Sodium nitroprusside was used as a non-endothelial dependant vasodilator while acetylcholine evaluated the endothelial-dependant vasodilatation. 
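The two percent-change summaries used in the imaging analysis above and in the organ-chamber analysis described next can be written as small helper functions. The sketch below assumes a circular aortic lumen (area = πD²/4) and expresses ring relaxation as a percent of the phenylephrine pre-contracted tension; both conventions are assumptions for illustration and may differ in detail from the authors' exact definitions.

```python
import math

def aortic_area(diameter_mm: float) -> float:
    """Luminal cross-sectional area from a diameter measurement (circular lumen assumed)."""
    return math.pi * diameter_mm ** 2 / 4.0

def percent_area_change(d_baseline_mm: float, d_drug_mm: float) -> float:
    """Percent change in aortic area between baseline and drug infusion."""
    a0, a1 = aortic_area(d_baseline_mm), aortic_area(d_drug_mm)
    return (a1 - a0) / a0 * 100.0

def percent_relaxation(precontracted_g: float, tension_g: float) -> float:
    """Ring relaxation as a percent of the pre-contracted tension (assumed convention)."""
    return (precontracted_g - tension_g) / precontracted_g * 100.0

print(percent_area_change(3.0, 3.2))   # ~13.8% increase in area
print(percent_relaxation(2.5, 1.0))    # 60% relaxation
```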
Relaxation was expressed as a percent of change from the pre-contracted tension with PE.Relaxation studies were performed after a precontraction with PE . Differences between the various conditions in the Total cholesterol levels were measured weekly in the serum of cholesterol-fed rabbits. As illustrated in figure As expected, saline alone had no effect. Low doses of acetylcholine (ACh 0.05 and ACh 0.5 \u03bcg/kg/min) had only a minor and transient lowering effect on blood pressure. As illustrated in Figure in vivo results, we performed sections isometric contraction-relaxation experiments on isolated aortic rings. In Figure In order to confirm the validity of the We then studied the endothelium-dependent relaxation using acetylcholine on our aortic rings after a pre-contraction with phenylephrine (1 \u03bcM). As illustrated in Figure in vivo, the endothelial function of the animals fed 8 weeks with the cholesterol diet was heterogeneous. In those animals hypercholesterolemia had no effect on the acetylcholine-induced relaxation of thoracic aortic rings while for the abdominal aortic sections; the response to acetylcholine was highly variable. As illustrated in Figure As seen Our results clearly show that endothelial function can be assessed non-invasively by transcutaneous ultrasound of the abdominal aorta in hypercholesterolemic rabbits. The method was easily feasible in all animals and yielded very reproducible results. We also show that this in vivo method correlates very well with the ex vivo evaluation of endothelial function on isolated aortic rings. To our knowledge, this is the first demonstration of such a comparison.ex vivo evaluation of endothelial function on aortic rings [ex vivo data [Ultrasound imaging of the brachial artery in response to reactive hyperaemia has been used in many studies in humans -12. Normic rings . Our metivo data ,14.in vivo and ex vivo. This transient improvement of endothelial function in the early phases of the atherosclerotic process has never been described before to our knowledge and the underlying mechanisms responsible for this paradoxical response need to be explored. This dysfunction may relate to an initial stress response of the aortic endothelium to hyperlipidemia then evolving with the development of atherosclerosis lesions.The extent of endothelial dysfunction observed after 2 weeks of hypercholesterolemic diet was surprising although this parameter has not been studied very much after such a short exposure to hypercholesterolemia in rabbits [in vitro data. A transient improvement in endothelial function can occur after 8 weeks of hypercholesterolemia in some animals for reasons that remain unclear.Endothelial function can be evaluated non-invasively in rabbits using a standard vascular ultrasound probe by a trans-abdominal approach. Results correlate well with"} +{"text": "A population-based case-control study was undertaken in 1997 to investigate the association between tetrachloroethylene (PCE) exposure from public drinking water and breast cancer among permanent residents of the Cape Cod region of Massachusetts. PCE, a volatile organic chemical, leached from the vinyl lining of certain water distribution pipes into drinking water from the late 1960s through the early 1980s. 
The measure of exposure in the original study, referred to as the relative delivered dose (RDD), was based on an amount of PCE in the tap water entering the home and estimated with a mathematical model that involved only characteristics of the distribution system.In the current analysis, we constructed a personal delivered dose (PDD) model that included personal information on tap water consumption and bathing habits so that inhalation, ingestion, and dermal absorption were also considered. We reanalyzed the association between PCE and breast cancer and compared the results to the original RDD analysis of subjects with complete data.th and >75th percentile when shorter latency periods were considered, and for exposures < 50th and >90th percentile when longer latency periods were considered. Overall, however, the results from the PDD analysis did not differ greatly from the RDD analysis.The PDD model produced higher adjusted odds ratios than the RDD model for exposures > 50The inputs that most heavily influenced the PDD model were initial water concentration and duration of exposure. These variables were also included in the RDD model. In this study population, personal factors like bath and shower temperature, bathing frequencies and durations, and water consumption did not differ greatly among subjects, so including this information in the model did not significantly change subjects' exposure classification. In 1988, an unusually high incidence of cancer in the Cape Cod region of Massachusetts prompted a series of epidemiological studies to investigate possible environmental risk factors associated with the region, including tetrachloroethylene-contaminated drinking water . TetrachA population-based case-control study was undertaken to investigate the association between tetrachloroethylene exposure from public drinking water and breast cancer . The stuBecause PCE is a volatile organic chemical that readily escapes from water into air, the amount of PCE inhaled during showers and baths, as well as the amount ingested and dermally absorbed, was relevant. The RDD measure does not consider these exposure pathways, which could potentially result in bias from exposure misclassification. Using personal exposure factors such as tap water consumption and bathing habits, we constructed a dose model to quantify the relative amount of PCE taken in by each subject, which we refer to as the personal delivered dose (PDD). The dose values calculated by the PDD model were subsequently used to measure the strength of the association between PCE exposure and the risk of breast cancer. The objective was to see if additional information contained in individual survey data affected associations between breast cancer and PCE exposure.The population-based case-control study was designed to evaluate the association between breast cancer and tetrachloroethylene (PCE) exposure from public drinking water . During Female controls were chosen to represent the underlying population that gave rise to the cases. Selection criteria required controls to be permanent residents of the same towns during 1987\u20131993. Controls were frequency matched to cases on age and vital status. 
Because many of the cases were elderly or deceased, three different sources of controls were used: (1) random digit dialing identified living controls less than 65 years of age; (2) Centers for Medicare and Medicaid Services, formerly the Health Care Financing Administration, identified the living controls 65 years of age or older; and (3) death certificates identified controls who had died from 1987 onward. The resulting 616 controls provide an estimate of the exposure distribution in the underlying population.Subjects or their next-of-kin completed extensive interviews, which provided information on demographics , a 40-year residential history, and potential confounders . Next-of-kin served as proxies for cases and controls who were deceased or too ill to participate in the interview. \"Index years\" were randomly assigned to controls to achieve a distribution similar to that of cases' diagnosis years and only exposures before the diagnosis year (for cases) and index year (for controls) were counted. The analysis considered a range of latent periods: 0, 5, 7, 9, 11, 13, 15, 17, and 19 years. For a detailed description of the methods, see Aschengrau et al. [If individual behavior in water use is an important element in a person's exposure, using the relative delivered dose (RDD) could bias the results. The RDD quantifies the amount of PCE in the drinking water, but does not consider exposure from inhalation, dermal absorption, and ingestion. PCE is a volatile organic compound and daily indoor inhalation exposure to contaminated water from showering can be up to six times greater than exposure from ingestion . To furtNon-proxy cases and controls were interviewed about many of these factors: the number of glasses of tap water consumed per day, including drinks made with tap water, such as coffee or lemonade; the use of bottled water; and the temperature, frequency, and duration of showers and baths. Information on a subject's physical characteristics, such as height and usual weight, was also obtained. Certain model parameters not provided by the questionnaire were obtained from the current scientific literature .We used this information to construct a personal delivered dose (PDD) model that considered three exposure routes: inhalation, dermal absorption, and ingestion. The RDD value was converted into an annual concentration and used as the initial water concentration for the PDD model (mg/L). The amount of PCE contributed by inhalation is a function of the temperature, frequency and duration of baths and showers, and the concentration of PCE in the bathtub/shower stall air. To determine the amount of PCE that volatilized from the water, the two-resistance theory was applied to temperature dependent physical and chemical properties of PCE . The derQuestions regarding tap water use and bathing habits were not asked in proxy interviews so the PDD analysis was restricted to non-proxy subjects ; race; marital status; religion; education level; and physical activity level. None of these additional variables changed the adjusted estimates by more than 10%, and so the final models included only the core confounders. Adjusted analyses were not performed if there were fewer than three exposed cases and three exposed controls in an exposure level [Exposure groups were further categorized for latent periods that ranged from 0 to 19 years. Each exposure level was treated as a binary variable in separate multiple logistic regression models. 
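A minimal sketch of that step, fitting one binary exposure indicator together with a subset of the core confounders in a logistic model and reading off the adjusted odds ratio; the simulated data, variable names, and the reduced confounder set shown here are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated analysis file: one row per subject, case status, a binary indicator
# for one exposure level (e.g. PDD above the 90th percentile of exposed controls),
# and two of the core confounders; names and values are placeholders.
np.random.seed(0)
n = 500
df = pd.DataFrame({
    "case": np.random.binomial(1, 0.5, n),
    "exposed_gt_p90": np.random.binomial(1, 0.1, n),
    "age": np.random.normal(65, 10, n),
    "family_history": np.random.binomial(1, 0.15, n),
})

X = sm.add_constant(df[["exposed_gt_p90", "age", "family_history"]])
fit = sm.Logit(df["case"], X).fit(disp=False)

odds_ratios = np.exp(fit.params)     # adjusted OR for each term
conf_int = np.exp(fit.conf_int())    # 95% confidence limits on the OR scale
print(odds_ratios["exposed_gt_p90"], conf_int.loc["exposed_gt_p90"].values)
```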
Odds ratios (ORs) were calculated for each exposure level relative to never-exposed cases (n = 360) and controls (n = 336). The adjusted analysis controlled for a group of core confounders: age at diagnosis or index year, family history of breast cancer, personal history of breast cancer (before current diagnosis or index year), age at first live birth or stillbirth, and occupational exposure to PCE. These factors were chosen as confounders re level . We calcre level .th percentile, greater than the 50th percentile, greater than the 75th percentile, and greater than the 90th percentile. The referent category remained never exposed cases and controls.We then repeated the crude and adjusted analyses using each subject's personal delivered dose (PDD) as an exposure measure. The PDD distributions of the exposed controls were used to define the same four exposure levels: less than or equal to the 50We also conducted a goodness-of-fit analysis to compare the RDD and PDD exposure measures and to determine which model performed better . We compth percentile, and 90th percentile RDD values for the non-proxy exposed controls were similar to the values for the exposed controls among all subjects .th and 75th percentiles at shorter latency periods declare they have no competing interests.VV created the dose model, conducted the statistical analyses, and drafted the manuscript. AA provided the data and assisted in epidemiologic analysis and editing. DO participated in the design of the study and the editing of the manuscript. All authors read and approved the final manuscript.This document describes the dose model in more detail.Click here for fileThis document provides a table of adjusted odds ratios for breast cancer by tetrachloroethylene exposure levels in RDD and PDD analyses.Click here for fileThis document provides a table of deviance measures for logistic regression models by tetrachloroethylene exposure levels in RDD and PDD analyses.Click here for file"} +{"text": "However, these results are limited by the inability to account for several potential confounders. This study demonstrates that spatially distributed covariates may play an important role in individual exposure patterns. Spatial information may enable researchers to detect a potential exposure pattern that may not be revealed with only nonspatial variables.In this study we evaluated residential location as a potential determinant for exposure to organochlorine compounds. We investigated the geographic distribution characteristics of organochlorine levels in approximately 1,374 blood samples collected in 1974 from residents of a community with a potential organochlorine source. Street addresses of Washington County, Maryland, residents were obtained and geocoded in a geographic information system. We used multivariate linear regression models to characterize the blood organochlorine levels of these residents that had been analyzed as part of previous studies using both environmental- and individual-level covariates. This was done to evaluate if the geographic distribution of blood levels in participants was related to the environmental source in the community. Model inference was based on generalized least squares to account for residual spatial variation. A significant inverse relationship was found between blood dieldrin levels and residential distance from the potential source. 
For every mile of distance from the source, blood dieldrin levels decreased 1.6 ng/g in study participants ( Spatial information has long been used to study the environmental contamination patterns of persistent organochlorine pollutants. These environmental data are often used as surrogates for exposure experienced by the studied community. However, organochlorine levels may also be measured in serum, providing a more accurate account of exposure. Because we can also link spatial information, such as location of residence, to blood donors, spatially evaluating biomarkers of exposure is a logical extension to investigating spatial patterns in environmental media.p-chlorophenyl)ethane (DDT) and dieldrin. Site investigations since the 1970s have indicated the presence and migration of organochlorine pesticides, such as DDT and dieldrin, as well as other toxicants to off-site areas, and the U.S. Environmental Protection Agency (EPA) placed the site on the National Priority List for cleanup in 1997 as a Superfund site [Maryland Department of the Environment In the early 1930s, a large chemical company built a 19-acre facility in the city of Hagerstown in Washington County, Maryland, for the production of fertilizers and formulation of pesticides, including 1,1,1-trichloro-2,2-bis with a spatial correlation structure so as to provide proper estimation of effect standard errors and corresponding tests of significance.n = 1,391) was analyzed for organochlorine compounds to examine the association between concentrations of these compounds and subsequent cancer (p-chlorophenyl)ethylene (DDE), and polychlorinated biphenyls (PCBs). About half of these samples were also assayed for additional organochlorines such as dieldrin. Details concerning the blood collection, storage, and analytical methods have been published elsewhere signed written consent forms to donate blood for research purposes as part of the Campaign Against Cancer and Stroke (CLUE I) in the fall of 1974 . A subset cancer . All samlsewhere . This stWe used 1,391 blood samples assayed for organochlorine concentrations. Blood-sample organochlorine concentrations from four subjects whose blood was assayed twice were averaged for analysis purposes. Thirteen subjects were found to reside outside of Washington County at the time of blood draw and were removed from the sample pool. In addition, one sample had reported DDE values > 2.25 times higher than the next highest reported value, and was therefore considered a reporting error and removed from analysis. Ultimately, a total of 1,374 samples were considered valid for this study.Street addresses and ZIP codes of the study participants were collected as part of the CLUE campaign. ArcGIS 8.2 softwareIn addition to geocoded address information, we obtained information on participant demographics, such as age, race, sex, education level, marital status, and district average socioeconomic status; variables that have been shown to be predictive of blood organochlorine levels, such as smoking status (current smoker at the time of blood draw); and drinking-water source . These dBecause DDT breaks down to DDE in the blood, DDE levels were chosen to represent DDT exposure . Total PBlood samples were nonfasting. 
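Distance from each geocoded residence to the facility is one of the covariates used below. A small sketch of that calculation, assuming planar (projected) coordinates in metres and straight-line distance; the coordinate values are placeholders, and the original workflow used ArcGIS rather than this simplified computation.

```python
import numpy as np

# Hypothetical geocoded coordinates in a projected coordinate system (metres).
site_xy = np.array([260_000.0, 180_000.0])            # facility location
homes_xy = np.array([[258_500.0, 181_200.0],
                     [265_000.0, 175_000.0]])          # participant residences

metres_per_mile = 1609.344
distance_miles = np.linalg.norm(homes_xy - site_xy, axis=1) / metres_per_mile
print(distance_miles)   # candidate "distance to the Superfund site" covariate
```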
All compounds were analyzed both unadjusted for lipid content and lipid-adjusted using the method of Levels of DDE, PCBs, and dieldrin in participants\u2019 blood were mapped using their geocoded coordinates to study the spatial distribution of the levels of these organochlorines and their possible relationship to the Superfund site. Spatial structure in the levels of organochlorines was further explored using estimated semivariograms . SemivarMultivariate linear regression was used to develop models that best describe the blood levels of each organochlorine, both lipid adjusted and unadjusted. These models are of the forms denotes spatial coordinates, Y(s) represents blood organochlorine levels of participants residing at location s, X1(s) . . . Xn(s) are covariates (including possible interactions) indexed by location s, \u03b21 . . . \u03b2n values are their associated effects, and \u03b20 is the baseline intercept. The residual error term \u025b(s) was assumed to be normally distributed with a zero mean and constant variance. To further account for possible residual spatial variation, residuals were allowed to be spatially dependent by parameterizing their correlation as a decreasing function of the distance between their locations. In the geostatistical literature, model 1, with these specifications, is known as a universal kriging model commonly used for spatial prediction at unobserved or unmeasured locations (where ocations .R2) was used to rank model performance. The top-performing portion of models was then investigated further for significant interactions among the included covariates. The final models were chosen based on model parsimony and scientifically meaningful interpretations. All exposure determinants, geographic or not, were considered on an equal setting before developing the regression models.We began to select models for blood levels of each organochlorine by running all possible models derived from each combination of covariates considered as regression main effects as well as investigating univariate relationships between each covariate and the outcome variable. All covariates were checked for colinearity, and those found to be correlated with one another were evaluated separately in the models to determine which were the best predictors. The fraction of variance explained by the model adjusted for the number of explanatory variables , then OLS estimates and corresponding tests of significance can lead to invalid results . We usedThe final models were adjusted for possible residual spatial variation . Semivar0, . . . \u03b2n, with the exponential spatial correlation parameters using maximum likelihood, yielding GLS estimates for covariate effects (g(Y) = (Y\u03b3 \u2212 1)/\u03b3, with Y representing an organochlorine compound and \u03b3 a parameter of the likelihood to be estimated .Demographic information regarding the study population is given in Approximately 96% of the addresses were geocoded successfully. Aside from clustering of residences in accordance with population density, spatial patterns were not apparent. The 50 addresses that were not geocoded, and hence removed from the analysis, consisted mainly of rural routes and post office boxes that the base map was unable to locate . They shFrom the exhaustive search of all covariates, we chose plausible regression models for each organochlorine based on their ability to predict model variability. 
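A compact sketch of the modelling step just described: a Box-Cox power transform of the outcome, g(Y) = (Y^γ − 1)/γ, followed by a GLS fit in which residual correlation decays exponentially with the distance between residences. The data are simulated, the correlation range is fixed here rather than estimated by maximum likelihood as in the paper, and all variable names are assumptions.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical data: residence coordinates (km), distance to the site, age, and
# a positive, right-skewed blood organochlorine level for n subjects.
n = 200
coords = rng.uniform(0.0, 20.0, size=(n, 2))
dist_to_site = np.hypot(coords[:, 0] - 5.0, coords[:, 1] - 5.0)
age = rng.normal(55.0, 10.0, n)
level = np.exp(1.5 - 0.05 * dist_to_site + 0.01 * age + rng.normal(0.0, 0.4, n))

# Box-Cox transform of the outcome, with the exponent estimated by maximum likelihood
y, gamma_hat = stats.boxcox(level)

X = sm.add_constant(np.column_stack([dist_to_site, age]))

# Exponentially decaying residual correlation; the range parameter is fixed here
# for illustration, whereas the paper estimated it by maximum likelihood.
phi = 3.0
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
sigma = np.exp(-d / phi)

gls_fit = sm.GLS(y, X, sigma=sigma).fit()
print(gamma_hat, gls_fit.params, gls_fit.bse)   # GLS effect estimates and standard errors
```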
Spatial dependence was found in the residuals of all organochlorines in this step, as diagnosed by their estimated residual semivariograms. Parameter estimates and tests of significance were adjusted for this residual spatial dependence using the GLS-based approach outlined in \u201cMaterials and Methods.\u201dAge, sex, smoking status, education, drinking water source, and distance to the Superfund site improved the overall fit of the model of blood DDE levels. Women, non-smokers, and city water drinkers had statistically significantly less DDE in their blood than did men, smokers, and those who drink spring or well water, respectively, when all other covariates were controlled. DDE levels also increased significantly with age. No statistically significant association was found between the level of DDE in the blood and distance of the residence from the Superfund site.After adjusting for age, sex, smoking status, education, and drinking water source, a statistically significant negative association was found between dieldrin levels in blood and the residential distance from the Superfund site. The only significant predictors of blood dieldrin levels were smoking status and drinking spring water versus city water. Furthermore, smokers tended to have significantly less dieldrin in their blood than did nonsmokers. Nonetheless, the results of this dieldrin model suggest that those who lived closer to the site had higher levels of dieldrin in their blood than did those who lived farther away. If the trend were assumed to be linear, there would be a 1.6 ng/g decrease in blood dieldrin levels for every mile a residence was located away from the site. Follow-up analysis using half-mile increments for distance suggests that linearity in the effect of distance to the site is supported more at distances near the site. However, the linear relationship appears to be weak because it held true only within the first half-mile increment. When the distance variable was broken into mile increments, a linear relationship was not seen.p < 0.1). Blood levels of total PCBs in participants living within 1.5 miles of the center of Hagerstown were lower than in those living outside of Hagerstown, holding age, sex, education, smoking status, and drinking water source constant. The association was not significant for lipid-unadjusted blood total PCB levels. In addition, while adjusting for other explanatory variables, men, smokers, and well-water drinkers had marginally higher blood PCB levels than did women, nonsmokers, and those who drink city water, respectively. However, only the association with sex was statistically significant. Finally, a positive association with age and a negative association with years of education and blood PCBs were found, although neither of these relationships is statistically significant.No relationships between distance to the Superfund site and blood levels of total PCBs were found . The spap-values ~ 0.05) were sensitive to correcting for spatially dependent residuals. For example, a statistically significant urban/rural residence relationship with lipid-adjusted total PCBs was found in OLS regression but became statistically insignificant after correcting for spatial dependence in the residuals.After correcting for spatially dependent residuals, most model parameter estimates were not changed significantly. 
However, those covariates that bordered on statistical significance (i.e., In this study we investigated the importance of evaluating spatial covariates and taking into account residual spatial dependence in regression models attempting to explain levels of contaminants in humans. Spatial information is more commonly used in evaluating environmental contamination but is often overlooked in studies modeling the same contaminants in humans, despite the fact that biomarkers are indicators of exposure. Results of this study indicate that models for blood organochlorine levels can benefit by including spatial information.Results suggest that residential location may be a potential exposure determinant of organochlorine levels in human blood as biomarkers of exposure to persistent organochlorine compounds in Washington County, Maryland. A significant association is present between blood dieldrin levels and residential distance from the Superfund site. However, an association between residential location and the Superfund site in the county was not found with blood DDE levels. In fact, DDE levels in blood increased with distance from the site instead of decreasing, as anticipated. One possible reason for this pattern may be that DDE is a widespread compound that can be found in the blood of > 90% of the U.S. population, whereas dieldrin was not as commonly found in the environment and in human blood . In addiFurther research is needed to determine the validity of the association between blood dieldrin levels and the Superfund site. Not only is the statistical significance of this association marginal, but also the model is based on a sample less than half the size of that for DDE. Furthermore, the model found smoking to be negatively associated with blood dieldrin levels. No other studies in the literature have suggested such an association with smoking, and therefore more research into this finding is warranted. Overall, the results are inconclusive as to whether there is a direct relationship between residential distance to the Superfund site and levels of organochlorines in the blood of the participants.For the most part, the covariates found to be associated with blood organochlorines in this study are consistent with the literature. For instance, other studies have found that blood organochlorine concentrations were positively associated with age or current smoking status . Glynn eThe literature regarding certain covariates evaluated in this study is somewhat inconsistent. For instance, we found that men have higher levels of blood organochlorines than women do, perhaps because lactation may lower the organochlorine body burden in women. Some studies have reported similar findings , whereasAn additional limitation is that information on possible confounders or effect modifiers is not complete. For example, the CLUE questionnaire did not obtain height and weight measurements, and therefore BMI could not be considered even though it is a predictor of blood organochlorine levels . Other vHigh residual error and low explained levels of variation in regression models are common when dealing with human populations because of human variability, and they indicate that there is still unexplained uncertainty in these models. Results of this study demonstrate that spatial dependence in these residuals accounts for some of this error. 
However, residual spatial variation was recognized in all regression models, suggesting that further investigation of spatial information not considered in this study may improve these models. It is therefore important to collect information not only on potential individual-level risk factors but also on all spatial risk factors when designing future studies. Additional potential risk factors that may have been helpful in this study would have included: BMI, occupation, household and occupational exposure to organochlorines, consumption of local and fatty fish, consumption of homegrown vegetables, recreational swimming in local surface waters, land use, and drinking water well location and/or source aquifer.Besides accounting for all potential risk factors, future research in this area would benefit from the addition of environmental exposure models. For example, air dispersion or groundwater modeling results could be coupled with biomarkers in assessing the impact of residing near a potential source. These models would take into account wind and groundwater patterns that have the potential to greatly affect contamination at a specific location. Not enough information on the Superfund site studied here was available for such models to be incorporated into our results. This limitation may greatly affect the results of this study because much of the contamination may have been via groundwater and surface water, thereby obscuring the relationship between the site and residence and introducing exposure measurement error.The study described in this article relies on two assumptions related to participant address information. First, it assumes that participants\u2019 addresses at the time of the blood draw represent their residential location during the time they were most exposed to organochlorines. If there were changes of addresses before blood sampling, and if this exposure measurement error was random, the results may be biased toward the null. It is also possible that they may have had more exposure at their place of employment or recreation than at their residence. Furthermore, we assumed that the locations of the residences were geocoded accurately. However, this assumption is not always valid because there exists positional inaccuracy associated with geocoding using a geographic information system . AlthougIn summary, > 1,200 Superfund sites across the country are contaminated with substances that adversely affect human health , and the"} +{"text": "We previously reported an association between infant wheezing and residence < 100 m from stop-and-go bus and truck traffic. The use of a proximity model, however, may lead to exposure misclassification.Results obtained from a land use regression (LUR) model of exposure to truck and bus traffic are compared with those obtained with a proximity model. The estimates derived from the LUR model were then related to infant wheezing.We derived a marker of diesel combustion\u2014elemental carbon attributable to traffic sources (ECAT)\u2014from ambient monitoring results of particulate matter with aerodynamic diameter < 2.5 \u03bcm. We developed a multiple regression model with ECAT as the outcome variable. Variables included in the model were locations of major roads, bus routes, truck traffic count, and elevation. Model parameter estimates were applied to estimate individual ECAT levels at infants\u2019 homes.3. A LUR model of exposure with a coefficient of determination (R2) of 0.75 was applied to infants\u2019 homes. 
The mean (\u00b1 SD) ambient exposure of ECAT for infants previously categorized as unexposed, exposed to stop-and-go traffic, or exposed to moving traffic was 0.32 \u00b1 0.06, 0.42 \u00b1 0.14, and 0.49 \u00b1 0.14 \u03bcg/m3, respectively. Levels of ECAT from 0.30 to 0.90 \u03bcg/m3 were significantly associated with infant wheezing.The levels of estimated ECAT at the monitoring stations ranged from 0.20 to 1.02 \u03bcg/mThe LUR model resulted in a range of ECAT individually derived for all infants\u2019 homes that may reduce the exposure misclassification that can arise from a proximity model. An assu2.5) . The advStudies have shown a correlation between air pollution and allergic and respiratory diseases . In contWe have reported that infants in the Cincinnati Childhood Allergy and Air Pollution Study (CCAAPS) whose residences were within 100 m of stop-and-go bus and truck traffic were at an increased risk for wheezing before 1 year of age compared with infants unexposed to truck and bus traffic . This asCCAAPS is an ongoing prospective birth cohort study. The study\u2019s purpose is to determine whether infants who are exposed to DEP are at an increased risk for developing atopy and allergic respiratory diseases and to determine whether this effect is modified in a genetically at-risk population. The study methods, population, and sampling methodology are described in detail elsewhere . BrieflyA proximity model of exposure was previously developed and applied to the CCAAPS cohort . BrieflyThe CCAAPS ambient air sampling network contains 24 sites selected2.5 samples were collected on 37-mm Teflon membrane filters and 37-mm quartz filters with Harvard-type Impactors . Standardized operating procedures for the filter media preparation, gravimetric operations, and sampling were followed (2.5 mass concentrations (bs) of the aerosol-loaded Teflon filters was calculated according to International Standard 9835 within a 400-m buffer, length of bus routes within a 100-m buffer, total number of trucks within a 400-m buffer per day, distance to the nearest major road, distance to the nearest bus route, and land use. Elevation data were obtained using a 7.5-min digital elevation model (DEM) producing 30-by 30-m cells (-m cells . Traffic-m cells and summ-m cells . Land usR2) and calculation of Akaike\u2019s information criterion . We examined the association between ECAT exposure and wheezing without a cold using conditional logistic regression. Adjustment is made for maternal smoking , child care attendance, sex, race (white/minority), breast-feeding , pet ownership, and report of visible mold in the home by a study inspector.We compared the findings from the proximity model categories of exposure and the estimates of ECAT levels derived from the LUR model. We computed histograms of ECAT levels for infants previously designated as unexposed, exposed to stop-and-go traffic, or exposed to moving traffic using S-Plus . 
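A compact sketch of the land-use regression step described above: regress ECAT measured at the monitoring sites on the GIS-derived predictors, then apply the fitted coefficients to the same predictors computed around each infant's residence. The simulated values, coefficient magnitudes, and predictor scaling are placeholders, not the CCAAPS estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Placeholder data for the 24 monitoring sites: measured ECAT (ug/m3) and the
# predictors named above (elevation, truck counts within 400 m, bus-route length
# within 100 m).
n_sites = 24
elevation = rng.normal(250.0, 40.0, n_sites)
trucks_400m = rng.poisson(3000, n_sites).astype(float)
bus_100m = rng.uniform(0.0, 500.0, n_sites)
ecat = (0.9 - 0.001 * elevation + 5e-5 * trucks_400m + 2e-4 * bus_100m
        + rng.normal(0.0, 0.05, n_sites))

X = sm.add_constant(np.column_stack([elevation, trucks_400m, bus_100m]))
lur = sm.OLS(ecat, X).fit()
print(lur.rsquared)          # analogous to the reported R^2 of 0.75

# The same predictors computed within buffers around each infant's residence are
# then combined with the fitted coefficients to give an individual ECAT estimate.
homes = sm.add_constant(np.column_stack([
    rng.normal(250.0, 40.0, 5),
    rng.poisson(3000, 5).astype(float),
    rng.uniform(0.0, 500.0, 5),
]))
home_ecat = lur.predict(homes)
print(home_ecat)
```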
We compared the mean and median levels of ECAT derived for each previous exposure category using n = 50) were reported to have recrurrent wheezing without a cold (n = 347), exposed to stop-and-go truck and bus traffic (n = 99), or exposed to moving truck and bus traffic (n = 176).In total, 622 infants fulfilled the eligibility requirements at average age 7.5 months (\u00b1 2.4 months), and 8.0% were selected for inclusion in the final multiple linear regression model and moving traffic (0.44 \u03bcg/m3) were each significantly higher (p < 0.05) than the median value of ECAT for infants categorized as unexposed (0.30 \u03bcg/m3).After the development of the LUR model, 100- and 400-m buffers were created surrounding each infant\u2019s residence. We calculated the elevation, average daily truck count on major roads within 400 m, and the length of bus routes within 100 m for each infant\u2019s residence. An estimate of the level of ECAT at each infant\u2019s residence was derived using the parameter estimates obtained from the LUR model. ECAT was derived only for the infant\u2019s residence because only 14.4% of infants were reported to attend child care. Summary statistics for the sampled ECAT levels and estimated ECAT levels are presented in ectively . Both th3 and at 0.9 \u03bcg/m3 it increases to > 4-fold .AORs for the association between the LUR-derived ECAT and wheezing without a cold are presented in 3 and was R2) and the geographic variables used was lower than the mean values of ECAT among infants previously designated as exposed to stop-and-go traffic (0.42 \u03bcg/m3) and moving traffic (0.49 \u03bcg/m3). This difference supports confidence in the model. The similar ranges of ECAT values among infants previously designated as exposed to stop-and-go and moving traffic support the hypothesis that actual exposure to DEP depends not only on the amount of traffic but also on the elevation of the residence in relation to traffic, the intensity or number of trucks, and the proximity to particular types of traffic (i.e. bus routes), which also represents considerable stop-and-go movement.The mean value of ECAT among infants previously designated as unexposed . This rereported may be aThe issue remains, however, that moving traffic was not associated with wheezing, although the mean value of ECAT was higher in the moving category than in the stop-and-go category. This discrepancy may be explained by the distribution of infant residences within LUR buffers, LUR model parameters, or the housing stock in the moving and stop-and-go categories. Although 100- and 400-m buffers were used to determine exposure in the proximity models and served as buffers to represent exposures in the LUR model, the median distances to the nearest stop-and-go and moving traffic are 43 and 252 m, respectively. In addition, elevation, the most significant parameter in the LUR model, varies according to the previous exposure categories. The mean (\u00b1 SD) elevation for infants previously categorized as unexposed, exposed to stop-and-go, and exposed to moving traffic are 812.8 \u00b1 84.6, 767.9 \u00b1 123.2, and 741.6 \u00b1 123.8 m, respectively. The housing stock also may vary significantly among infants categorized as exposed to stop-and-go and moving traffic. We previously reported that infants categorized as exposed to stop-and-go traffic are more likely to have parents with an annual household income < $40,000 than infants exposed to moving traffic . 
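The comparison of LUR-derived ECAT across the earlier proximity-model categories, reported above, can be sketched as follows with data simulated around the reported category means and standard deviations; the choice of a Mann-Whitney test here is an assumption, since the test actually used is not preserved in the text.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Simulated per-infant table built around the reported summaries
# (0.32 +/- 0.06, 0.42 +/- 0.14, 0.49 +/- 0.14 ug/m3); group sizes follow the text.
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "category": np.repeat(["unexposed", "stop_go", "moving"], [347, 99, 176]),
    "ecat": np.concatenate([
        rng.normal(0.32, 0.06, 347),
        rng.normal(0.42, 0.14, 99),
        rng.normal(0.49, 0.14, 176),
    ]),
})

print(df.groupby("category")["ecat"].agg(["mean", "median"]))

# One possible nonparametric comparison of two categories (assumed, not the
# test named in the original analysis)
stat, p = stats.mannwhitneyu(df.loc[df.category == "unexposed", "ecat"],
                             df.loc[df.category == "stop_go", "ecat"])
print(p)
```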
The hou2.5 of diesel and gasoline combustion and better characterize the contribution of diesel exhaust particles with the average daily car count. In addition, the residential address used to assess traffic exposure in both models was derived from the address reported by the parent at the time of the initial questionnaire administration. It is possible, therefore, that a small number of infants may have changed residential locations between the time of parental interview (infant age 7.5 \u00b1 2.4 months) and the infant\u2019s first examination (age 13.6 \u00b1 2.6 months). As the cohort ages, residential locations will be assessed and exposure estimates derived based on changes in location.A possible limitation of this study, however, is the specificity of ECAT as a marker of truck traffic and/or diesel emissions. Our marker, ECAT, is more robust than ambient EC as a marker of DEP because it is the calculated EC fraction from all traffic sources. This portion of EC averaged over various seasons at each sampling site has resulted in levels of ECAT lower than those reported for ambient EC sampling alone in an urban area . Nevertharticles . FurtherIn conclusion, we have applied a land-use regression model to estimate infants\u2019 exposure to ECAT and have compared the resulting estimates to exposures determined by a proximity model. Infants previously categorized as unexposed had the lowest mean value of ECAT when compared with infants previously categorized as exposed to either moving or stop-and-go traffic. The range of ECAT estimates within formerly designated discrete exposure categories, however, demonstrated one limitation of a proximity model. We have also demonstrated an association between ECAT exposure and wheezing. Wheezing during infancy was based on parental report, however, and may not be predictive of future development of asthma. The CCAAPS cohort will be followed and evaluated throughout childhood with more objective measures of airway inflammation and/or obstruction. The finding of a potential association between ECAT and respiratory health effects may lead to public health interventions including vehicle emission standards and determining appropriate distances from major roadways for building homes and schools. In summary, we have concluded that an infant\u2019s geographic location within an urban area may highly influence the level of air pollutant exposure and resulting health effects and may be particularly potent in susceptible infant populations."} +{"text": "Four different tracers with differing physicochemical characteristics have been evaluated to assess their suitability as models for drug delivery.Scintigraphic studies have been performed to assess the release, both 99mTc-DTPA) and (99mTc-MDP), and two lipophilic tracers, (99mTc-ECD) and (99mTc-MIBI), were used as drug models.In-vitro disintegration and dissolution studies have been performed at pH 1, 4 and 7. In-vivo studies have been performed by scintigraphic imaging in healthy volunteers. Two hydrophilic tracers, (In vitro dissolution velocity constants indicated a probable retention of the radiotracer in the formulation. In vivo disintegration velocity constants showed important variability for each radiopharmaceutical. Pearson statistical test showed no correlation between in vitro drug release, and in vivo behaviour, for 99mTc-DTPA, 99mTc-ECD and 99mTc-MIBI. 
High correlation coefficients were found for 99mTc-MDP not only for in vitro dissolution and disintegration studies but also for in vivo scintigraphic studies.Dissolution and disintegration profiles, differed depending on the drug model chosen. Scintigraphic studies have made a significant contribution to the development of drug delivery systems. It is essential, however, to choose the appropriate radiotracers as models of drug behaviour. This study has demonstrated significant differences in release patterns, depending on the model chosen. It is likely that each formulation would require the development of a specific model, rather than being able to use a generic drug model on the basis of its physicochemical characteristics. In vitro studies can be very expensive but costs are even higher when in vivo stages are reached. Methodology that can generate relevant information but shorten the preformulation phases means important savings in economic, human and time terms = kdt / 2.303log [1- QWheret the amount of activity of the tracer disintegrated at time t present in the region of interest.Q\u221e the maximum amount of activity measured in the region of interest.Qt the amount of activity of the tracer disintegrated at time t present in the region of interest and Q\u221e the total amount of activity of the region at the end of the study [Disintegration velocity constants in the stomach and the constant of appearance in small intestine were determined in a similar way, considering Qhe study .Radiotracers were incorporated during the wet granulation process of tablet preparation and appropriate controls of quality were applied as previously described . All thein vivo scintigraphic studies, specifically in stomach and small intestine for the different radiopharmaceuticals evaluated as tracers.Disintegration velocity constants in the gastrointestinal tract were determined by Dissolution velocity constants for tablets containing this lipophilic tracer showed no significant variation across the whole pH range studied and the values were very similar to disintegration velocity constants determined under the same conditions Table .r is always between -1 and 1 indicating that the points are near a negative or a positive slope respectively. The closer the r value is to -1 or 1 the higher the correlation of the series is. [The Pearson statistical test was used to quantify correlation within both series of data. Pearson's correlation coefficient ries is. r values) were compared using this test. Correlation coefficients are shown in Table Dissolution and disintegration velocity constants and it was low at pH 4 .When Pearson correlation was quantified between es Table as indicin vitro \u2013 in vivo disintegration velocity constants , it is unlikely that they are responsible for the observed differences in the disintegration constants. It is possible that the tracers are bound to different components within the tablet, which disperse at different rates, which would be a likely explanation. Although the dissolution profiles might be expected to differ depending on the model drug used, the same would not be expected of the disintegration profile. This suggests that the measured disintegration profile can depend on the model chosen.As scintigraphic studies give information of a physical process, tablets containing the same components, but different tracers, might be expected to have a similar behavior in all cases. 
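The disintegration and dissolution velocity constants follow from the usual first-order relation, log10[1 − Qt/Q∞] = −kd·t/2.303 (reconstructed here from the garbled expression in the methods, so the sign convention is assumed), which means kd can be recovered from a straight-line fit on the log-transformed activity fractions. A minimal sketch with made-up time-activity data:

```python
import numpy as np

# Hypothetical activity-time data from a region of interest (e.g. the stomach):
# fraction of tracer released/disintegrated at each time point (Qt / Qinf).
t_min = np.array([5, 10, 15, 20, 30, 45, 60], dtype=float)
q_frac = np.array([0.12, 0.22, 0.33, 0.42, 0.57, 0.72, 0.82])

# First-order kinetics: log10(1 - Qt/Qinf) = -kd * t / 2.303, so kd is obtained
# from the slope of a straight-line fit on the log-transformed data.
y = np.log10(1.0 - q_frac)
slope, intercept = np.polyfit(t_min, y, 1)
kd = -slope * 2.303
print(kd)   # disintegration (or dissolution) velocity constant, per minute
```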
As tracers are present in low concentrations declare that they have no competing interests.Mrs. Mariella Ter\u00e1n carried out all the experimental work and together with Dr. Eduardo Savio made substantial contributions to conception, design, analysis and data interpretation. They also gave their final approval of the version to be published. Mrs. Andrea Paolino contributed to data acquisition, analysis and interpretation. Dr. Malcolm Frier was involved in revising it critically for important intellectual content.The pre-publication history for this paper can be accessed here:"} +{"text": "Regression calibration as a method for handling measurement error is becoming increasingly well-known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis.We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC).In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution, thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is however vastly superior to the naive method when applying the medians of each category in the analysis.Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a percentile scale. Relating back to the original scale of the exposure solves the problem. The conclusion regards all regression models. Similar transforms apply to other regression models.If the response variable RC corrected estimates, these will be underestimated by ordinary methods as they do not take into account the variance in the estimation of X. Since the computation of explicit formulas for the standard error is quite tedious [With respect to standard errors for the tedious , standar tedious ,20.In a situation without additional covariates, Equation (1) simplifies considerably. We can write is a modified version of the reliability ratio, usually defined as . In the following we look first to the situation where all individuals are measured the same number of times, in which case we obtain analytical results for all models A-C. 
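For the equal-replication case just described, the regression-calibrated predictor takes the familiar shrinkage form X̂i = μ̂ + λ̂(W̄i − μ̂), with λ̂ estimated from the between- and within-person variance components. The following simulation sketch (data and parameter values are assumptions) also reproduces the ordering of variances discussed in the next passage:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated replicate measurements: true exposure X plus independent error U,
# with k replicates per subject (the equal-replication case).
n, k = 2000, 3
sigma_x, sigma_u = 1.0, 1.0
x = rng.normal(0.0, sigma_x, n)
w = x[:, None] + rng.normal(0.0, sigma_u, (n, k))

w_bar = w.mean(axis=1)
mu_hat = w_bar.mean()
sigma_u2_hat = w.var(axis=1, ddof=1).mean()            # within-person error variance
sigma_x2_hat = w_bar.var(ddof=1) - sigma_u2_hat / k    # between-person (true) variance

# Shrinkage factor: reliability ratio of a k-replicate mean
lam = sigma_x2_hat / (sigma_x2_hat + sigma_u2_hat / k)

# Calibrated predictor: individual means pulled toward the sample mean
x_hat = mu_hat + lam * (w_bar - mu_hat)

print(w_bar.var(ddof=1), x.var(ddof=1), x_hat.var(ddof=1))
# naive variance > true variance > calibrated variance
```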
When we allow the number of replicates to vary, we must rely on semi-analytical methods to make inferences.where the factor ki = k), we find that the RC predictor given in Equation (6) is simply a linear transformation of the naive predictor . This transformation represents in essence a weighting between the estimated sample mean and the individual means for each data point. Given a certain error ; when k is large and thus relatively close to 1, relatively large confidence is put on the individual means and little correction is made. On the other hand, when k is small, all data points are adjusted closer to the sample mean. In both cases the adjustment is the same for all subjects, resulting in a distribution that is squeezed towards the estimated sample mean, as compared to the distribution of measured values.When all individuals are measured an equal number of times ( is given byThe variance of Var (X) whenever \u03c3U > 0, that is, when there is measurement error. Notice also that when k \u2192 \u221e, Var \u2192 Var (X); that is, if we were to have infinitely many replications, we would be able to estimate Var (X) without bias, using the observed values.which is greater than is given byFurthermore, the variance of Var = Var (\u03bb') = \u03bb'2Var = \u03bb'Var (X). underestimates the variance of the exposure, in contrast to the variance of , which overestimates it.Thus, generally, the variance of \u03c3XY/), and even though the covariance between the corrected exposure and the response underestimates \u03c3XY due to measurement error, this is counteracted by the decreased variance of , resulting in unbiased effect estimates. Using the observed exposure, we get a so-called attenuated effect estimate, which is underestimating the true effect by a factor \u03bb' [Relating this adjusted continuous exposure to a response in a regression analysis results in larger effect estimates as compared to the ones obtained using the measured exposure. For example, in linear regression the effect is decided by the ratio of the covariance of exposure and response to the variance of the response and U ~ N , and we have k replicates, then ~ N and ~ N . Hence, for any percentile point q we have that and . Hence, for variables consisting of median points in quintile groups we have that Var (med) = Var (Xmed)/\u03bb' and Var (med) = \u03bb' Var (Xmed).We illustrate this using linear regression. If Y , Cov = Cov . Thus, the covariance between the response and the variable given by medians in quintile groups of the naive exposure isRegarding the covariances, we have that given that the error in the exposure is independent of the response Cov = Cov .med and Y equals the correlation between med and Y, we find that the covariance between med and Y isFurthermore, using that the correlation between Cov = \u03bb' Cov ,Cov /Var (med) = Cov /Var (Xmed), the regression calibrated effect estimate is asymptotically correct. The naive estimates are on the other hand attenuated by the same factor \u039b as when analyzing the exposure on continuous scale.Hence, since in this case confusion effect, in that some data points are adjusted to a larger extent than others. However, the main effect of the transformation is the mentioned adjustment towards the sample mean. 
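The shrinkage described above can be written out in a few lines. The sketch below assumes the usual form of the regression calibration predictor based on replicate means, with reliability ratio lambda' = sigma_X^2 / (sigma_X^2 + sigma_U^2 / k); the variance components are estimated from the replicates, and all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5000, 2                      # subjects, replicates per subject
sigma_x, sigma_u = 1.0, 1.0         # true exposure SD and measurement-error SD

X = rng.normal(0.0, sigma_x, n)                       # true exposure (unobserved)
W = X[:, None] + rng.normal(0.0, sigma_u, (n, k))     # replicate measurements
W_bar = W.mean(axis=1)                                # naive predictor: individual means

# Estimate variance components from the replicates (one-way decomposition)
s2_within = W.var(axis=1, ddof=1).mean()              # estimates sigma_u^2
s2_between = W_bar.var(ddof=1)                        # estimates sigma_x^2 + sigma_u^2 / k
s2_x = s2_between - s2_within / k
lam = s2_x / (s2_x + s2_within / k)                   # reliability ratio lambda'

# Regression calibration predictor: shrink individual means towards the sample mean
X_hat = W_bar.mean() + lam * (W_bar - W_bar.mean())

print(f"lambda'          = {lam:.3f}")
print(f"Var(X)   true    = {X.var(ddof=1):.3f}")
print(f"Var(W_bar) naive = {W_bar.var(ddof=1):.3f}   (inflated by sigma_u^2/k)")
print(f"Var(X_hat) RC    = {X_hat.var(ddof=1):.3f}   (deflated by factor lambda')")
```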
At least, we propose that classification of the corrected predictor according to quintiles leads to much the same classification pattern as classification of the naive predictor .When the number of replicates varies between individuals, we have in addition a kind of Xc and c, Xc and c, and c and c, respectively. We used X ~ N and U ~ N , and the number of replications was either 5 or 1. The total number of individuals was n = 100000, divided in various ways between the two replication groups. As can be seen from the table, most of the individuals were classified equally for the naive and the regression calibrated predictors. The exact figures vary depending on the replication pattern and which group the individuals belong to, the replicated or the nonreplicated ones, and finally which of these groups is larger and thus dominant in deciding the spread in the distribution of .To uphold the previous proposal, Table X), are very similar for the naive and the corrected predictors. Hence, categorizing using the corrected exposure still retains misclassification, and the magnitude of this is very similar to the misclassification obtained with the naive approach. Hence, the estimates relating to categorical exposure in models A and B, will be very similar for the naive and the RC approach. However, in model C, regression calibration still benefits from the mentioned squeezing of values towards the mean.At the same time, we see that the percentages of cases that are correctly classified . These cases correspond to true mean differences \u03b14 of 1.96 and 0.56 between the extreme quintiles in model A (Equation (3)), naive trends \u03b31 of 0.47 and 0.13 ), and effects \u03c81 of 0.76 and 0.22 using medians in groups as explanatory variables ).We studied cases where the correlation \u03bb: 0.2 (which corresponds to a rather large measurement error), 0.5, and 0.8 (modest measurement error situation). Standard errors for the corrected effect estimates are obtained via resampling pairs bootstrapping with 200 bootstrap samples [Results were produced for three levels of the reliability ratio ki = k = 2. Next, we looked at situations in which a random 20% subset of the individuals are measured 5 times, while the rest only had 1 measurement . When \u03bb = 0.5, the attenuation factor for these models was just above 0.8. Hence, the effect estimates differ considerably from the true effects in many cases. Moreover, a decrease in the reliability ratio is associated with increased bias, as was to be expected.We see that in situations with a constant number of replicates, regression calibration estimates are equal to the ones obtained from the naive approach, unless the original scale of measurement is somehow incorporated. None of the methods performed very poorly as long as the measurement error was not too large, however the effects were attenuated by a factor of almost 0.6 in both models A and B in the most severe measurement error situation studied (\u03bb = 0.2) indicates effects that are about 1/3 of the true effects.Using the median values in model C, we see that the regression calibration approach gives unbiased effect estimates. This is in contrast to the naive approach, which in the most severe cases (RC approach (1.23 vs. 1.20), explaining this apparent inconsistency. 
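A compact simulation of the classification comparison described above is sketched below. For simplicity the variance components are treated as known when forming the corrected predictor, and the replication pattern (a random 20% of subjects measured 5 times, the rest once) mirrors the one used in the table; all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
sigma_x, sigma_u = 1.0, 1.0
k = np.where(rng.random(n) < 0.2, 5, 1)               # 20% with 5 replicates, 80% with 1

X = rng.normal(0.0, sigma_x, n)                        # true exposure
W_bar = X + rng.normal(0.0, sigma_u / np.sqrt(k))      # mean of k replicates

# Subject-specific reliability ratio and RC predictor (shrinkage depends on k)
lam_i = sigma_x**2 / (sigma_x**2 + sigma_u**2 / k)
X_hat = W_bar.mean() + lam_i * (W_bar - W_bar.mean())

def quintile(v):
    """Quintile group (0-4) of each value within its own distribution."""
    cuts = np.quantile(v, [0.2, 0.4, 0.6, 0.8])
    return np.digitize(v, cuts)

qx, qn, qc = quintile(X), quintile(W_bar), quintile(X_hat)
print("naive vs corrected agree:", np.mean(qn == qc))   # close to 1
print("true  vs naive     agree:", np.mean(qx == qn))
print("true  vs corrected agree:", np.mean(qx == qc))
```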
Notice also that the results are generally worse with this replication pattern than when all individuals were measured twice.When the number of replicates varies, we see again that the regression calibration fails to improve significantly the effect estimates relative to the naive approach, except for with model C. In these results we see some small, though not substantial, differences between the two approaches for models A and B, due to the confusion effect mentioned previously. We also see that, in contrast to what could be expected from Table X in the analysis will probably give RC an advantage relative to the naive approach, especially when the correlation is strong.Regression calibration uses the information of covariates in the correction procedure, see Equation (1). Thus, including a variable correlated to Z, measured without error. The effect of Z is set to be equal to the effect of X, and the correlation \u03c1XZ between X and Z is either 0.2 or 0.7. Otherwise the situations are the same as in the previous examples, although we confine to situations with constant number of replicates (k = 2). The results are shown in Tables \u03c1XZ = 0.2) and \u03c1XZ = 0.7).We study the performance of regression calibration in the presence of a standard normal covariate Z, the true effects that we are trying to estimate are somewhat smaller than when X is the only independent variable in the models. Nevertheless, we see that when the correlation between X and Z is small .Due to the introduction of ll Table , the patZ, we see that both methods are quite good, though while the RC approach gives unbiased estimates, the naive approach tends to overestimate as the measurement error increases. This is a well-known effect for covariates positively correlated to error-prone explanatory variables.Regarding the effects estimates for the covariate X and Z is stronger . So, the high correlation leads to more bias in the naive effect estimates, but it also means that the covariate Z contains much information about the true exposure X, enabling the RC approach to counteract parts of the bias.When the correlation between er Table , the difZ, as observed in Table RC estimates are also affected.Furthermore, while for the continuous case the regression calibration approach still manages to produce unbiased estimates, we see that for model C there are some deviations for large measurement errors. We also see that the tendency of the naive approach to overestimate the effects of Wij = estimated folate intake through food (in \u03bcg/MJ) for individual i in FFQ j, and Yi = self-reported depression (yes/no) for individual i, where i = 1, ..., 898, j =1, 2. The prevalence of depression in the sample was 19.7%.To illustrate our results, we use data on non supplemental folate intake, total energy intake and self-reported depression from the Norwegian Women and Cancer (NOWAC) cohort study started in 1991 . The datOR) was estimated as 0.70 (SE = 0.13) for each 10 \u03bcg/MJ increase in folate intake, while the regression calibration approach gave = 0.62 (bootstrapped SE = 0.16). Looking at the effect of going from the first to the last quintile (model A), we found = 0.57, with standard errors 0.15, for both approaches. The simple trend (model B) was estimated to 0.87 (SEs 0.05) for both approaches. 
Applying the median values in model C, the naive effect estimate was = 0.61 (SE = 0.13) for each 10 \u03bcg/MJ increase in folate intake, while the corrected estimate was 0.52 (SE = 0.15).The folate intake, adjusted for total energy intake, was related to self-reported depression using logistic regression modelling. Using the continuous exposure, the naive odds ratio with single measurements of folate intake. Including the total group in the analysis, we got the following results: Using the continuous exposure, the naive odds ratio was 0.84 (SE = 0.03) for each 10 \u03bcg/MJ increase in folate intake, while the regression calibration approach gave = 0.75 (SE = 0.05). Under model A, we found = 0.71 (SE = 0.04) for both approaches, and the simple trend (model B) was estimated to 0.92 with standard error 0.01, again for both approaches. Applying the median values in model C, the naive effect estimate was = 0.78 (SE = 0.03) for each 10 \u03bcg/MJ increase in folate intake, while the corrected estimate was 0.67 (SE = 0.05).The 898 individuals included in the replication study were sampled from a larger group into underweight (< 18.5), normal weight (18.5 to 24.9), overweight (25 to 29.9), and obese (\u2265 30). A small simulation study was conducted to explore whether the current results sustain when such fixed cut-points are applied, and it seems We have focused on a situation with replicates. However, as outlined in the Introduction, other sources of information regarding the measurement error could be either internal or external validation studies or instrumental variables. The approach studied in this paper would still amount to fitting a regression model for the true given the measured exposure, and including the predicted exposure from this model in the main analysis. Furthermore, the percentiles would be predicted by the same model, so naive and corrected categorized exposure are the same in these situations as well.In some cases it might not be appropriate to use the original scale in the analysis, the researcher might specifically wish to relate to the categorical variables. In our view, there are two possible approaches to obtain efficient effect estimates in these cases. Either a) some information is needed about misclassification probabilities or b) a better way is needed to categorize from the original continuous measurements.X) but could if we had validation data. For example, Rosner [et al. [We cannot achieve a) using just replicate measures one can try to estimate the underlying distribution of f X but theInstead of going via the expected values of the continuous exposure, we could find directly the expected categorical exposure. We expect that analysis with expected conditional probabilities (given the observed exposure) of the categories will give better results than the analysis with dummy variables. The latter amounts to adjusting the probability of the most probable category to 1 and all the other probabilities to 0, thereby disregarding the information that lies in the uncertainty of the categorization.Future work should aim to develop suitable and functional correction procedures in analyses where the exposure variable is categorized according to percentiles, and investigations should be carried out in order to decide which method is the best or most suitable for recommendations to include in routine analysis.The author(s) declare that they have no competing interests.ID was responsible for most of the study design, analysis and writing. 
JPB, PL and MT helped with the conceptualization and writing of the article, AH did the data preparation."} +{"text": "A commentary by Authors of these articles include Aaron Blair, the chief of the Occupational Studies Section of the National Cancer Institute (NCI), who stated that epidemiologic evidence shows a strong exposure\u2013response relationship for angiosarcoma of the liver, but not for other types of cancer . In a moOccupational vinyl chloride exposure has not been conclusively causally linked to any adverse health outcome, with the exception of angiosarcoma of the liver.Even more recently, The aggregate data are reassuring in excluding any excess risk of death from lung, laryngeal, soft tissue sarcoma, brain and lymphoid neoplasms, as well as cirrhosis.Recently published updates of cancer incidence in European and American industry-wide cohorts of workers exposed to vinyl chloride provide a firm basis for the conclusion that vinyl chloride exposure is not causally associated with brain cancer and the other tumors mentioned by Given the strength and uniformity of the evidence supporting the U.S. EPA\u2019s position, it is striking that Finally, it is not accurate that industry unduly influenced the review process for vinyl chloride nor that the potency factors published in the IRIS (Integrated Risk Information System) database are insu"} +{"text": "A cross-sectional observational study with repeated observations was conducted on 16 Danish dairy farms to quantify the influence of observer, parity, time (stage in lactation) and farm on variables routinely selected for inclusion in clinical protocols, thereby to enable a more valid comparison of udder health between different herds. During 12 months, participating herds were visited 5 times by project technicians, who examined 20 cows and scored the selected clinical variables. The estimates of effect on variables were derived from a random regression model procedure. Statistical analyses revealed that, although estimates for occurrence of several the variables, e.g. degree of oedema, varied significantly between observers, the effects on many of these estimates were similar in size. Almost all estimates for occurrences of variables were significantly affected either parity and lactation stage, or by both e.g. udder tissue consistency. Some variables, e.g. mange, had high estimates for the farm component, and others e.g. teat skin quality had a high individual component. Several of the variables, e.g. wounds on warts, had a high residual component indicating that a there still was a major part of the variation in data, which was unexplained. It was concluded that most of the variables were relevant for implementation in herd health management, but that adjustments need to be made to improve reliability. Mastitis control is a major part of dairy herd management. Important components hereof are the daily decisions regarding type of treatment, drying off of affected quarters and culling and replacement of cows. Many of these decisions are based on the dialogue and interaction with the local veterinarian. Approximately 40% of Danish dairy farmers have contracted their local veterinarian to visit the farm on a monthly basis . At thesInformation like diagnoses at treatment, somatic cell counts (SCC) and results of bacteriological culturing of milk samples from cows high SCC or clinical mastitis are routinely collected in most herds. 
Due to farm specific factors like differences in farmers' attitudes to disease and recoStudies have been carried out to find additional health measures, that allow the farmers and veterinarians to directly follow the development of udder health in the herd ,7. TheseA Danish pilot study conducted on 4 farms with the aim of developing a clinical protocol for udder examination, indicated a strong relationship between selected clinical udder health measures and milk production values , but sugThe study, set up as a cross-sectional observational study with 5 repeated observations (visits per farm), was executed from January to December 2000Sixteen Danish dairy herds were selected to represent a broad spectrum of herds within a group of 120 herds enrolled in the project 'Kongeaa Projektet' run by the Danish Dairy Board .The key characteristics of the participating herds are presented in table The selected cows were random samples of the lactating cows in the participating herds. In the loose housing systems, the examined cows were positioned at pre-selected places (e.g. second and fifth cow on the left side) in milking parlours. In the tie-stall systems, the examined cows were positioned as every third or fourth cow from a randomly pre-selected starting point in the stable (e.g. fifth cow from the door).The selected cows were examined by means of visual inspection and palpation of the udder immediately after one of the twice-daily routine milkings.All examinations were carried out by project technicians experienced with this type of examination, 2 of whom had previously participated in a similar study. In order to calibrate measurements, 2 joint training sessions were organized for all observers before the commencement of the study period, and clinical data collection forms had illustrations of teat and udder shapes printed on the reverse, together with details describing the individual variable categories.In table Proc Mixed in the SAS Analysis System . The following base-line model was applied:Each udder variable was analysed using ijk = \u03b20 + \u03b30k + \u03bc0jk + \u03b21jkDIMijk + \u03b22OBSjk + \u03b23PARjk + \u03b24DIMijk*OBSjk + \u03b25PARjk*DIMijk + \u03b26OBSjk*PARjk + \u03b27DIMijk*OBSjk*PARjk + \u03b28jkDIMijk2 + \u03b29jkDIMijk3 + \u03b210jkDIMijk4 + \u03b5oijkOutcomeijk is the response (e.g.) of the i-th DIM for the j-th cow in the k-th herd. \u03b20 represents average (expected) response, say clinical score, at time = 0 (fixed effect or the intercept).Where Outcome0k represents the departure of the k-th herd from the overall mean response (\u03b20). That is, the distribution of herd-effects. This (random) variable allows each herd to have a distinct departure from the average response at Time = 0; a so-called herd-effect. It is assumed to be normally distributed with zero mean.\u03b30jk represents the departure of the j-th cow from the mean response (\u03b20) within herd. That is, the distribution of cow-effects. This (random) variable allows each cow to have a distinct departure from the average herd-level response at Time = 0; a so-called cow-effect. It is assumed to be normally distributed with zero mean.\u03bc1jkDIMijk represents average (expected) change in response associated with each unit of change in DIM. This is the regression coefficient or fixed effect of DIM (the average slope).\u03b22OBSjk represents the average (fixed) effect of observer . 
That is, an estimate of the difference between observers at DIM = 0.\u03b23PARjk represents the average (fixed) effect of parity . That is, an estimate of the difference between parities at DIM = 0.\u03b2The various crossed effects represent the average (fixed) effects of the interactions between the fixed effectsoijk represents the residual variance of the individual measurements. That is, an estimate of the random variability associated with the individual measurements, when the fixed effects and random (cow) effects were accounted for. This (random) variable is also assumed to be normal distributed with zero mean.\u03b5In case of binary response variables a logistic regression model was used. In that case the residual term was binomially distributed. This model operates with the same baseline as the random regression model.The general modelling strategy was to specify the most complicated model initially and subsequently eliminate statistically non-significant terms. Statistical significance was judged by calculating the difference in -2LogLikelihod values of models using the maximum likelihood function (ML) with and without the factor. Under the null-hypothesis of no effect of the eliminated term this difference follows a chi-square distribution with degrees of freedom equal to the difference in number of parameters in the contrasted models. This test is a so-called likelihood ratio test.Those variables, which had very few observations in the categories or for which the distribution of the residuals were not normally distributed, were re-grouped to become binary variables and analysed with the Glimmix macro. The transformed variables were: soiling teats (none vs. slight/more), claw length and warts . Additionally udder and teat shape recordings were transformed into dummy variables.The dichotomous (present vs. not present) outcomes limit recording within animal, therefore the cow component cannot be estimated for these variables if there is no effect of lactation stage. Thus, the estimates from these models must thus be interpreted as results in a cross-sectional study i.e. a chance of observing a given characteristic in an observed cow.The variance components of farm and individual cow were calculated by the latent variable approach described by Dohoo .The results of the type 3 F-tests and the analyses of the variance components of farm and individual cow are presented in table n indicates that this is the last of the polynomials of DIM to be significant, all polynomials up to this link are included in the final model.DIMAs appears from table In the following section, the size of effect of observer, lactation stage, parity, farm and cow on the estimates of the significantly affected variables will be presented.The observers made statistically significantly different observations regarding frequency of overgrown claws (estimates varied between 40\u201360%), chorioptic mange (estimates varied between 0\u201380%), oedema , degree of teat skin quality , long udder (estimates varied from 0\u201350%), occurrence of wounds on teats and on warts (estimates varied from 5\u201330% for both) and occurrence of warts on teat end (estimates varied from 10\u201336%). For other variables i.e. soiled hind legs and teats (estimates varied from 5\u20139%), hock callus , udder consistency , and teat end callus , predicted estimates did not seem to vary greatly between observers, although the statistical analysis revealed significant differences. 
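A simplified sketch of the random regression model described above is shown below, fitted with statsmodels' MixedLM rather than SAS Proc Mixed. The data are simulated, only a linear DIM term is included, and the likelihood ratio test for the observer effect uses ML fits of two nested models; all variable names and values are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(3)

# Simulated data in the structure of the study: herds, cows within herds,
# repeated observations at different days in milk (DIM), observer and parity.
rows = []
for herd in range(16):
    herd_eff = rng.normal(0, 0.3)
    obs = herd % 3                                    # observer tied to herd
    for cow in range(20):
        cow_eff = rng.normal(0, 0.4)
        parity = int(rng.integers(1, 4))
        for dim in rng.integers(5, 305, 3):
            score = (2 + herd_eff + cow_eff - 0.002 * dim
                     + 0.1 * parity + 0.05 * obs + rng.normal(0, 0.5))
            rows.append(dict(herd=herd, cow=f"{herd}-{cow}", dim=dim,
                             parity=parity, obs=obs, score=score))
data = pd.DataFrame(rows)

# Herd random intercept plus cow variance component nested within herd,
# with fixed effects of DIM, observer and parity.
full = smf.mixedlm("score ~ dim + C(obs) + C(parity)", data, groups="herd",
                   re_formula="1", vc_formula={"cow": "0 + C(cow)"}).fit(reml=False)
reduced = smf.mixedlm("score ~ dim + C(parity)", data, groups="herd",
                      re_formula="1", vc_formula={"cow": "0 + C(cow)"}).fit(reml=False)

# Likelihood ratio test for the observer effect: difference in -2 log L ~ chi-square
lr = 2 * (full.llf - reduced.llf)
df_diff = len(full.fe_params) - len(reduced.fe_params)
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df_diff):.4f}")

# Variance components: herd, cow and residual shares of the unexplained variation
print("herd variance:", float(full.cov_re.iloc[0, 0]))
print("cow variance :", float(full.vcomp[0]))
print("residual     :", float(full.scale))
```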
An example of the observable magnitude of these differences is demonstrated in figure It can be seen in figure The degree of soiling of the udder decreased 0.25 score values respectively, over the course of lactation. Long udder shape was the only udder shape to be affected by lactation stage and not parity. The prevalence of cows with this udder shape fell during the course of lactation from 50% to 0%. Likewise, the prevalence of cows with udder oedema or udder inflammation fell to near 0% for both variables, though the prevalence fell more sharply for oedema. Although the effect of lactation stage on teat skin quality was significant, the change in the estimate for the score value was very small figure .Figure st lactation cows. Regarding the effect of parity on the expected prevalence of the udder shapes, the 'goat' udder was more prevalent in older cows, whereas the small udder shape was much more likely for young cows (24% vs. less than 5% for older cows). Cows with 'other teat shape' (mostly long) were more likely to be second or higher parity cows though there was little difference in the effect on the estimates.A higher percentage of older cows were seen to have overgrown claws and mange compared to 1Soiling of hind quarters was affected by both lactation stage and parity and decreased approximately 0.5 score values (the predicted score varied between observers) parallel for the three parity groups (no interaction). The percentage of cows with udders between hind legs (withdrawn udders) decreased for second and higher parity cows, but increased for first parity cows.Figure As shown in figure Figure The likelihood of finding third or higher parity cows with nodes was higher than for the younger cows and increased over the course of lactation; the effect on estimate was small (less than 1% difference). The only teat shape, which was significantly affected by both lactation stage and parity was short teat shape, in that the prevalence of first parity cows with short teats fell from 15% to 5% during lactation and the prevalence of older cows with short teats remained low throughout the lactation period for the other two parity groups.Figure Presence of mange, distinct palpable nodes in the udder, nodular tissue, long and short teats and scar tissue in the teat canal were highly affected by farm. In contrast to this soiling of hind part, udder consistency, teat skin quality and teat canal extraction were affected more by individual cow effects than by farm effect. The only variable to have an equal farm/animal component was callus size.The residual value was high for some variables, indicating that the major part of the variation between observations remained unexplained. These variables were asymmetry of the udder (front and hind quarters), long and 'goat' udders, signs of clinical mastitis and wounds on warts.The observers examined cows in different herds. The possibility that there are systematic differences between herds cannot be completely excluded despite herds being randomly allocated to the observers. However, it was assumed that the variation between cows and herds examined by the same observer was not significantly different from the variation between herds examined by different observers. 
The seasonal effect on measurement was not covered as a separate part of the analysis and is therefore included in the observer effect.The results indicate that differences between observers were not eliminated when dealing with variables like soiling of teats, wounds on teats, teat skin quality or udder consistency despite training. This is this unexpected since all observers had had joint training sessions, and the chosen variables express things in a relatively clear way.Soiling of teats and wounds on teats are examples of differences between observers, where different observers do not follow same pattern. The animals were examined immediately after milking, and therefore, soiling of the teats should not be expected. Since soiling of teats is critical, especially when found immediately after milking, some observers may have been more critical to this and noted very tiny specks of dirt, whereas others have not. In discussions among the observers after the study, disagreement regarding the judgement of necrosis vs. wounds was revealed. This may have been the reason for the recorded differences. More strict definitions and photo references with the categories indicated may be helpful in the classification of variables like soiling and wounds. Neijenhuis et al. found goFor some of the variables e.g. teat skin quality and udder consistency observers did see similar patterns in prevalence of variables, although there were differences in values. Observer variation has previously been described regarding teat-skin quality assessment. For example Rasmussen and co-authors showed tSoiling of legs and udder was found to be affected by lactation stage, whereas soiling od teats was not. This seems plausible. Cows are often transferred from a clean calving box to the milking stable where the environment may be more contaminated with faeces. Additionally, early in lactation cows may be fed a higher percentage of concentrate to meet high energy demands and this causes faeces to become less viscous . In contThe udder shapes defined as 'long' and withdrawn udder were all affected by lactation stage. The reason for the decrease seen in the prevalence of these udder shapes during the course of the lactation could very well be that the udder becomes less swollen during the course of the lactation period and therefore relaxes to drop further down. It is well documented that the high levels of cortisol measured immediately after calving often induce oedema and as tThe results of the study indicated that prevalence of cows with inflammation of the udder decreases over the course of the lactation period. This effect of lactation stage on the occurrence of udder inflammation is well documented. At the start of lactation udder infections have been found to be present at significantly higher levels than in mid- and late lactation .The occurrence of warts on teats rises significantly during the lactation. No evidence based on clinical examinations of cows exists to document that warts on teats should spread between cows in the milking stable. Bovine papilloma, which may cause warts on teats, is known as very contagious, and the results of this study can be viewed as a quantification of this contagiousness.There is a clear increase in the prevalence of older cows with mange infestation. This is an indication of spread of the parasite after introduction to the milking stable. 
Animals do not seem to rid themselves of this infestation once infested, and as these infested older cows are reintroduced after calving, as they often are, the uninfested first lactation cows pick up the infestation after introduction to the milking herd. The results thus indicate that the prevalence of infested animals will often be linked to the make up of the herd regarding parity distribution.The fact that cows with asymmetric udders (front vs. hind quarters) are more likely to be third parity or older cows is not surprising. Often this type of asymmetry is caused by the wear of the milking machine or the fact that quarters have been dried off after a case of a case of mastitis . SimilarThe combined effect of lactation stage on the prevalence of asymmetric udders rises for younger cows but falls for older cows. This may partly be due to the fact, as discussed above, that at the start of lactation, udder tissue will be more voluminous and therefore the difference between the glands will be more pronounced. However, the reason for the decrease seen over the course of the lactation in the prevalence of older cows with asymmetric udders is most likely the fact that, farmers cull old cows with atrophy of a gland. This gives the misleading impression that the number of cows with atrophy is falling amongst the older cows (selection bias). The same could be the case when explaining the higher prevalence of older cows with deep udders, since they are the ones left in the herd. It is known that cows with deep udders have an increased risk of mastitis as theseThe only teat shape, which was influenced by both lactation stage and parity, is short teat shape. This is very plausible, as one must expect some effect of the milking machine action on the teat . SimilarVariability in results between farms may reflect different conditions for doing observations, rather than true differences in the states of certain conditions in the cows. The presence of mange may serve as an example of this. When making the observations in the milking parlour, some farms had a big shield behind each cow, to protect the milker from kicks and manure in case this was relevant. This big shield made it difficult to observe for mange, and thus gave the possibly inaccurate result that the mange status differed significantly between the participating farms.The prevalence of wounds, teat scarring and hardness of teats were also all found to be affected by the farm. Qualitative interviews with farmers reveal tSoiled legs, hock lesions, udder consistency, teat skin, and teat end callus are all highly influenced by the effect of animal. All these variables are linked to the direct reactions of the animal to the environment. For example teat end callus has been found by Neijenhuis to vary In conclusion, there seem to be agreement between biologically plausible causes and the significance level of the individual effects . This suggests that the variables may contribute an 'objective' view of the health status on the individual farm. Although there was overall general agreement amongst observers that the observations were easy to perform some of the variables may need a significantly improved training and description, e.g. photo guides, in order to be consistent between observers.Generally, the results are consistent and biologically sound. 
The observed changes following lactation stage, parity or both do point to the relevance of the variables in a clinical examination and point to the fact that judgements of what is 'normal' and what is 'healthy' need to be viewed with a certain flexibility and in a context of farm, animal, lactation stage and parity.Discussions based on this type of information, which cannot be obtained in any other manner, form an ideal 'meeting place' for farmer and veterinarian for making decision plans and strategies for changing health problems."} +{"text": "Although numerous epidemiologic studies now use models of intraurban exposure, there has been little systematic evaluation of the performance of different models.In this present article we proposed a modeling framework for assessing exposure model performance and the role of spatial autocorrelation in the estimation of health effects.We obtained data from an exposure measurement substudy of subjects from the Southern California Children\u2019s Health Study. We examined how the addition of spatial correlations to a previously described unified exposure and health outcome modeling framework affects estimates of exposure\u2013response relationships using the substudy data. The methods proposed build upon the previous work, which developed measurement\u2013error techniques to estimate long-term nitrogen dioxide exposure and its effect on lung function in children. In this present article, we further develop these methods by introducing between- and within-community spatial autocorrelation error terms to evaluate effects of air pollution on forced vital capacity. The analytical methods developed are set in a Bayesian framework where multistage models are fitted jointly, properly incorporating parameter estimation uncertainty at all levels of the modeling process.Results suggest that the inclusion of residual spatial error terms improves the prediction of adverse health effects. These findings also demonstrate how residual spatial error may be used as a diagnostic for comparing exposure model performance. In the ow rate) . The datInterest in assessing exposure at the intraurban scale has grown for a variety of reasons, including early evidence of the large adverse health effects that may emerge from this scale of analysis. For example, 2 and ultrafine particles, variation within cities may exceed variations among central monitoring locations in different cities. Earlier studies from the United Kingdom indicate 2- to 3-fold differences in NO2 within distances of \u2264 50 m of a major road the marginal benefit of moving from less to more refined exposure models, b) the specific contribution of spatial terms to reducing exposure error, and c) the role of uncertainty in health effects analysis.In the present article we build on epidemiologic, land use, air pollution, and emission data to produce estimates of long-term NO2 concentrations were measured at 233 homes of CHS children selected from 11 of the 12 communities model model and dist2 concentrations were measured at 233 homes of CHS children during one 2-week period in the summer and one 2-week period in the winter. Subjects were approximately 10 years of age at enrollment and between 14\u201317 years of age when the NO2 measurements were taken. 
Here, we focus on the relationship between exposure to NO2 and FVC, a standard spirometric measure of lung volume Yci denotes measurements of lung function (FVC); b) Zcij denotes observed subject-level outdoor NO2 exposure measurements; c) Xci denotes the \u201ctrue\u201d unobserved annual outdoor household-level NO2 exposure level; d) Pcj denotes season-specific central-site exposure; e) Wci denotes a vector of household-level NO2 exposure predictors, including distance to the nearest major road, categorized as distance to the nearest freeway based on the road buffer , traffic density within 150 m of subjects\u2019 locations, and predicted NO2 concentration from the CALINE4 model; f ) Vci is a vector of personal covariates that affect the lung function, specifically including age, sex, race/ethnicity, height, body mass index (BMI), cohort enrollment group, height, exercise, smoking behavior, asthma, and respiratory illness at the time of lung function measurements; g) Ac and Bc are the community-specific intercepts in the lung function and exposure models, respectively; h) sy,ci and sX,ci are in turn the within-community spatial errors for the lung function and the long-term NO2 exposure. All NO2 levels, both observed and unobserved, are on the log scale. This analytical framework consists of the following three-level hierarchical models, lung function (level 1), exposure (level 2), and measurement (level 3) models, respectively:Similar to recent studies , NO2 serus study , the uniXc. and Pc. are community-specific averages of Xci and Pcj. The community-specific intercepts Ac and Bc were further modeled as:where andSY,c and SX,c are between-community spatial errors for Equations 4 and 5, respectively. In addition, the terms eY,ci , eX,ci, eZ,ci , EAc, and EBc are assumed to be normally distributed random errors with zero means and variances \u03c3Y2, \u03c3X2, \u03c3Z2, \u03c3h2, and \u03c3k2, respectively. All the spatial error terms, sY,ci , sX,ci , SY,c, and SX,c , were based on a conditional autoregressive (CAR) model. A directed acyclic graph (DAG) for the overall model is illustrated in where sY,ci and sX,ci are assumed to follow a spatial distribution defined by the CAR model were used to produce the Thiessen polygons for each subject where each polygon contains exactly one individual. Thiessen polygons are defined by a set of \u201ccenter\u201d points where each polygon is defined as the set of all points that are closer to a particular center than any other center. Using these polygons, adjacency-based weight matrices were constructed.based on a weight matrix, Thiessen polygons were used as a first approximation of possible spatial autocorrelation in health and environmental data. Because there is little prior evidence available on the likely spatial associations among subjects, the first-order connectivity matrix based on nearest neighbor proximity is used. This is a common approach in studies when little is known about the spatial processes that generate similarity of attributes by proximity . The modSY,c and SX,c were assumed to follow a CAR model with elements of the weight matrix specified as the inverse of driving distance between two communities. Because the subjects in this study were living in separate, disjoint communities all within a relatively small area within Southern California (an area of about 500 km at its maximum distance), most subjects would travel from one community to another via automobile. 
Therefore, community-level spatial correlation is reasonably well estimated by the driving distance between the communities. These driving distances were obtained by taking the average distances to drive in both directions for each pair of communities. Each one-way driving distance was obtained from the online mapping site The between-community spatial residual error terms N priors, where \u03c4N denotes precision with \u03c4N = 10\u22124. All standard deviation parameters were given flat uniform priors, U with \u03c4U = 10. Throughout the analyses, all measures of NO2, both estimated and observed, distance to nearest freeway, and the predicted NO2 based on CALINE4, as well as the outcome, Yci, were measured on a log scale. The log transformation of the lung function outcome helps satisfy the normality assumptions of the model as was established in previous analysis of CHS data , resulting in a new exposure model in which a random town-level intercept term is the only nonresidual term used to predict long-term NO2. Subsequent models were formed by including combinations of relevant traffic-related parameters; namely, models were formed by including/excluding various combinations of covariates in the term Wci. All these models were fit with and without the presence of spatial error terms in order to examine the usefulness of various traffic-related covariates in explaining the extent to which the relationship of interest (lung function and NO2) varied spatially.Several different models were fit to the data to examine the effects of including various amounts of spatial information into exposure model (Equation 2). The \u201cbase\u201d model did not include any traffic-level exposure variables. In other words, For each model, we calculated the deviance information criterion (DIC) , which c2, meaning that higher air pollution exposure is associated with decreased lung function as measured by FVC. Models may be interpreted as log\u2013log elasticities, such that a value of \u22120.14 means that for every 10% increase in long-term NO2 exposure, there is a decrease of 1.4% in lung function. The posterior 95% credibility intervals for the effect of NO2 on lung function are consistently narrower in models that use spatial residual terms compared with models without spatial errors included. The point estimates are also consistently smaller in the spatial models. All models show a negative association between lung function and long-term exposure to NOeci replacing Sci . The variances of the community-level spatial and independent error terms across all subjects are defined to be the average across Gibbs samples of the within-community variances, namely,and these are then averaged across Gibbs samples; the variances of the independent errors are computed similarly with wherePosterior distributions are obtained for each of these community-specific parameters, and from these posterior means, each2 exposure than in modeling lung function. We have not reported results from the between-community spatial variances because these were very small.is obtained. It is evident from these figures that the spatial error terms were of much greater value in estimating long-term NO2 with observed seasonal and central-site averages. 
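The within-community spatial error term described earlier can be sketched directly: Thiessen-polygon neighbours coincide with Delaunay-triangulation neighbours of the subject locations, and the first-order adjacency matrix built from them defines a CAR distribution. The proper-CAR parameterisation and the values of tau and rho below are illustrative rather than the fitted ones.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(4)

# Subject locations within one community; Thiessen-polygon neighbours are the
# Delaunay-triangulation neighbours of the same points.
pts = rng.uniform(0, 10, size=(50, 2))
tri = Delaunay(pts)

n = len(pts)
W = np.zeros((n, n))
for simplex in tri.simplices:                  # mark pairs sharing a triangle edge
    for i in range(3):
        a, b = simplex[i], simplex[(i + 1) % 3]
        W[a, b] = W[b, a] = 1.0

# Proper CAR form: s ~ N(0, [tau * (D - rho * W)]^-1), with D = diag(row sums of W)
tau, rho = 1.0, 0.9                            # illustrative precision and dependence
D = np.diag(W.sum(axis=1))
Q = tau * (D - rho * W)                        # precision matrix of the spatial errors
cov = np.linalg.inv(Q)
s = rng.multivariate_normal(np.zeros(n), cov)  # one draw of within-community spatial errors
print("spatial error draw:", s[:5].round(3))
```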
Although this figure displays only posterior averages of modeled exposure, the MCMC framework fully incorporates the uncertainty in these modeled estimates in the estimation of all model parameters.2 measurements made at community central site monitors and between lung function and local variation in traffic exposure . Comparing the results in 2 pollution is caused by near-source traffic emissions or consistent transport from neighboring communities.Comparison of In the health-plus-exposure models, there is heterogeneity in the residual variance between the communities. For example, in the health model, the communities of Lancaster, Atascadero, and Upland have the largest unexplained variance. These communities are in different locations some hundreds of kilometers apart. Thus there is no obvious underlying similarity or spatial pattern in how community location and characteristics influence the residual variation in lung function.2 in these locations and the associated lower range of exposure.In contrast, the exposure models perform much better in the inland areas of the Los Angeles Basin with respect to the magnitude of residual errors displayed in 2 exposure at homes not measured in the pilot study. To facilitate the prediction of exposure, this model assumes stationarity, in that the amount of spatial correlation between two points is simply a function of the Euclidian distance between the points. Because we are primarily interested in assessing the effect of exposure on lung function and not in spatial prediction, and because assumptions of stationarity would questionable in our context, we have decided against using this model here.Regarding spatial errors, one could use a Bayesian geostatistical kriging model of the form described in Through examination of DIC, spatial autocorrelation in the outcome and exposure, and the subsequent impacts on point estimates and credible intervals, we have developed a framework for assessing spatial exposure model performance. In most cases, we were able to improve the certainty of our health effects estimates with information on residual spatial autocorrelation, but these improvements were, as expected, more pronounced in models that contained less informative exposure information. Exposure models with small (good) DIC had relatively less improvement from additional spatial information. This finding suggests a more general approach for assessing model performance where the point estimates and confidence intervals are more robust to inclusion of additional information, probably because of less bias in the initial estimates from nonindependence in the observations, particularly from excluded exposure information. As noted below, the generalizability of these findings is limited by the sample size used, but this will be partly addressed in future research.There are limitations to this study that merit attention in future research. We have exposure information from only two 2-week periods in different seasons measured at the home. Although there are more field measurements than in most similar large epidemiologic investigations, it is possible that our estimates are not an accurate depiction of long-term exposure because of temporal variation in exposure. However, the measurement model (Equation 3) is not written in the way classic measurement error models are generally written, where observed measures of exposure are assumed to deviate around true unobserved exposure values with zero-error residuals. 
Instead, we have incorporated an extra term that calibrates local measurements for temporal variation as assessed by the central site measurements.Furthermore, the relatively small sample size, although drawn from a larger cohort, may not be representative of the general population or of the exposure experienced by the entire cohort. Other analyses suggested few significant differences between this sample and the larger cohort , but cauWe have collected subsequent information from over 1,000 locations in a related study over three seasons that will allow us to address the weaknesses described previously. Also, our unified modeling framework will allow us to combine information from the entire cohort, as individual-level exposures that may not exist in the larger cohort study but are present in the pilot study can be imputed in a way that fully utilizes all available covariate information. Because of the small sample within each community in the pilot study analyzed for this article, we were unable to evaluate other predictors of exposure based on other land uses , a methoHere we sought to examine how different models of intraurban air pollution exposure classify and predict FVC in an integrated Bayesian modeling framework. Building on the CHS , 2007 an"} +{"text": "Since the U.S. Environmental Protection Agency began widespread monitoring of PM2.5 levels in 1999, the epidemiologic community has performed numerous observational studies modeling mortality and morbidity responses to PM2.5 levels using Poisson generalized additive models (GAMs). Although these models are useful for relating ambient PM2.5 levels to mortality, they cannot directly measure the strength of the effect of exposure to PM2.5 on mortality. In order to assess this effect, we propose a three-stage Bayesian hierarchical model as an alternative to the classical Poisson GAM. Fitting our model to data collected in seven North Carolina counties from 1999 through 2001, we found that an increase in PM2.5 exposure is linked to increased risk of cardiovascular mortality in the same day and next 2 days. Specifically, a 10-\u03bcg/m3 increase in average PM2.5 exposure is associated with a 2.5% increase in the relative risk of current-day cardiovascular mortality, a 4.0% increase in the relative risk of cardiovascular mortality the next day, and an 11.4% increase in the relative risk of cardiovascular mortality 2 days later. Because of the small sample size of our study, only the third effect was found to have > 95% posterior probability of being > 0. In addition, we compared the results obtained from our model to those obtained by applying frequentist and Bayesian versions of the classical Poisson GAM to our study population.Considerable attention has been given to the relationship between levels of fine particulate matter (particulate matter \u2264 2.5 \u03bcm in aerodynamic diameter; PM Researchers have found that acute episodes of increased particulate matter (PM) are associated with nonaccidental mortality , total mIn attempting to explore the relationship between PM exposure and morbidity or mortality, care should be taken not to assume that the relationship between ambient levels and mortality implies a similar connection between exposure and mortality. It is well documented that ambient levels poorly approximate true exposure . One rec10) and mortality, an HEI study to mortality. At the next stage of the hierarchy, the latent exposure is related to ambient PM levels using a linear regression form. 
To provide information about the coefficients of the regression relating the latent exposure to ambient levels, Samet et al. hypothesized that the same linear form is appropriate for each of five exposure studies and linked the coefficients in each study and the Baltimore population together through another level in the hierarchy.In an effort to include exposure information in a model linking levels of PM \u2264 10 \u03bcm in aerodynamic diameter , average exposure to PM2.5, and cardiovascular mortality that incorporates an exposure simulator similar to SHEDS-PM. Unlike most studies, our model allows us to directly quantify the effect of exposure to PM2.5 on cardiovascular mortality. Bayesian hierarchical modeling is a framework that allows multiple data sources and statistical modeling techniques to be incorporated into a single coherent statistical model codes I00 to I99; World Health Organization 2.5 data for all available monitors in North Carolina during 1999\u20132001 were obtained from the U.S. Environmental Protection Agency (EPA) Aerometric Information Retrieval System/Air Quality Subsystem (AIRS/AQS) database linking monitor readings to ambient levels over the study region, b) linking ambient levels to exposure levels, and c) linking exposure levels to mortality without error each day. The first level of our model specifies the spatial distribution of PM2.5 and relates that distribution to readings taken at monitors on a single day.Central to our model relating PM levels to mortality is that, for any given day, a continuous surface of ambient PM2.5 and determined that PM2.5 exhibits strong spatial correlation over the region of interest T, MN is the multivariate normal distribution, Mt is a design matrix of covariates, \u03b8 is a parameter vector, and \u2211 is an n\u03c8 \u00d7 n\u03c8 spatial covariance matrix constructed using information from our exploratory spatial analysis of outdoor PM2.5 levels. For each site, s(1), \u2026 , s(n\u03c8), Mt includes a row with elements representing an overall mean, maximum temperature, average wind speed, and two sinusoidal terms that capture seasonal cycles. We considered the corresponding five regression coefficients, \u03b8 = , to be unknown, and we minimized prior influence by placing vague N priors on these parameters.We conducted a spatial analysis of PMs(1), \u2026 , s(n\u03c8) for which the spatial distribution of PM2.5 is estimated need not be locations with monitors. The matrices Mt and \u2211 are defined for any location in our modeled domain. In fact, in our implementation we modeled the spatial process at several locations that do not have monitors to better characterize the average ambient level over the entire spatial area of each county.The sites 2.5 monitors measure the ambient PM2.5 surface with some error (measurement error and other random sources of error) at their locations: Xt(s) | \u03c8t(s), \u03c3x2 ~ N, where Xt(s) is the monitor reading at monitoring site s at time t, \u03c8t(s) is the value of the ambient surface at the location of monitoring site s at time t, and \u03c3x2 is the variance of the measurement error. This construction automatically incorporates the additional uncertainty about the ambient PM2.5 surface on days when fewer monitors take readings. 
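The role of the level-1 surface model in characterising ambient PM2.5 away from the monitors can be illustrated with a small Gaussian-conditioning calculation: given a multivariate normal surface and noisy monitor readings, the surface at unmonitored sites is predicted by conditioning on the observed values. The exponential covariance, the parameter values and the site layout below are illustrative stand-ins for the fitted spatial model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Ambient surface psi ~ MVN(mu, Sigma) over a set of sites, observed with
# independent measurement error sigma_x^2 at the monitored subset.
sites = rng.uniform(0, 100, size=(30, 2))             # all sites (monitored + unmonitored)
monitored = np.arange(10)                             # first 10 sites have monitors
d = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)

sigma2, phi, sigma_x2 = 4.0, 30.0, 1.0                # illustrative covariance parameters
Sigma = sigma2 * np.exp(-d / phi)                     # exponential spatial covariance
mu = np.full(len(sites), 12.0)                        # stand-in for the mean surface M @ theta

# Simulate one day: true surface and noisy monitor readings
psi = rng.multivariate_normal(mu, Sigma)
x_obs = psi[monitored] + rng.normal(0, np.sqrt(sigma_x2), len(monitored))

# Conditional mean of the full surface given the monitor readings
S_oo = Sigma[np.ix_(monitored, monitored)] + sigma_x2 * np.eye(len(monitored))
S_ao = Sigma[:, monitored]
psi_post_mean = mu + S_ao @ np.linalg.solve(S_oo, x_obs - mu[monitored])
print("posterior mean at first unmonitored site:", round(float(psi_post_mean[10]), 2))
```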
Days when more monitors take readings (every third or sixth day) will carry more information about the ambient surface than will days when only a subset of daily monitors takes readings, so our uncertainty about the ambient surface will be smaller on these days.In relating monitor readings to the ambient surface we have defined, we assumed that the PMx2, the variance of the measurement error at the PM2.5 monitors, precision and accuracy data were downloaded from the AIRS/AQS database (\u22123) for \u03c3x2. This prior was developed using a simple conjugate inverse-gamma/normal model , where Zct is the average exposure level in county c at time t, \u03c8\u0304ct is the average ambient level in county c at time t, \u03be(\u03c8\u0304ct) is the average exposure level predicted by the simulator in county c at time t as a function of the average ambient level, and \u03c3z2 is the variance of the error in the simulator. We place a uniform prior on \u03c3z2. Although there is not enough information in the data to estimate \u03c3z2 accurately, allowing it to be random incorporates our uncertainty in the simulator into the model resulting in more accurate uncertainty estimates at the third level.To account for possible discrepancy between the simulator predicted value of exposure and true exposure levels, we specified that the average exposure level in a given county is normally distributed around the \u2013value predicted by the simulator: 2.5 and mortality. Mortality was assumed to be Poisson distributed with a mean that depends on average PM2.5 exposure in the current and 3 previous days as well as the values of several confounders:In the third level of the model, we linked exposure directly to mortality using the Poisson GAM form commonly used in studies of the link between PMYct is the mortality in county c on day t, Ec is the expected daily mortality rate in county c , \u03bbct may be interpreted as a relative risk of death in county c on day t, \u03bc is an overall baseline relative risk of death in the study region over the time period studied, \u03b20, \u2026 , \u03b23 are parameters describing the influence of county-level average exposure on mortality rate, fp(Cpct) are transformations of confounding variables, and \u03b71, \u2026 , \u03b7P are parameters describing the influence of confounding variables on mortality. For our data set, confounding variables included a factor variable for the day of the week, a cubic spline transformation of time to account for long-term trends in cardiovascular mortality, a cubic spline transformation of maximum temperature, a cubic spline transformation of relative humidity, and cubic spline transformations of 1- to 3-day lagged values of maximum temperature and relative humidity. The cubic spline transformation of time included 21 evenly spaced knots, and the cubic spline transformations of maximum temperature and relative humidity each included five evenly spaced knots. The model was not assessed for sensitivity to the placement of these knot locations. We reparameterized the confounding variable term into a design matrix and coefficient vector (\u03b3), and we placed vague N priors on the coefficients. 
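The conjugate inverse-gamma/normal construction mentioned above for the measurement-error variance can be written out directly. The QA readings, reference value and prior parameters below are hypothetical and serve only to show the form of the update.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Hypothetical collocated QA data: repeated readings of a known reference value,
# standing in for the precision/accuracy checks used to inform sigma_x^2.
truth = 15.0
readings = truth + rng.normal(0, 1.5, 40)

# Conjugate model: sigma_x^2 ~ IG(a0, b0) prior; y_i | sigma_x^2 ~ N(truth, sigma_x^2)
a0, b0 = 0.01, 0.01                                   # weak prior (illustrative)
n = len(readings)
a_post = a0 + n / 2.0
b_post = b0 + 0.5 * np.sum((readings - truth) ** 2)

# Posterior for sigma_x^2 is inverse-gamma(a_post, scale=b_post)
post = stats.invgamma(a_post, scale=b_post)
print(f"posterior mean of sigma_x^2 = {post.mean():.2f}")
print(f"95% interval: ({post.ppf(0.025):.2f}, {post.ppf(0.975):.2f})")
```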
We also placed vague N priors on all of the \u03b2-parameters describing the strength of the relationship between PM2.5 exposure and cardiovascular mortality at different lags as well as on the overall mean relative risk parameter, \u03bc.where 2.5, the relationship between exposure and ambient levels, and the relationship between exposure and cardiovascular mortality simultaneously. In contrast, the hierarchical approach allows us to specify each level of the model conditionally independent of other levels and to combine the information at the end to obtain a joint distribution of all parameters. The third advantage is that elements of the hierarchy can be substituted without changing the overall form of the model. For instance, we could substitute a different exposure simulator in the second level of the model.Although we have introduced a three-level model, we emphasize that the three levels of the model are all fitted simultaneously as a single coherent statistical model. There are three main advantages to creating a hierarchical Bayesian model for solving such a complex problem. The most important advantage is that uncertainty in parameters is propagated throughout the model. For example, our uncertainty about the true ambient surface carries through to result in a corresponding level of uncertainty about the effect of exposure on cardiovascular mortality. The second important advantage of hierarchical Bayesian modeling is that it is simple to specify large, complex models using simpler statements about conditionally independent parameters. It would be impossible to specify the joint distribution of the thousands of parameters involved in our model if we tried to model the spatial properties of PMModel fitting was performed using a Markov chain Monte Carlo algorithm . The algThe marginal posterior distributions of several important parameters are summarized in 2.5 exposure on the relative risk of cardiovascular mortality. The posterior marginal expectations of the parameters indicate that a 10-\u03bcg/m3 increase in average PM2.5 exposure is associated with a 2.5% increase in the relative risk of current day cardiovascular mortality, a 4.0% increase (\u20133.3 to 12.2) in the relative risk of cardiovascular mortality the next day, an 11.4% increase (2.8 to 19.8) in the relative risk of cardiovascular mortality 2 days later, and a 1.1% decrease (\u20137.5 to 5.2) in the relative risk of cardiovascular mortality 3 days later. These rates were calculated by multiplying the \u03b2-value corresponding to the effect by 10 and exponentiating. Only the effect on the second day after exposure has a > 95% posterior probability of exceeding zero. Note that the estimates presented are marginal expectations and therefore cannot be added together in a meaningful way. The negative estimate on the third day might be considered an unexpected effect, but it does lend some support to the theory of harvesting , the point estimate is similar to the one obtained in our analysis.We are unaware of any other study that has attempted to directly estimate the effect of PMEI study . In that2.5 exposure on cardiovascular mortality, we can also address the effect of changes in the ambient level on the relative risk of cardiovascular mortality. To determine the relationship between ambient levels and relative risk induced by our model, we examined the joint posterior distribution of average ambient levels, \u03c8\u0304ct, and log relative risk, \u03bbct, on the same and closely following days. 
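The conversion used for the figures quoted below (multiply the lag-specific coefficient by 10 and exponentiate) is straightforward to apply to posterior draws. The draws here are simulated stand-ins whose location and spread were chosen only to roughly echo the lag-2 result; they are not actual MCMC output.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated stand-in for MCMC draws of the lag-2 exposure coefficient.
beta2_draws = rng.normal(loc=0.0108, scale=0.0040, size=20_000)

pct_change = 100 * (np.exp(10 * beta2_draws) - 1)   # "multiply by 10 and exponentiate"

print("posterior mean increase (%):", pct_change.mean().round(1))
print("95% interval               :", np.percentile(pct_change, [2.5, 97.5]).round(1))
print("P(beta > 0)                :", (beta2_draws > 0).mean().round(3))
```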
3 increase in ambient level is associated with a 0.09% increase in the relative risk of cardiovascular mortality on the same day, a 0.2% increase the next day, a 1.0% increase 2 days later, and a 1.4% decrease 3 days later. As with the estimates of effect of exposure on cardiovascular mortality, these estimates are marginal effects and should be interpreted individually; they should not be combined to find an overall effect. These estimates tend to be lower than some comparable estimates reported in the epidemiologic literature. The effect of 2-day mean ambient levels on total mortality has been estimated at 3.3% for chronic obstructive pulmonary disease, 2.1% for ischemic heart disease . The remainder of the model is specified exactly as in our original Bayesian model. Summaries of the parameters of most interest, the \u03b2-parameters, appear in 2.5 exposure on the log relative risk of cardiovascular mortality, whereas the parameters in the other models relate ambient PM2.5 levels to the log relative risk of cardiovascular mortality.The second alternate model that we fitted replaces level 2 of our Bayesian model with a simplified exposure link. Rather than including an exposure simulator, we constructed alternate model 2 by hypothesizing that exposure is equal to the ambient level plus some error [i.e., The results from alternate model 1, the Bayesian model with no spatial interpolation or exposure link, are comparable with the results obtained by fitting the classical Poisson GAM in each of the three counties. This similarity gives evidence that the Bayesian approach produces results similar to those ordinarily obtained using the classical Poisson GAM approach. However, using a Bayesian model allows the incorporation of additional data sources and levels into the hierarchy, so the Bayesian model is more readily expanded.2.5 exposure, not ambient level, on mortality. The results from alternate model 2 are more comparable with those obtained from our full Bayesian model. This similarity indicates that our model is robust to our choice of exposure simulator. However, we do not conclude that the exposure simulator is unnecessary because increased accuracy of simulated exposures will lead to more accurate estimates of the effect of exposure on mortality.As expected, the results from alternate model 2 are different from the results obtained from the classical models and alternate model 1; alternate model 2 summarizes the effect of PM2.5 monitor readings and mortality into three intuitive levels, we have shown that elevated PM2.5 exposure is related to increased risk of cardiovascular mortality in the closely following days. We found that increases in the level of PM2.5 exposure are most closely related to increased relative risk of cardiovascular mortality 2 days later. In addition, we have demonstrated that the effect of increased levels of exposure on cardiovascular mortality is not equivalent to the effect of increased levels of ambient PM2.5 on cardiovascular mortality. Our results are similar to those reported in several studies lending additional support to our findings. 
In addition, we estimate that the association between ambient levels and relative risk of cardiovascular mortality on closely following days is lower than what has been previously reported in the literature.By constructing a hierarchical Bayesian model that divides the process linking PM2.5 values, and may introduce biases in estimation by assuming that the outdoor level is the same for each individual, calculating individual exposures, and then averaging across individuals (Despite the sophistication of our model, the second level of the model leaves room for improvement. A deficiency of the second level is the absence of real exposure data. Another limitation of the second level is the simplicity of our exposure simulator; our exposure simulator ignores changes in people\u2019s activity patterns over different days of the week and different seasons, uses fixed values to relate indoor and outdoor PMividuals .2.5 exposure and cardiovascular mortality.Future work on this type of model might focus on addressing the weaknesses in the second level of our model. For example, if real exposure data can be acquired, a data-driven version could be substituted without substantially changing the structure of the model. Similarly, a more complex exposure simulator that takes seasons and the day of the week into account could be substituted to improve the reliability of the results. Nonetheless, the results obtained by incorporating a simple exposure simulator into the model provide valuable insight into the relationship between PM"} +{"text": "For example, during 1 January 2002\u201311 February 2002 in the west area, the arithmetic mean exposure and an upper reported value (0.500 fibers/cm3-TLA) were above the PEL ( the PEL .Even with evidence of higher exposure levels, on the basis of reported data , it is u"} +{"text": "The effects of measurement error in epidemiological exposures and confounders on estimated effects of exposure are well described, but the effects on estimates for gene-environment interactions has received rather less attention. In particular, the effects of confounder measurement error on gene-environment interactions are unknown.We investigate these effects using simulated data and illustrate our results with a practical example in nutrition epidemiology.We show that the interaction regression coefficient is unchanged by confounder measurement error under certain conditions, but biased by exposure measurement error. We also confirm that confounder measurement error can lead to estimated effects of exposure biased either towards or away from the null, depending on the correlation structure, with associated effects on type II errors.Whilst measurement error in confounders does not lead to bias in interaction coefficients, it may still lead to bias in the estimated effects of exposure. There may still be cost implications for epidemiological studies that need to calibrate all error-prone covariates against a valid reference, in addition to the exposure, to reduce the effects of confounder measurement error. One of the largest difficulties facing epidemiological research is that of measurement error in an exposure or relevant confounders -4. MeasuThe source of measurement error may occur in the assessment tool used to determine the extent of exposure or dietary confounder. For example, food frequency questionnaires may use crude measures of portion size, frequency of consumption, and use broad food groupings, which all limit the precision with which dietary intake can be estimated. 
In addition, the source of error could be random variation in the exposure attributable to chance fluctuations, and not dependent on the assessment tool. In this way natural variation in individuals' diets from day-to-day and week-to-week could lead to random error in estimating long-term dietary intake. For example, a food diary or a series of 24 hour recalls may record actual intake more precisely than a food frequency questionnaire (FFQ), but only represents a short period of time so will lack precision compared to true long-term intake. Another source of error could be related to the individual completing the dietary assessment, leading to a person-specific bias and measurement errors in two instruments being correlated -21.One area of epidemiology receiving increasing attention is that of the gene-environment interaction. The researcher is often interested in whether an epidemiological exposure has a different effect dependent on an individual's genotype. Alternatively, they may want to identify groups, identifiable on the basis of genotype or phenotype, at greater risk from a particular exposure. One type of gene-environment interaction that can be investigated is the gene-diet interaction, where the environmental exposure is a particular dietary intake. Whilst the effects of measurement error on estimation procedures such as linear regression are well known for main effects, the influence of errors on estimation of interaction terms is not well documented. In particular, the effect of measurement error in confounding variables on a statistical interaction is unknown.We aim to characterise the impact of measurement error in an exposure and in a confounder in the estimation of both main effects as well as their interaction. We present a series of simulations demonstrating the effect of measurement error in a variety of situations. We illustrate our findings with a recent cohort study where we investigate the relationship between HFE genotype for haemochromatosis (iron overload), diet, and serum ferritin concentrations .X, and its surrogate, W, measured with error U under the classical additive measurement error model such that W = X + U. We assume X~N, U~N, and that given X, W contributes no additional information about the outcome, Y. This means that, in terms of conditional probability distributions, f = f(Y|X). In addition we represent the genotype, G, as coded 1 for homozygotes and 0 for heterozygotes and wild types, where G~bernoulli(p). We assume p = 0.2. We generate a potential confounding variable, C, such that C~N, corr = \u03c1xc, corr = \u03c1yc, and C's surrogate, D, is measured with error such that D = C + V, where measurement error V~N. For each scenario, we generate n observations such that Y = \u03b20 + \u03b21G + \u03b22X + \u03b23 G.X + \u03b24 C + \u03b5, where \u03b5 represents residual error. For the purposes of estimating standard deviations of estimates and the probability of rejecting the null hypothesis H0, we assume the residual error \u03b5~N. Parameters were chosen to give reasonable R2 values approximately in the range 10\u201325%, based on experience in the UK Women's Cohort varied with confounder measurement error. Since measurement error in the confounder has no noticeable effect on either the estimate of the interaction coefficient or the empirical standard deviation of the estimates, the power for this assessment is unaffected. This also holds for the ratio estimate of interaction. 
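A minimal simulation in the spirit of this setup is sketched below. The correlation, error variances and regression coefficients are illustrative placeholders rather than the published values, and the confounder's association with the outcome is induced only through its coefficient rather than by fixing corr(Y, C) directly.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2_000

rho_xc = 0.4
X, C = rng.multivariate_normal([0.0, 0.0], [[1.0, rho_xc], [rho_xc, 1.0]], size=n).T
G = rng.binomial(1, 0.2, size=n)             # genotype, coded 1 for homozygotes
W = X + rng.normal(0.0, 0.7, size=n)         # error-prone exposure, W = X + U
D = C + rng.normal(0.0, 0.7, size=n)         # error-prone confounder, D = C + V

b0, b1, b2, b3, b4 = 0.0, 0.3, 0.5, 0.4, 0.3
Y = b0 + b1 * G + b2 * X + b3 * G * X + b4 * C + rng.normal(0.0, 1.0, size=n)

def fit(exposure, confounder):
    design = sm.add_constant(np.column_stack([G, exposure, G * exposure, confounder]))
    return sm.OLS(Y, design).fit().params    # const, G, exposure, G:exposure, confounder

print("true X and C      :", fit(X, C))
print("error-prone C (D) :", fit(X, D))      # interaction estimate essentially unchanged
print("error-prone X (W) :", fit(W, C))      # interaction estimate attenuated
```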
Monte Carlo error was 1% of the empirical standard deviation of the estimates, giving adequate precision in the estimates to two decimal places.When the exposure and error-prone confounder are positively correlated exposure and confounding variables was 0.15, but the correlation between their predicted true values from the regression calibration was 0.20. Before considering the effect of the confounder , ignoring measurement error in the exposure (haem iron intake) leads to the exposure effect being underestimated by approximately 20% and the interaction with genotype being underestimated by 15%, compared to adjustment for measurement error using regression calibration We have assumed that the genotype is independent of the exposure and confounder, including independence from the exposure variance and exposure error variance. This is not an unreasonable assumption in most epidemiological settings because it is unlikely that genotype will influence an environmental exposure such as dietary intake, or an environmental confounder that is associated with the exposure and the outcome. Similarly, other potential confounders such as age or sex are unlikely to be related to most genotypes under study. However, this assumption must hold for these results to be valid.W = a + bX + U, where a indicates the component of bias in the measured W and b a component of attenuation multiplying exposure X. Whilst regression calibration is able to estimate E(X|W) providing an adequate validation measure is available (e.g. a biomarker for the exposure), the combined effects of the different sources of mis-measurement will be more complicated than those described in this paper.(ii) We have also assumed a simple random error model. In nutrition it is quite common for a dietary assessment tool to measure diet with a component of bias and attenuation in addition to random error, such that (iii) A further assumption is that there is no genotype by confounder interaction. If this were the case then confounder measurement error would influence the estimate of the genotype by exposure interaction.1 and 3 are affected by measurement error in the confounder because of the non-identity link function.(iv) For logistic regression with a binary outcome, the estimated coefficients (v) Any measurement error in the genotype will add additional error in the manner of any other exposure, biasing the estimate of the interaction effect.The suggestion that confounder measurement error has no effect on the estimate of the interaction term under the conditions outlined above does not detract from the impact it may have on other estimates. Confounder measurement error leaves residual confounding that may have a substantial impact on the estimated effect of correlated covariates.One way to view the effect of confounder measurement error on the estimated interaction effect is to consider the interaction term as allowing the exposure effect to vary across two subgroups defined by genotype (e.g. carriers and non-carriers). The interaction term measures the difference in exposure effect between the two subgroups. Measurement error in a confounder biases the effect of exposure to the same extent in each subgroup, and therefore does not alter the estimated interaction term. 
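Returning to the regression-calibration adjustment mentioned above, a minimal univariate sketch is given here: the calibration equation E(X | W) is estimated in a validation subsample with a reference measurement and then applied to the whole cohort. A real analysis would typically include covariates in the calibration model; the function and variable names are invented for the example.

```python
import statsmodels.api as sm

def regression_calibration(W_cohort, W_validation, X_reference):
    """Estimate E(X | W) in a validation subsample that has a reference measurement
    (e.g. a biomarker or repeated diaries), then predict 'true' exposure for the cohort."""
    calib = sm.OLS(X_reference, sm.add_constant(W_validation)).fit()
    a, b = calib.params                      # E(X | W) is approximated by a + b * W
    return a + b * W_cohort

# Under the classical additive error model, the slope b estimates the reliability
# ratio var(X) / (var(X) + var(U)), i.e. the attenuation factor.
```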
If a situation arose in which confounder measurement error differed across the subgroups, perhaps through different data collection procedures, then this would lead to confounder measurement error biasing the estimated genotype by exposure interaction.Many exposures in nutrition epidemiology have much greater measurement errors associated with them than those in our illustration. Reliability ratios are commonly in the region of 0.3 to 0.5, and even these may underestimate the magnitude of the problem; ratios in the order of 0.1 or 0.2 may be more realistic when derived from models calibrating measured intake against biomarkers ,40.Estimated coefficients for the main effects cannot be assumed to be conservative and only attenuated towards the null in the presence of measurement errors, since errors in confounders may lead to bias in either direction. Measurement error has a more predictable effect on interaction coefficients, which are generally biased towards the null by random measurement error in exposure variables though unaffected by random confounder measurement error in linear regression when genotype can be assumed error-free and independent of exposure and confounder. Despite this, when designing studies where covariates are anticipated to contain measurement error, it is important not only to estimate the measurement error variance of the exposure, but also the measurement error structure of potential confounders. This may have cost implications for large cohort studies where repeated measurements, more labour intensive instruments, or biomarkers may be needed for a large subsample in order to provide adequate precision to adjusted estimates.The study of haemochromatosis was funded by the UK Food Standards Agency. The UK Women's Cohort Study was funded by the World Cancer Research Fund. Apart from this, no authors have any competing interests.DCG had the original idea, designed, conducted and interpreted the simulations and analyses, and wrote the first draft. All authors contributed to further discussion, contributed to subsequent drafts, and approved the final version.The pre-publication history for this paper can be accessed here:"} +{"text": "Case studies and anecdotal reports have documented a range of acute illnesses associated with exposure to cyanobacteria and their toxins in recreational waters. The epidemiological data to date are limited; we sought to improve on the design of some previously conducted studies in order to facilitate revision and refinement of guidelines for exposure to cyanobacteria in recreational waters.2/mL), medium (2.4\u201312.0 mm2/mL) and high (>12.0 mm2/mL) levels of cyanobacteria in lakes and rivers in southeast Queensland, the central coast area of New South Wales, and northeast and central Florida. Multivariable logistic regression analyses were employed; models adjusted for region, age, smoking, prior history of asthma, hay fever or skin disease (eczema or dermatitis) and clustering by household.A prospective cohort study was conducted to investigate the incidence of acute symptoms in individuals exposed, through recreational activities, to low agreed to participate and 1,331 (37%) completed both the questionnaire and follow-up interview. Respiratory symptoms were 2.1 (95%CI: 1.1\u20134.0) times more likely to be reported by subjects exposed to high levels of cyanobacteria than by those exposed to low levels. 
Similarly, when grouping all reported symptoms, individuals exposed to high levels of cyanobacteria were 1.7 (95%CI: 1.0\u20132.8) times more likely to report symptoms than their low-level cyanobacteria-exposed counterparts.2/mL could result in increased incidence of symptoms. The potential for severe, life-threatening cyanobacteria-related illness is likely to be greater in recreational waters that have significant levels of cyanobacterial toxins, so future epidemiological investigations should be directed towards recreational exposure to cyanotoxins.A significant increase in reporting of minor self-limiting symptoms, particularly respiratory symptoms, was associated with exposure to higher levels of cyanobacteria of mixed genera. We suggest that exposure to cyanobacteria based on total cell surface area above 12 mm Planktonic cyanobacteria are common inhabitants of freshwater lakes and reservoirs throughout the world. Under favourable conditions, certain cyanobacteria can dominate the phytoplankton within a waterbody and form nuisance blooms. The principal public health concern regarding exposure to freshwater cyanobacteria relates to the understanding that some blooms produce toxins that specifically affect the liver or the central nervous system. Exposure routes for systemic poisoning by these toxins are oral, from accidental or deliberate ingestion of recreational water, and possibly by inhalation.et al [et al [A small collection of case reports and anecdotal references dating from 1949 have described a range of illnesses associated with recreational exposure to cyanobacteria: hay fever-like symptoms, pruritic skin rashes and gastro-intestinal symptoms are most frequently reported. Some papers give convincing descriptions of allergic responses to cyanobacteria; others describe more serious acute illnesses, with symptoms such as severe headache, pneumonia, fever, myalgia, vertigo and blistering in the mouth. Anecdotal and case reports and the epidemiology of recreational exposure to freshwater cyanobacteria were recently reviewed by Stewart et al -4, a smaet al , and a let al . The UK l [et al reportedDespite this limited and inconclusive evidence, the World Health Organization (WHO), Australia and several European countries have recommended guideline levels for recreational exposure to cyanobacteria [ quantify cyanotoxins in designated water recreation sites, and 2) assess the relationship between exposure to cyanobacteria and cyanotoxins in recreational waters and the incidence of reported symptoms.The study population of interest comprised adults and children engaging in recreational activities in enclosed waters (i.e. not marine waters) inhabited to varying degrees by planktonic cyanobacteria. Subjects were recruited over a three-year period from 1999 to 2002 at water recreation sites in southern Queensland and the Myall Lakes area of New South Wales , and northeast and central Florida (USA). 
Recruitment was conducted on 54 separate days, mostly on weekends and holiday periods during the warmer months in order to maximise recruitment efficiency by concentrating on peak-use periods of recreational activity.Entry criteria into the study were twofold:\u2022 Engaging or planning to partake in water-contact activities in the study water body on the day of recruitment \u2013 ascertained by asking \"Is anybody in the vehicle planning to go in the water and get wet here today?\"\u2022 Able to be contacted by phone for follow-up.Study subjects were enrolled at the water sites and asked to complete a self-administered questionnaire before leaving for the day. They were also asked to submit to a telephone follow-up interview to be conducted as soon as practicable after three days from the day of enrolment. The interviewers asked to speak to study subjects within each household individually, i.e. proxy interviewees were discouraged. Exceptions were made in the case of children, where a parent or guardian was asked to decide whether or not their child would participate in the follow-up directly.The questionnaire, follow-up interview form and information letter are available in Stewart [ from management authorities to recruit members of the public into this study was sought and secured for all sites listed in Table Table Water samples for phytoplankton and cyanotoxin analysis were collected by a modified grab sample method. Polypropylene sample bottles were used to collect water at a depth of approximately 70 cm; the modification involved moving the sample bottle up and down in a vertical plane to sample water through the entire column in order to avoid spurious cyanobacteria estimates through sampling only surface water. In an attempt to address temporal and spatial heterogeneity of cyanobacteria profiles within each waterbody, samples were collected from between one and four locations on each recruitment day, depending on the size of the site. Samples were collected in the morning and afternoon. All samples were kept on ice, in darkness, and equal volumes were then pooled prior to leaving the site to form a composite sample. Composite samples were immediately fixed with Lugol's iodine, then stored at 4\u00b0C until examined. Separate water samples were collected for cyanotoxin analysis; these samples were also stored at 4\u00b0C but were not fixed.Sub-surface samples for faecal coliform analysis were collected in 250 mL sterile containers shortly before departing each site; containers were immediately placed on ice, and stored at 4\u00b0C until analysed. Due to logistical issues, faecal coliforms were sampled only when a recruitment visit was followed by a routine working day. Of the 54 study sampling days, coliform sampling was conducted on 21 days .Total phytoplankton analyses were conducted at three separate laboratories due to contractual obligations of the various agencies that funded this work: Queensland Health Scientific Services, Brisbane for all Queensland samples; Australian Water Technologies, West Ryde, NSW (NATA accredited) for all Myall Lakes area samples; CyanoLab, Palatka, Florida for all Florida samples.\u03c0r2 , or S.A. = 2(\u03c0r2) + (2\u03c0r)l where v = cell volume; r = cell radius; l = cell length; S.A. = cell surface area. 
Data for each cyanobacterial taxon were summed, and total cyanobacterial cell surface area was used as the measure of exposure for each recruitment day in subsequent statistical analyses.Cell identification and enumeration at these three centres were conducted by broadly similar methods, using a calibrated counting chamber with phase-contrast microscopy. Cell surface areas were determined by defining cyanobacteria cells as spherical or cylindrical, then measuring cell diameter and length . An appropriate number of cells were measured, and then averaged to give dimensions for each cyanobacterial taxon in each water sample. Cell surface areas were calculated using the formulas S.A. = 4Samples that contained potentially toxic cyanobacteria were analysed for specific cyanotoxins:Microcystis spp, Anabaena spp, Planktothrix spp,\u2022 Microcystins : Anabaena circinalis\u2022 Saxitoxins : Cylindrospermopsis raciborskii, Aphanizomenon ovalisporum\u2022 Cylindrospermopsin: Anabaena spp, C. raciborskii\u2022 Anatoxin-a: (Florida only): et al [et al [Australian samples were analysed at Queensland Health Scientific Services laboratories. Saxitoxins were analysed by high performance liquid chromatography (HPLC) with fluorescence detection using a Shimadzu LC-10AVP system based on the methods of Lawrence l [et al . Cylindrl [et al . In FlorEscherichia coli \u2013 membrane filtration method); Centre for Integrated Environmental Protection, Griffith University, Brisbane, QLD: method # APHA 9222D (APHA membrane filtration method); Forster Environmental Laboratory, Forster, NSW (NATA accredited): method # APHA 9222D; Columbia Analytical Services, Jacksonville, Florida (NELAC accredited): method # SM 9222D (USEPA Standard Method \u2013 membrane filtration method).All faecal coliform samples were analysed within 24 hours following collection. Samples were analysed at the following laboratories: Queensland Health Scientific Services, Brisbane, QLD (NATA accredited): method # AS 4276.7 . In Florida, conductivity was recorded with a DataSonde MP 6600 .2/mL), intermediate (2.4\u201312.0 mm2/mL) and high (>12.0 mm2/mL) based on guidelines from the Queensland Department of Natural Resources and Mines [[Cyanobacterial cell surface area was chosen as the principal exposure variable of interest [ . Once the most parsimonious main-effects model was identified, all two-factor interactions were introduced into the model and stepwise elimination of non-significant terms was undertaken (again based on the model deviance statistic) until the final model was obtained. The final model adjusted for age, sex, smoking and reported prior history of asthma, hay fever or eczema. A second multivariable model was developed for the \"any symptoms\" outcome by excluding subjects who reported exposure at the study waterbody in the five-day period prior to recruitment, as per the work of Pilotto et al . SPSS v1et al , Epi Infet al and Statet al were useThe study entry criteria were met by 3,595 individuals; of these, 402 (11%) refused to participate in the study. Of the 3,193 people who accepted a questionnaire, 1,371 (43%) returned it. Of these, 40 individuals did not complete the follow-up interview for various reasons . The 1,331 subjects with follow-up data thus represented 42% of those who initially accepted a questionnaire. Demographic features of the cohort are shown in Table Table Table Analysis of cyanotoxins in study waters showed that these were infrequently seen and, when seen, were at low levels. 
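Putting together the surface-area formulas and the exposure bands described above, a small illustrative helper is sketched below; the cell counts and dimensions in the example sample are entirely hypothetical.

```python
import math

def cell_surface_area_um2(shape, radius_um, length_um=None):
    """Single-cell surface area (um^2): sphere 4*pi*r^2, cylinder 2*(pi*r^2) + (2*pi*r)*l."""
    if shape == "sphere":
        return 4 * math.pi * radius_um ** 2
    if shape == "cylinder":
        return 2 * math.pi * radius_um ** 2 + 2 * math.pi * radius_um * length_um
    raise ValueError(f"unknown shape: {shape}")

def total_surface_area_mm2_per_ml(taxa):
    """taxa: iterable of (cells_per_ml, shape, radius_um, length_um) for one composite sample."""
    total_um2 = sum(n * cell_surface_area_um2(shape, r, l) for n, shape, r, l in taxa)
    return total_um2 / 1e6                      # 1 mm^2 = 1e6 um^2

def exposure_category(sa_mm2_per_ml):
    """Exposure bands used in the analysis: low (<2.4), intermediate (2.4-12.0), high (>12.0)."""
    if sa_mm2_per_ml < 2.4:
        return "low"
    return "intermediate" if sa_mm2_per_ml <= 12.0 else "high"

# Hypothetical composite sample: 50,000 spherical cells/mL (r = 2 um) plus
# 20,000 cylindrical cells/mL (r = 1.5 um, l = 6 um).
sample = [(50_000, "sphere", 2.0, None), (20_000, "cylinder", 1.5, 6.0)]
sa = total_surface_area_mm2_per_ml(sample)
print(round(sa, 2), exposure_category(sa))
```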
Microcystins were only detected on two occasions, at 1 \u03bcg/L (Doctors Lake) and 12 \u03bcg/L (Lake Coolmunda); cylindrospermopsin was found on seven occasions , but the levels were low at 1 \u03bcg/L and 2 \u03bcg/L. Saxitoxins were not seen in this study, and anatoxin-a was only detected at one Florida site (Lake Seminole) on a single recruitment day, at 1 \u03bcg/L. A statistically significant increase in symptom reporting amongst Florida subjects exposed to anatoxin-a was found by the Fisher-Freeman-Halton test (p = 0.04), but the number of subjects exposed (n = 18) was very low. No relationship was seen between faecal coliform counts in study waters and symptom reporting: G-I symptoms (p = 0.50), respiratory symptoms (p = 0.92) and the pooled \"any symptom\" category (p = 0.96). Therefore we have no evidence that observed variation in symptom reporting could be attributed to differential exposure to enteric pathogens. However, our ability to monitor all recruitment days (mostly conducted on weekends and public holidays) for faecal coliforms was limited because of the 24-hour maximum allowable time between sample collection and testing. The main findings of this work were that individuals exposed to recreational waters from which total cyanobacterial cell surface areas exceeded 12 mm2/mL were more likely to report symptoms, particularly respiratory symptoms, after exposure than those exposed to waters where cyanobacterial cell surface areas were less than 2.4 mm2/mL. The measured effect size was similar but non-significant for ear and cutaneous symptoms, fever and all symptoms after exclusion of subjects with prior site exposure, which suggests that the sample sizes were too small to show significant differences within these categories. No relationship was detected between exposure to intermediate levels of cyanobacteria and symptom reporting. Although the symptom category that appeared to be weighting the pooled \"any symptom\" category was that of respiratory symptoms, from Table et al [et al [This study attempted to improve on some study design weaknesses of previously published work in this field. The control group was recruited at waters known or suspected to be substantially free of cyanobacteria. We were concerned that the control subjects (i.e. non-bathers) in the studies of Pilotto et al and Philet al , Philippet al and Phill [et al might dil [et al . There il [et al . We also measured cyanotoxins in study waters directly by HPLC-based methods. In previous studies cyanotoxins were either not considered or indirect and unquantified measures of cyanotoxin presence were used. However, the cyanotoxins were infrequently seen at study waters and, where seen, were at universally low levels. While we observed a significant increase in symptom reporting amongst Florida subjects exposed to anatoxin-a, the number of subjects exposed was very low, so we were reluctant to draw any conclusions from this finding.
The infrequent presence and low concentrations of cyanotoxins in study waters highlights one of the disadvantages in conducting a prospective cohort study, that cyanobacteria and especially cyanotoxin levels are often dynamic and therefore unpredictable.We chose a biomass estimate \u2013 cell surface area \u2013 to determine exposure to cyanobacteria, rather than the traditional reporting method of cell counts per unit volume of water [ to increase participation in this study, the target population was inherently difficult to capture as most were healthy, young and busily engaged in leisure activities. The relatively low response rate (42%) means that the sample may become less representative of the wider population. The overall response rate also varied across the exposure groups with only 30% of eligible subjects returning questionnaires at high exposure sites compared to 43% and 44% of those at intermediate and low cyanobacteria sites respectively (p < 0.001). This difference was due to a particularly poor response from high exposure sites in Florida (27%). Some peculiar features of these sites in Florida probably contributed to the response rate, e.g. lack of swimming beaches (resulting in over-reliance on subjects using powered watercraft) and increased demand for limited parking spaces [ . Overall, 80% of highly exposed subjects but only 10% of the low exposure group came from Florida. In addition, symptom reporting was considerably lower among Florida respondents than in Australia . Although we adjusted for region in our analyses, any residual confounding by this variable is likely to have weakened the true association. Of note, when we adjusted for important factors in our multivariable models, the symptom effect sizes associated with cyanobacteria exposure were strengthened slightly, suggesting that the associations seen are unlikely to be due to confounding. Although it is impossible to rule out other unknown confounders these would have to be strongly associated with both exposure and symptoms in order to completely explain the effects. We believe it unlikely that such strong confounders exist, nonetheless the possibility remains that unmeasured confounding variables may explain our findings.This study has shown that subjects exposed to high levels of cyanobacteria in recreational waters, as measured by total cell surface area, were more likely to report symptoms following such exposure than subjects exposed to low levels of cyanobacteria. Respiratory symptoms were most evident, and the reported severity of symptoms across all groups was low. Cyanotoxins, when detected in water samples, were present only at low concentrations throughout the course of the study. Further work quantifying the relationship between cyanotoxin levels and health outcomes should be considered. The potential remains for significant morbidity and possibly even mortality associated with recreational exposure to cyanotoxins, these being highly potent water-soluble toxins.APHA American Public Health AssociationG-I Gastro-intestinalHPLC High performance liquid chromatographyHPLC-MS/MS HPLC + tandem mass spectrometryNATA National Association of Testing Authorities, AustraliaNELAC National Environmental Laboratory Accreditation Conference (USA)OR Odds ratioUSEPA U.S. Environmental Protection AgencyWHO World Health OrganizationJWB was the director of CyanoLab and a former employee of St Johns River Water Management District at the time of field recruitment in Florida. 
No other authors have any competing interests.IS, PMW and GRS initiated the study conception and design. IS conducted field recruitment, water sample collection, data entry and manipulation and drafted the manuscript. IS and LEF conducted follow-up interviews. IS and PJS conducted statistical analyses. JWB, LEF, MG and LCB were involved in planning, logistics and site selection for recruitment of Florida subjects. GRS, PMW, PJS and LEF supervised the project. All authors participated in redrafting the manuscript. All authors read and approved the final manuscript.The pre-publication history for this paper can be accessed here:"} +{"text": "Several studies have reported significant health effects of air pollution even at low levels of air pollutants, but in most of theses studies linear nonthreshold relations were assumed. We investigated the exposure\u2013response association between ambient particles and mortality in the 22 European cities participating in the APHEA project, which is the largest available European database. We estimated the exposure\u2013response curves using regression spline models with two knots and then combined the individual city estimates of the spline to get an overall exposure\u2013response relationship. To further explore the heterogeneity in the observed city-specific exposure\u2013response associations, we investigated several city descriptive variables as potential effect modifiers that could alter the shape of the curve. We conclude that the association between ambient particles and mortality in the cities included in the present analysis, and in the range of the pollutant common in all analyzed cities, could be adequately estimated using the linear model. Our results confirm those previously reported in Europe and the United States. The heterogeneity found in the different city-specific relations reflects real effect modification, which can be explained partly by factors characterizing the air pollution mix, climate, and the health of the population. Many epidemiologic studies in recent years have documented adverse effects of ambient particulate matter (PM) concentrations on mortality . The indRecently, multicity national or international programs have provided results based on data from many cities . CombineIn the United States, several multicity studies have explored the exposure\u2013response association between particulate air pollution and mortality . A lineaOne key limitation of these European studies has beenInternational Classification of Diseases, 9th Revision that studied health effects of air pollution. Data were collected on daily counts of all-cause mortality , \u03d5 being the overdispersion parameter; xitc is the value of the xi meteorologic ovariate on day t at city c;Ptc is the air pollution level on day t at city c;fc is the function defining the exposure\u2013response relation between the pollutant and the health outcome; and \u03b2oc represents the baseline mortality in city c. The smooth functions s capture the nonlinear relationship with covariates and can be defined as a linear combination of a set of functions {bj} with convenient properties; that is, s = \u2211jajbj , which allow non-parametric smooth functions to control for possible confounders, was a standard approach on air pollution time series analysis. Recently, k in Equation 1 denotes the number of basis functions used for the corresponding variable fit. 
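As a rough stand-in for Equation 1, the sketch below fits one city's daily series with regression splines of fixed basis dimension and a Pearson-chi-square dispersion estimate. It uses unpenalized B-splines rather than the penalized splines of the actual analysis, and the column names (deaths, pm10, t, temp, humid, dow, holiday, influenza) are invented.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_city_model(df, k_time=40, k_weather=10):
    """Quasi-Poisson regression-spline fit for one city's daily mortality series."""
    formula = (
        f"deaths ~ pm10 + bs(t, df={k_time}) + bs(temp, df={k_weather})"
        f" + bs(humid, df={k_weather}) + C(dow) + holiday + influenza"
    )
    model = smf.glm(formula, data=df, family=sm.families.Poisson())
    # scale='X2' estimates the dispersion from Pearson's chi-square, an allowance
    # for overdispersion in the spirit of the phi parameter above.
    return model.fit(scale="X2")
```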
The choice of a small number of basis functions can have a substantial effect on the final model, because it places an upper bound on how variable the solution can be. Given our experiences from the previous analyses of the APHEA-2 data, we chose the number of basis functions (k) to be 40 for the time variable and 10 for the weather variables. We then chose the smoothing parameters that minimized the absolute value of the sum of partial autocorrelations (PACs) of the residuals from lags 3 to 30 days. The choice of lags was based on the fact that in mortality health outcomes there was usually strong remaining PAC in the first two lags of the residuals, which could influence the sum disproportionally. To account for serial correlation in the cases that it remained in the final model residuals, we added autoregressive terms into the model, based on the methodology described by We followed the general methodologic guidelines developed within the framework of the APHEA-2 project, described in detail elsewhere . The basDay of the week effects, holidays, and epidemics were controlled for by using dummy variables. We used the APHEA-2 method for influenza control, including a dummy variable taking the value of one when the 7-day moving average of the respiratory mortality was greater than the 90th percentile of its city-specific distribution. Because influenza control as described was based on the distribution of respiratory mortality, we included the influenza dummy variable only when we analyzed total and cardiovascular mortality. Based on previously published results , there if in Equation 1. The regression cubic spline function of a variable P is on city-specific covariates (Zc) to obtain the overall exposure\u2013response vector of the five spline estimates in each city c (the intercept term in Equation 2 was ignored because only relative risks are considered); Zc is a 5 \u00d7 5p matrix, where p is the number of city level covariates for city c (including the intercept); \u03b1 is the vector of regression coefficients to be estimated; \u03b4c is a vector of five random effects associated with city c representing, for each spline estimate, the city\u2019s deviation from the overall model; and \u025bc (assumed independent from \u03b4c) is the vector of sampling errors within each city.where \u03b2c) = D represents the within-city covariances of the random effects capturing determinants of the city-specific regression coefficients other than sampling error and the city-level covariates considered. It is assumed that \u03b4c follows the multivariate normal distribution (MVN) with mean 0 and variance-covariance matrix D\u2014that is, \u03b4c ~ MVN , and \u025bc ~ MVN , \u03b2c ~ MVN where Sc is the covariance matrix of the five regression coefficients of the spline function in city c that is estimated in the first stage of the analysis. When D \u2248 0 we get the corresponding fixed effects estimates, whereas when D \u2260 0 we get the random effects estimates.The 5 \u00d7 5 matrix cov to compaAs an alternative way to compare the two approaches\u2014the linear and spline models\u2014we computed the difference between the deviances of the fitted models. This difference follows a chi-square distribution with degrees of freedom the difference in the degrees of freedom of the fitted models. 
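The deviance comparison just described amounts to the following for a single city, where the degrees of freedom are those used by each fitted model; the names are illustrative only.

```python
from scipy.stats import chi2

def deviance_test(dev_linear, df_linear, dev_spline, df_spline):
    """Chi-square test on the drop in deviance between the linear and spline fits."""
    diff = dev_linear - dev_spline          # spline model has the smaller deviance
    ddf = df_spline - df_linear             # extra degrees of freedom used by the spline
    return diff, ddf, chi2.sf(diff, ddf)    # p-value for the improvement over linearity
```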
For an overall comparison of the different models, we computed the sum of the city-specific differences in deviance, which again follows the chi-square distribution with degrees of freedom the sum of the city-specific difference in the degrees of freedom.There was significant heterogeneity for all pollutant\u2013mortality relationships under investigation. Although the observed heterogeneity was either explained or substantially reduced when we investigated the effect modification patterns, all results presented are from the random effects models for consistency reasons. When there was no significant heterogeneity left, results from the fixed-effects models were almost identical to those obtained under the random effects models.10 and total, cardiovascular, and respiratory mortality and their 95% confidence intervals (CIs). Not all cities have values for the pollutant at both ends of the distribution, which is obvious from the wide CIs in the end points of the data. Excluding Stockholm, Sweden, from the analysis, which is the city with the lowest values, the resulting curves were almost identical. Within the range of 36 to 83 \u03bcg/m3\u2014that is, the common range of the pollutant levels across the analyzed cities\u2014the combined exposure\u2013response curves could be adequately approximated by a linear association. Although all three curves are similar in that range, a steeper slope is indicated for cardiovascular mortality. Overall, for total and cardiovascular mortality, the spline curves are roughly linear, consistent with the absence of a threshold. The curve for respiratory mortality suggests that a threshold model might be reasonable. The downward curve for the exposure\u2013response relationship between respiratory mortality and PM10 in the lower end of the distribution of the pollutant is also evident in most of the city-specific exposure\u2013response curves. In the case of total or cardiovascular mortality, this shape is evident in only about five (out of the 22) cities, whereas a linear or logarithmic shape is evident in about half of the analyzed cities. Based on the estimated overall exposure\u2013response curves, an increase from 50 to 60 \u03bcg/m3 is associated with an increase of about 0.4% in total deaths and with increases of about 0.5% in both cardiovascular and respiratory deaths. These are consistent with the results from regressions assuming a linear relation giving an estimated increase of about 0.5% for total mortality and 0.7% for cardiovascular and respiratory mortality, for a 10-\u03bcg/m3 increment in PM10.10, the spline curves are roughly linear, consistent with the absence of a threshold. In the case of BS, though, the association is steeper between respiratory mortality and the pollutant. This is consistent also with the results assuming a linear association, which indicate a higher increase for respiratory mortality. The bump in the exposure\u2013response relation between respiratory mortality and PM10 is not so apparent in the case of BS. Nevertheless, in the lower end of the distribution of the pollutant this association shows a small curvature not observed with the other two outcomes; hence, there is suggestion of a possible threshold.We examined the hypothesis of linearity in the pollutant\u2013mortality relation more formally by comparing the AIC values obtained under the linear and the spline models. In all cases, both models gave very similar AIC values. 
Overall the linear model gave a slightly better fit, because the AIC was lower by about 0.1% in all pollutant\u2013mortality combinations. On the other hand, the deviance under the spline model was smaller. In all pollutant\u2013mortality relations, apart from respiratory mortality and BS for which no significant departures from linearity were observed, the overall difference in the deviance between the linear and the spline models was statistically significant, whereas the great majority of the city-specific differences in the deviance of the two models was not statistically significant and in accordance with the findings from the AIC.3, and the results were largely similar to the ones presented.We further tested the sensitivity of the results to the number and location of the knots of the spline specification. We re-ran the analysis by specifying one knot at 40 \u03bcg/m10 and respiratory mortality, we applied threshold models with a threshold level at 20 \u03bcg/m3, because this was indicated by the pooled spline curves. The model comparisons between the linear and the threshold models, based on both the AIC and the difference in the deviance, always chose the linear exposure\u2013response model.To further explore the indication of a threshold, especially in the case of the association between PM3), we also fitted threshold models after excluding data at concentrations > 50 \u03bcg/m3. We tried two threshold models defining the threshold level at 20 and 10 \u03bcg/m3 because those were indicated by our spline analysis. In any case, the linear models gave a better fit.To contribute to the ongoing discussion on whether there is a threshold below current limit values ranged from 26 \u03bcg/m3 in Stockholm to 94 \u03bcg/m3 in Milan, Italy, and the standardized mortality rate ranged from 430 in Tel Aviv to 1,231 in Lodz, Poland (2 24 hr levels differed between the southern and other cities. The highest correlations (Spearman r = 0.86) were observed between temperature and mean NO2 24-hr levels in the cities that provided BS data.We investigated the observed heterogeneity by taking into account the potential effect modifiers through second stage regression models. Potential effect modifiers used in the APHEA-2 analysis included variables describing the air pollution level and mix in each city, the health status of the population, the geographic area, and the climatic conditions . We pres, Poland . The temEach of the presented effect modifiers explained in most cases > 20% of the observed heterogeneity. We present the exposure\u2013response curves as observed in the three distinct geographic regions included in the analysis . We also present the exposure\u2013response curves as shaped for cities with corresponding levels of the presented effect modifier equal to the 25th and the 75th percentile of the distribution of the relevant effect modifier.10 and total mortality. The exposure\u2013response curves for the western and southern cities are similar, although the latter is steeper. The corresponding curve for the eastern cities is very steep in the lower end of the pollutant distribution\u2014that is, at levels < 30 \u03bcg/m3. However, the minimum value for the pollutant in those areas is 10 \u03bcg/m3, so in fact the part of the curve below that point is an extrapolation, whereas between 10 and 30 \u03bcg/m3 only a small proportion of the total data contribute to the estimation, making estimates unstable. 
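The second-stage effect-modification analysis can be caricatured as a weighted regression of city-specific coefficients on a city-level covariate. The sketch below is a deliberately simplified univariate, fixed-effects version; the actual model pools five spline coefficients per city and estimates a between-city covariance matrix D.

```python
import statsmodels.api as sm

def meta_regression(beta_c, var_c, z_c):
    """Regress city-specific first-stage coefficients on a city-level effect modifier,
    weighting each city by the inverse of its sampling variance."""
    X = sm.add_constant(z_c)
    fit = sm.WLS(beta_c, X, weights=1.0 / var_c).fit()
    return fit.params, fit.bse   # [intercept, modifier effect] and their standard errors
```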
The remaining effect modification patterns indicate that the effect of the pollutant on mortality is greater in areas with higher temperature and mean NO2 (24-hr) levels, and lower standardized mortality rate. These results are in agreement with those observed when a linear association of PM10 and total mortality is assumed distribution is steeper from the level of 50 \u03bcg/m3 until the level of approximately 150 \u03bcg/m3, whereas in the range from 20 to 50 \u03bcg/m3 the slope of the curve corresponding to the 75th percentile is steeper. The curves corresponding to the effect modification by temperature levels are similar, although, as before, in the lower level of the pollutant distribution the slope corresponding to higher temperature is steeper, and in the higher level of the pollutant the slope corresponding to lower temperature is steeper. The effect modification pattern of the standardized mortality rate indicates a steeper slope for higher ratios, except for the range of the pollutant from about 20 to 50 \u03bcg/m3, where the slope corresponding to lower ratios is steeper.When we investigated the heterogeneity of the relation between PM10. Apart from the edges, the exposure\u2013response curves for the western and eastern cities are similar, although the latter is slightly steeper. The corresponding curve for the southern cities indicates the strongest effect of the pollutant on mortality. The other effect modification patterns indicate that the effect of the pollutant on mortality is greater in areas with higher temperature levels and mean NO2 (24-hr) levels and lower standardized mortality rates. These results are in agreement with those observed when a linear association of BS and total mortality is assumed distribution was steeper up to approximately 30 \u03bcg/m3, and above that the slope of the curve corresponding to the 75th percentile was steeper. Similarly, the curves corresponding to the effect modification by temperature levels indicated that in the lower level of the pollutant distribution the slope corresponding to lower temperature was steeper and in the higher level of the pollutant the slope corresponding to higher temperature was steeper. The effect modification pattern of the standardized mortality rate indicated a steeper slope for higher rates.When we investigated the heterogeneity in the BS\u2013respiratory mortality association, the curvature observed in the lower end of the overall exposure\u2013response curve of PMortality was alsoIn recent years there has been growing demand from policy makers for better understanding of the exposure\u2013response relationship between air pollution and various adverse health effects, including mortality. Most of the relevant studies in Europe were carried out within a small number of locations and consequently have limited statistical power to provide evidence in support of a particular model. We used the most extensive database available in Europe until today to inves10 and BS with total and cardiovascular mortality are roughly linear, consistent with the absence of a threshold. The curve for respiratory mortality suggests that there is some evidence for deviation from linearity in the lowest levels of the pollutants distribution.We used cubic splines to estimate nonlinear relations of particulate air pollution with mortality. Our results and 3 inThere was significant heterogeneity in all associations under investigation. 
However, the chi-square test applied for the investigation of heterogeneity has very high power when many studies are included in the meta-analysis, and especially when these studies are large, as in our case . Formal It is well understood that the measured particle indicators represent a mixture, with varying chemical and physical characteristics, reflected on different toxicity of parts of this mixture. Similarly, the populations studied in our analysis consist of subgroups with different sensitivity to PM exposure. It is likely that the exposure profile and sensitivity of each subgroup result in various thresholds of effects that cannot be identified with this methodology. The linear curve resulting from our analysis may be seen as a composition of these postulated \u201cpartial\u201d curves and may be used effectively for the protection of the whole population. Clearly, more research is needed to identify the most dangerous components of the PM mixture and the most sensitive population subgroups. On the other hand, the biologic mechanisms underlying the PM\u2013health outcome associations are not yet completely clear.10 measurements represent all particles with aerodynamic diameter < 10 \u03bcm, a mixture of primary and secondary particles from different sources with varying characteristics and levels of toxicity. Unfortunately, the present study does not have enough information to sufficiently investigate this possibility.The curvature of the exposure\u2013response relationship between ambient particles and respiratory mortality in the lower levels of the pollutants, not so strongly observed for total and cardiovascular mortality, suggests that there may be different mechanisms underlying the association of particulate pollution exposure to different mortality health outcomes. 3 (and > 10 \u03bcg/m3 where there is enough information). This is consistent from the results from 10 U.S. cities analyzed by Nevertheless, in the range of the pollutants common to all the cities included in the analyses, all associations were approximately linear. The above results are consistent with those reported in previous studies in Europe and in tFormal comparison between threshold and linear models, based either on the AIC or on the deviance chi-square test, showed that linear models would on average fit better than the threshold ones.2 (24-hr) levels, and lower standardized mortality rates. The effect of NO2 suggests that particles originating from vehicle exhausts are more toxic than those from other sources. A possible explanation for the temperature effect on the exposure\u2013response association may be that in warmer countries, outdoor fixed-site air pollution measurements may represent the average population exposure better than the measurements in colder climates, because people tend to keep their windows open and spend more time outdoors in warmer climates. Finally, in this study a large age-standardized mortality rate was related to a smaller proportion of elderly persons and probably to the presence of competing risks for the same disease entities. It is therefore related to a smaller proportion of people belonging to vulnerable groups who are more susceptible to air pollution effects. 
The above-reported effect modification patterns are in accordance with the corresponding ones when a linear pollutant\u2013mortality association was assumed levels present steeper slopes.When we investigated the relation with respiratory mortality, the exposure\u2013response curves were steeper in Eastern European cities. The effect modification patterns between ambient particles and respiratory mortality are less clear and need further investigation. In the range of the pollutants common in all analyzed cities, the exposure\u2013response curves are steeper in eastern European cities. Also, in cities with higher standardized mortality rates, the slopes were steeper. These findings supplement each other, because in the cities included in our analysis, all eastern cities had high standardized mortality rates. The effect on the particles\u2013respiratory mortality association of the remaining potential effect modifiers investigated is analogous to the ones observed in the cases of total and cardiovascular mortality. Namely, in the range of the pollutants most commonly observed, cities with higher temperatures and mean NO3, if true, is likely to reflect differences in the mixture and toxicity at different levels. Further study focusing on the composition of particles is needed to further our understanding of the etiologic mechanism through which particles affect mortality and particularly respiratory mortality.In conclusion, the association between ambient particles and mortality in the cities included in the present analysis could be adequately estimated using the linear model. Our results confirm those previously reported from Europe and the United States. The heterogeneity found in the different city-specific relations reflects real effect modification, which can be explained partly by factors characterizing the air pollution mix, climate, and the health of the population. Hence, measures that focus on lowering air pollution concentrations have greater public health benefits than those that focus on a few days with the highest concentrations . The ten"} +{"text": "Studies of the effects of air pollutants on birth weight often assess exposure with networks of permanent air quality monitoring stations (AQMSs), which have a poor spatial resolution.We aimed to compare the exposure model based on the nearest AQMS and a temporally adjusted geostatistical (TAG) model with a finer spatial resolution, for use in pregnancy studies.2) levels and of their association with birth weight.The AQMS and TAG exposure models were implemented in two areas surrounding medium-size cities in which 776 pregnant women were followed as part of the EDEN mother\u2013child cohort. The exposure models were compared in terms of estimated nitrogen dioxide , \u221275 to 1 g] for the nearest-AQMS model and of \u221251 g for the TAG model. The association was less strong for women living within 5 or 1 km of an AQMS.The correlations between the two estimates of exposure during the first trimester of pregnancy were The two exposure models tended to give consistent results in terms of association with birth weight, despite the moderate concordance between exposure estimates. We compared these models in terms of estimated NOThis study was conducted in a subgroup of the French EDEN mother\u2013child cohort. Pregnant women at < 26 weeks of gestation were recruited from the maternity wards of Poitiers and Nancy university hospitals (France) between September 2003 and January 2006. 
Gestational age was assessed from the date of the last menstrual period . Exclusi2 around Nancy and the other of 315 km2 around Poitiers, in which air quality measurement campaigns have been conducted. We then further restricted the study area to the immediate vicinity of an AQMS, focusing on circular buffers with a radius of 5, 2, and 1 km around each AQMS , to obtain our exposure estimate E1ij,\u0394t.We obtained air pollution data from the Airlor (Nancy) and Atmo-Poitou-Charentes (Atmo-PC)(Poitiers) AQMS networks. All permanent AQMS measuring NOcy area) , excludicy area) or indus2 measurement campaigns with a Palmes diffusive sampler .NO sampler were con sampler . In each sampler on a 50 sampler . This co2 concentrations were then combined with time-specific measurements from the permanent AQMS to capture temporal variations in concentrations. This approach has previously been used in the context of land use regression (LUR) models and also over the year in which the measurement campaign was performed . The ratioThe estimated annual NO) models . The hou2 exposure E2i\u0394t for woman i was the product of the spatial and temporal components, orwas the temporal component of the model. The temporally adjusted estimate of NOFor each model, we assessed the relative contribution of spatial variations in exposure contrasts by Pearson\u2019s correlation coefficient between the exposure estimate and its spatial component. We also carried out variance decomposition. The nearest-AQMS model could be broken down as the mean level of exposure of all women during the time window \u0394t, and Sij the NO2 concentration at AQMS j averaged over the entire study period, so as to obtain a spatial component dependent solely on the address of the woman. This corresponded to our estimate of the spatial component of the AQMS model; E1ij, \u0394t \u2013 Sij corresponded to our estimate of the temporal component of the model. The TAG model was log-transformed and expressed aswith for the variance analysis. These analyses were restricted to women who did not change address during pregnancy.r). The distributions of the exposures estimated by the nearest-AQMS model and by the TAG model were plotted as a function of the AQMS closest to woman\u2019s home address, with and without excluding the AQMS located in the city center. We also assessed the concordance between the estimates generated by the two models, classified into tertiles, by determining percentage concordance and the \u03ba coefficient. Bland\u2013Altman plots were used to estimate the magnitude of the systematic error between the two exposure models . Mean birth weight was 3,284 g . 2 were higher in Nancy than in Poitiers, whatever the exposure model and exposure window considered than with its temporal component . For both models, exposure estimates throughout pregnancy were subject to strong spatial variation , rather than in the periurban areas. Indeed, the exposure distributions for the two models became more similar when we did not take into account city-center AQMS measurements between the two exposure models were fair (0.40\u20130.74) when we considered all the women living within 5 km of an AQMS . For the AQMS model, the parameter quantifying the association between NO2 exposure and birth weight approached zero as buffer size increased. 
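To make the exposure definitions concrete, the sketch below computes a temporally adjusted geostatistical (TAG) estimate for a single woman. Because the equations above are partly truncated, the temporal component is assumed here to be the ratio of the AQMS mean over the exposure window to the AQMS mean over the year of the measurement campaign; the function name, arguments and the use of a single AQMS series are illustrative simplifications rather than the authors' implementation.

```python
import pandas as pd

def tag_exposure(spatial_no2: float,
                 aqms_no2: pd.Series,
                 window_start: str,
                 window_end: str,
                 campaign_year: int) -> float:
    """Temporally adjusted NO2 estimate for one woman.

    spatial_no2  : campaign-based (annual) NO2 estimate at her home address
    aqms_no2     : daily NO2 series from the permanent AQMS (DatetimeIndex)
    window_*     : exposure window, e.g. a trimester of pregnancy
    campaign_year: year in which the diffusive-sampler campaign was run
    """
    temporal = (aqms_no2.loc[window_start:window_end].mean()
                / aqms_no2[aqms_no2.index.year == campaign_year].mean())
    return spatial_no2 * temporal

# Hypothetical call:
# e2 = tag_exposure(28.5, aqms_series, "2004-02-01", "2004-04-30", 2004)
```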
We obtained similar results if we made no adjustment for city center (data not shown).The patterns of association with birth weight identified were similar for the two exposure models, in terms of estimates of adjusted effects and confidence intervals (CIs), although these associations were stronger for the nearest-AQMS model [2 exposure assessed with a TAG model and birth weight, and to compare this model with the more commonly used approach based on permanent AQMSs. We compared models in terms of both exposure estimates and association with birth weight. The nearest-AQMS model was influenced by the location of monitors. Variations in exposure were mostly attributable to spatial rather than temporal variations in both models, with temporal variation making a larger overall contribution to total variation in the TAG model than in the nearest-AQMS model. The concordance between NO2 exposure estimates with the two models was fair when we considered the 5-km buffer. This concordance was stronger if we restricted the analysis to women living closer to an AQMS. When we coded exposure as a continuous term, associations with birth weight for the TAG model were consistent with those obtained in analyses based on exposure estimated from the nearest-AQMS model, for the various buffers around AQMS and exposure windows.Our study is one of the first to describe associations between NOThe TAG model is thought to have a better spatial resolution than the nearest-AQMS model, because of the use of data from fine measurement campaigns, with no loss of temporal resolution, because we seasonalized TAG exposure estimates on the basis of AQMS measurements. The stronger contribution of the spatial component in the nearest-AQMS model than in the TAG model may at first glance appear counterintuitive, because the AQMS model could be considered to be essentially based on temporal variations. However, this finding may be accounted for by the considerable variation of the concentrations obtained with different AQMSs, some of which (in the city center) were influenced by traffic, despite meeting the criteria for background stations. This illustrates the extent to which the nearest-AQMS estimates depend on the location of the monitors, and the need for exposure models with a finer spatial resolution in studies with medium- or long-term exposure windows (3\u20139 months in our study). Because passive samplers were located at background sites less affected by traffic, the TAG approach led to a more purely background model than did the AQMS approach. The higher concentrations estimated by the nearest-AQMS model than by the TAG model may be aOne possible limitation of the TAG model stems from the approach used to seasonalize this model, in which we assumed that spatial differences in exposure remained constant over time. This assumption was found to be reasonable for a LUR model developed in Rome but may r = 0.61, \u03ba = 0.42) or a dispersion model . The concordance obtained with the LUR model was similar to that observed in our study with the TAG model for a 5-km buffer around the AQMS. However, Marshall et al.\u2019s study is not directly comparable with ours because they used a larger buffer zone (10 km) and because the LUR and dispersion models incorporated all local sources of pollution, whereas our TAG model did not.Several studies have evaluated the performance of AQMS for estimating exposure to air pollutants. 2 concentration and birth weight. 
We obtained higher levels of concordance between the models if we focused on women living within 2 km of a monitor, and higher still for women living within 1 km of a monitor. Associations between NO2 levels and birth weight, although not statistically significant at the 5% level, tended to be stronger for the 2-km buffer around the AQMS than for the 5-km buffer from a monitor . Our resm buffer . The fin2 have reported larger decreases in birth weight for exposure in the first and third trimesters of pregnancy (Most previous studies considering the effects of NOregnancy than in regnancy . We obseregnancy , but non2 exposure and fetal growth when they used an AQMS-based approach, but no association when they used an LUR model. They considered women living up to 10 km away from an AQMS, and the AQMS-based model corresponded to an inverse-distance weighting index, taking into account the three closest stations within 50 km.It is generally difficult to predict the impact of an error in an exposure variable in terms of the potential for bias in the exposure\u2013response relationship . However2 concentrations based on data from the nearest AQMS may entail large errors in estimated exposure, but that in some instances these errors have little impact on the exposure\u2013birth weight relationship. The amplitude of exposure misclassification in AQMS-based models and of the resulting bias may be limited by restricting the size of the study area around each AQMS considered. Full quantification of the exposure error for each model would require consideration of the temporal and spatial activities of each subject. Our study cannot be interpreted as providing clear evidence that the nearest-AQMS approach yields unbiased estimates of the association between NO2 concentrations and fetal growth. This question requires further consideration in other cohorts and in other countries, in which the siting of permanent monitors may follow different rules.Our study indicates that models of exposure to background NO"} +{"text": "The objective of most biomedical research is to determine an unbiased estimate of effect for an exposure on an outcome, i.e. to make causal inferences about the exposure. Recent developments in epidemiology have shown that traditional methods of identifying confounding and adjusting for confounding may be inadequate.The traditional methods of adjusting for \"potential confounders\" may introduce conditional associations and bias rather than minimize it. Although previous published articles have discussed the role of the causal directed acyclic graph approach (DAGs) with respect to confounding, many clinical problems require complicated DAGs and therefore investigators may continue to use traditional practices because they do not have the tools necessary to properly use the DAG approach. The purpose of this manuscript is to demonstrate a simple 6-step approach to the use of DAGs, and also to explain why the method works from a conceptual point of view.Using the simple 6-step DAG approach to confounding and selection bias discussed is likely to reduce the degree of bias for the effect estimate in the chosen statistical model. The objective of most biomedical research, whether experimental or observational, is to predict what will happen to an outcome if the treatment is applied to a group of individuals or if a harmful exposure is removed. In other words, the clinician/policy maker is interested in making causal inferences from the results of a study. 
The purpose of this manuscript is to demonstrate a simple 6-step algorithm for determining whether a proposed set of covariates would reduce possible sources of bias when assessing the total causal effect of a treatment on an outcome.There are many nuances to the definition of cause. For the purposes of this manuscript, we define it in counterfactual terms: \"Had the exposure differed, the outcome would have differed\", where exposure or outcome may be dichotomous or continuous . Further refinements into sufficient, complementary and necessary causes are impoThere are many features of a study that can lead to inappropriate causal inference. For the purposes of this discussion, we assume \"ideal\" processes for the study . Under \"ideal\" conditions, inappropriate causal inferences (i.e. biases) are more likely to occur in observational studies compared to randomized trials because some subjects may be exposed to a treatment for a condition specifically because of personal factors that are related to prognosis figure . Under tThe traditional approach to confounding is to 'adjust for it' by including certain covariates in a multiple regression model (or by stratification). One common practice is to consider a covariate to be a confounder (and \"adjust\" for it) if it is associated with the exposure, associated with outcome, and changes the effect estimate when included in the model. According to standard textbooks, additional criteria also need to be applied and the covariate should not be affected by exposure and needs to be an independent cause of the outcome . HoweverOne method to help understand whether bias is potentially reduced or increased when conditioning on covariates is the graphical representation of causal effects between variables. In the causal directed acyclic graph (DAG) approach, an arrow connecting two variables indicates causation; variables with no direct causal association are left unconnected. Therefore the bi-directional arrows in figure Although other articles have previously described the DAG approach to confounding ,12,13, tBy applying the following simple 6-step process correctly, we will show how including only 2 covariates in a complicated causal diagram figure is likel1) and tissue weakness (Z2) would minimize bias in the estimate of the effect of warming up on injury if this is the true causal diagram. We will later discuss how to approach the more general problem when multiple causal diagrams are possible. As with any analytic approach to bias in an observational study (including the one below), we must make some assumptions regarding how variables are causally related to each other; we seek to determine whether our analytic approach would succeed under these assumptions. The algorithm we describe below only works if the DAGs are drawn so that they include all variables that cause two or more other variables shown in the DAG )The DAG approach is not a statistical technique that yields an estimate of effect. However, it will allow users of traditional stratification and regression techniques to reduce the magnitude of the bias in the estimate. Although researchers should generally not adjust for a covariate (or a marker for a covariate) that lies along a causal pathway when assessing the total causal effect, this may not be the case for researchers interested in decomposing total causal effects into direct and indirect effects. 
In these cases, one may sometimes need to include covariates that lie along the causal path, but this is a process that needs to be carefully thought out or incorrect inferences may occur ,27. We aThe traditional approach to confounding bias by determining only associations and avoiding discussions related to causation is problematic and has led to inappropriate data analysis and interpretation ,13. The The authors declare that they have no competing interests.A short summary of the Six-Step Process Towards Unbiased EstimatesStep 1. The covariates chosen to reduce bias should not be descendants of XStep 2. Delete all variables that satisfy all the following criteria: 1) non-ancestors of X, 2) non-ancestors of the outcome and 3) non-ancestors of the covariates that one is including in the model to reduce bias.Step 3. Delete all lines emanating from X.Step 4. Connect any two parents sharing a common child.Step 5. Strip all arrowheads from lines.Step 6. Delete all lines between the covariates in the model and any other covariatesInterpretation: If X is dissociated from the outcome after Step 6, then the statistical model chosen (i.e. one that includes only the chosen covariates) minimizes the bias of the estimate of X on the chosen outcome.Parent: A parent is a direct cause of a particular variable.1. Ancestor: An ancestor is a direct cause (i.e. parent) or indirect cause (e.g. grandparent) of a particular variable.2. Child: A child is the direct effect of a particular variable, i.e. the child is a direct effect of the parent.3. Descendant: A descendant is a direct effect (i.e. child) or indirect effect (e.g. grandchild) of a particular variable.4. Common Cause: A common cause is covariate that is an ancestor of two other covariates.1. Common Effect : A common effect is a covariate that is a descendant of two other covariates. The term collider is used because the two arrows from the parents \"collide\" at the node of the descendant.2. Conditioning: Conditioning on a variable means that one has used either sample restriction or stratification/regression to examine the association of exposure and outcome within levels of the conditioned variable. Other terms often used such as \"adjusting for\" or \"controlling for\" suggest an interpretation of the statistical model that is sometimes misleading and therefore we prefer the word conditioning.3. Unconditional Association: If knowing the value of one covariate provides information on the value of the other covariate without conditioning on any other variable, the two variables are said to be unconditionally associated. This is also known as marginal statistical dependence and its absence as marginal statistical independence.4. Conditional Association: If knowing the value of one covariate provides information on the value of the other covariate after conditioning on one or more covariates (i.e. within any level of the conditioned covariate(s)), the two variables are said to be conditionally associated. This is also known as conditional statistical dependence and its absence as conditional statistical independence.5. Confounding bias: occurs when there is a common cause of the exposure and outcome that is not \"blocked\" by conditioning on other specific covariates.1. Selection bias: occurs when one conditions on a common effect such that there is now a conditional association between the exposure and the outcome.2. Both IS and RWP contributed to the development of ideas and the writing of the manuscript. 
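Because the six steps summarised above are purely graphical operations, they can be automated. The sketch below is one possible implementation built on the networkx package; the node names in the usage example, and the convention that every variable is retained together with its ancestors in Step 2, are assumptions of this illustration rather than part of the original manuscript.

```python
import networkx as nx

def sufficient_adjustment(dag: nx.DiGraph, exposure, outcome, covariates) -> bool:
    """Apply the six-step check; True means the covariate set leaves the
    exposure disconnected from the outcome, i.e. bias is minimised under
    the assumed DAG."""
    covariates = set(covariates)
    # Step 1: the covariates must not be descendants of the exposure X.
    if covariates & nx.descendants(dag, exposure):
        return False
    # Step 2: keep X, the outcome, the covariates and all of their ancestors.
    keep = {exposure, outcome} | covariates
    for node in list(keep):
        keep |= nx.ancestors(dag, node)
    g = dag.subgraph(keep).copy()
    # Step 3: delete all lines emanating from X.
    g.remove_edges_from(list(g.out_edges(exposure)))
    # Steps 4 and 5: connect any two parents sharing a common child, then
    # strip arrowheads (done here on an undirected copy).
    moral = g.to_undirected()
    for child in g.nodes:
        parents = list(g.predecessors(child))
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                moral.add_edge(parents[i], parents[j])
    # Step 6: delete all lines between the model covariates and anything else.
    for z in covariates:
        moral.remove_edges_from(list(moral.edges(z)))
    # Interpretation: X dissociated from the outcome -> bias is minimised.
    return not nx.has_path(moral, exposure, outcome)

# Hypothetical example (Z1 causes both exposure X and outcome Y, and X causes Y):
# g = nx.DiGraph([("Z1", "X"), ("Z1", "Y"), ("X", "Y")])
# sufficient_adjustment(g, "X", "Y", ["Z1"])   # True: conditioning on Z1 suffices
# sufficient_adjustment(g, "X", "Y", [])       # False: the back-door path remains
```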
All authors have read and approved the final manuscript.The pre-publication history for this paper can be accessed here:"} +{"text": "We suggest that the need to account for systematic error may explain the apparent lack of agreement among studies of maternal dietary methylmercury exposure and neuropsychological testing outcomes in children, a topic of ongoing debate.These sensitivity analyses address the possible role of systematic error on reported associations between low-level prenatal exposure to methylmercury and neuropsychological test results in two well known, but apparently conflicting cohort studies: the Faroe Islands Study (FIS) and the Seychelles Child Development Study (SCDS). We estimated the potential impact of confounding, selection bias, and information bias on reported results in these studies using the Boston Naming Test (BNT) score as the outcome variable.Our findings indicate that, assuming various degrees of bias (in either direction) the corrected regression coefficients largely overlap. Thus, the reported effects in the two studies are not necessarily different from each other.in utero methylmercury exposure at levels reported in the FIS and SCDS.Based on our sensitivity analysis results, it is not possible to draw definitive conclusions about the presence or absence of neurodevelopmental effects due to The potential effect of children's low-level exposure to methylmercury in the environment is a complex research issue that continues to receive considerable attention from researchers, government agencies, and the public . The US This lack of consistency among studies and particularly the discrepancy between the Seychelles Child Development Study (SCDS) and the Faroe Islands Studies (FIS) was noted in several previous publications ,9. HowevCurrent methodological literature emphasizes the importance of estimating, as opposed to merely acknowledging (or dismissing), the potential role of unaccounted systematic error in observational epidemiology -31 and iWe used the score of the Boston Naming Test (BNT) as the outcome variable because it seems to have received substantial attention as an endpoint of interest (NRC 2000) and because both the SCDS and the FIS have used it in their analyses. The other cohort study, conducted in New Zealand ,5,37, diOur evaluation of the FIS and SCDS included two components: a qualitative review and comparison of the methods and results, and a quantitative analysis of selected sources of systematic error. The qualitative review evaluated the FIS and SCDS study methods with respect to their target population, selection of participants, exposure assessment, outcome ascertainment and data analyses. Particular attention was paid to identification of potential sources of systematic error, which were then evaluated in quantitative analyses.The quantitative analyses presented in this article are conceptually similar to those described in our earlier publication and invo0 + \u03b21X + \u03b5 represents the relation between outcome (Y) and methylmercury exposure (X), or some transformation of these , then the least square estimate of the regression parameter \u03b21 based on a sample of n observations is:In general terms, if a linear regression model Y = \u03b2obs-b). 
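The display equation that should follow the colon above appears to have been lost during text extraction. For the simple linear model just defined it is presumably the familiar least-squares estimator of the slope, reconstructed here rather than quoted from the original:

```latex
b_{\mathrm{obs}} = \hat{\beta}_{1}
  = \frac{\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)\left(Y_{i}-\bar{Y}\right)}
         {\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)^{2}}
```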
It is important to keep in mind that the sensitivity analyses presented here do not address the impact of systematic error on the epidemiologic measure of association between methyl-mercury exposure and neuropsychological testing, but rather its impact on a regression coefficient in a given study. The actual measure of association can be further affected by the model assumptions, which are beyond the scope of this paper.For a systematic error of certain magnitude, it is possible to estimate the corrected linear regression coefficient by accounting for this error. The impact of systematic error can also be expressed as the difference between the observed and the corrected regression coefficients because both the SCDS and the FIS used it in their analyses. The BNT is a 60-item test that asks the examinee to provide the name of an object depicted in black-and-white line drawings. The response that is judged to be correct and the amount of time to respond are recorded. The test can be administered with or without cues. Semantic cues, if used, are provided if no response is made within 20 seconds. If the examinee is still unable to produce the name, a phonemic cue may be provided. The total score is then the number of items correctly named spontaneously or after cues. For the Seychelles study, a score of 43 was considered normal (standard deviation of 5) . Scores The possible effect of unadjusted confounding on FIS and SCDS results was assessed by measuring the impact of potentially important covariates not considered in these studies. To estimate the impact of selection bias, we calculated the difference in BNT results that would be observed in the FIS and SCDS assuming that the distributions of exposure and BNT scores among persons omitted from these studies were different than the analogous distributions among study participants. Finally, the potential role of information bias was quantified for a given range of outcome misclassification (in either direction) differentially affecting the low exposure and the high exposure groups in each study. The derivation of the corrected linear regression estimate (b) for each specific type of systematic error was conducted as follows.r) for 2 variables, Z and Y, can be expressed as:Given the mathematical relationship between estimates of regression coefficients and correlation coefficients, one can use reported estimated correlation coefficients to calculate the potential impact of confounders. The correlation coefficient to confounder Z. If we assume that the same regression model applies to the exposed and non-exposed populations, then:where which becomeswhere:Exp is the mean value of the outcome measure among the exposed;Non-exp is the mean value of the outcome measure among the non-exposed;sY is the standard deviation of the outcome measure;Exp is the mean value of the potential confounder among the exposed;Non-exp is the mean value of the potential confounder among the non-exposed;sZ is the standard deviation of the potential confounder;andr is the Pearson correlation coefficient for variables Z and Y.0 + \u03b21X + \u03b22Z + \u03b5 represent the relation between outcome (Y) and exposure (X) in the presence of an unaccounted confounder (Z). 
From the formula above, the regression parameter \u03b21 corrected for unaccounted confounding can be estimated as:Let a multiple linear regression model Y = \u03b2X and sY are estimates of the standard deviations of X and Y, and r(XY), r(XZ) and r(ZY) represent estimates of the correlation coefficients between X and Y, X and Z, and Z and Y, respectively. If we use formula (1) to express bobs, that is the estimate of the regression parameter unadjusted for the effect of confounding, then the difference (bobs-bconf) in this case represents the impact of confounding by Z on the observed linear regression coefficient.where sall eligible subjects. Let:Selection bias may occur if the participants are systematically different from persons not included in the study with respect to their exposure and outcome levels. Thus, the regression slope derived from the data collected among the participants would differ from the estimate based on \u2022 n represent the total number of all eligible subjects;s (ps) represent the number (proportion) of sampled subjects among the n eligible subjects;\u2022 nn (pn) represent the number (proportion) of non-sampled subjects among the n eligible subjects;\u2022 ns and s represent the estimates of the mean exposure and outcome among the sampled subjects;\u2022 n and n represent the estimates of the mean exposure and outcome among the non-sampled subjects;\u2022 Xs and sXn represent the estimates of the standard deviation of the exposure levels among the sampled and non-sampled subjects, respectively ;\u2022 ss represent the estimate of the regression parameter derived using the data from the ns sampled subjects;\u2022 bn represent the estimate of the regression parameter for the nn non-sampled subjects, assumed here to be a multiple of bs, that is bn = \u03bdbs;\u2022 bsel represent the estimate of the corrected regression parameter based on all eligible subjects.\u2022 bThen:sYs and \u2211Xs2 corresponding to the sampled subjects are easily derivable by substituting the estimates of ns, bs, s and s available for the sampled subjects in standard computational formulas for the variance and linear regression parameter, to give:where the estimates of \u2211XnYn and \u2211Xn2 corresponding to the non-sampled subjects:Similarly, the estimates of \u2211Xcan be estimated by substituting the hypothetical (assumed) estimates for the non-sampled subjects.obs-bsel) in this case represents the impact of selection bias on the observed linear regression slope.Thus (bobs) for a proportion of the subjects is different from the \"true\" outcome (Y). We assume that the absolute amount of over or underestimation in the observed outcome for a subject with exposure X is proportional to the difference between X and In this study we assessed the impact of one type of information bias , which may occur when the data about the outcome are obtained differently for subjects in different exposure categories. 
Thus, the reported (or \"observed\") outcome above, derived using Yobs.\u2022 btrue = Yobs -a1(X-1) of all subjects, and Ytrue = Yobs +a2(X-2) of all subjects, while Ytrue = Yobs for the remaining subjects.Thus, Yinf is given by:An estimate of the regression parameter (adjusted for information bias) btrue in the first term in the numerator of equation 7, we get:Substituting the expressions for Ywhere:true in the second term in the numerator of equation 7, we get:Similarly, substituting the expressions for Yinf becomes:Combining (8) and (9), the numerator of b1 and p2 of subjects defined above are random subsamples of all X's, then, the second and third terms in equation (10) above become:If we assume that the exposure values (X) corresponding to the fractions p1)(a1) - (p2)(a2), represents the magnitude of information bias (bobs-binf).thus, .To examine the aggregate uncertainty that results from a combination of random error and three types of systematic error , we used Monte Carlo simulations that included 50,000 randomly selected scenarios (Steenland and Greenland 2004). The observed distributions for FIS and SCDS were derived based on slope factors and corresponding confidence intervals reported in the original studies ,39. The e.g., selenium, polyunsaturated fatty acids) in either study [e.g., the child's anxiety level), which have been associated with performance on the WISC III Digit Spans subtest [e.g., quality of school/teachers); paternal intelligence; parental education; exposure to other chemicals that have been associated with neurobehavioral effects ; as well as dietary components, such as selenium and omega-3 fatty acids, which are expected to have a beneficial effect on neurodevelopment [Despite rather lengthy lists of covariates that were considered in each study, the possibility remains of confounding due to unmeasured covariates or due to residual confounding. For example, no data were collected on nutritional factors or maternal (FIS) intelligence by the Raven's Progressive Matrices test rather than using a comprehensive test, such as the Wechsler Adult Intelligence Scale (WAIS). Raven's Progressive Matrices measures nonverbal reasoning ability and is a useful test for those who do not speak English. Its correlation with other intelligence tests ranges from 0.5\u20130.8 .Participants in the Faroe Islands study were recruited among 1,386 children from three hospitals in Torshavn, Klaksvik, and Suderoy between March 1, 1986 and December 31, 1987 . Blood sNine hundred seventeen of the 1,022 children returned for neuropsychological testing at approximately age seven . Scores et al. (1995) reported birth characteristics for SCDS participants and the target population and found small, non-significant differences in birth weight, gestational age, male:female ratio, and maternal age between the two groups [The 740 infant-mother pairs who remained in the cohort-for-analysis in the SCDS after exclusions represent approximately 50% of the target population . The auto groups . Six hunApproximately half of all FIS participants underwent testing in the morning and half underwent testing in the afternoon. Most children were examined in Torshavn. If the time of testing or the need to travel before testing were related to exposure, this could have introduced additional bias due to diurnal variation and/or fatigue. According to the Faroese transportation guide, long-distance bus service combined with the ferry services, links virtually every corner of the country. 
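As a concrete illustration of the Monte Carlo procedure described in the methods above, which combines random error with the systematic-error corrections, the sketch below draws bias parameters from ranges similar to those used in the scenario tables and applies the confounding and information-bias corrections to a simulated observed slope. Selection bias is omitted for brevity, all numerical inputs are illustrative rather than FIS or SCDS values, and the confounding-corrected slope uses the standard two-predictor least-squares expression that the truncated equation earlier in this section presumably stated.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim = 50_000

# Illustrative inputs for a single study: observed slope, its standard error,
# and the standard deviations of the exposure and outcome scales.
b_obs, se_obs = -1.8, 0.5
s_x, s_y = 1.0, 10.0

corrected = np.empty(n_sim)
for k in range(n_sim):
    b = rng.normal(b_obs, se_obs)              # random error
    # Unmeasured confounder Z: correlations drawn from the ranges considered.
    r_xz = rng.uniform(-0.5, 0.5)              # corr(exposure, confounder)
    r_zy = rng.uniform(0.2, 0.8)               # corr(confounder, outcome)
    r_xy = b * s_x / s_y                       # implied corr(exposure, outcome)
    b = (s_y / s_x) * (r_xy - r_xz * r_zy) / (1.0 - r_xz**2)
    # Differential outcome misclassification: b_obs - b_inf = p1*a1 - p2*a2.
    p1, p2 = rng.uniform(0.0, 0.10, size=2)    # misclassified fractions
    a1, a2 = rng.uniform(0.1, 0.4, size=2)     # per-unit over/under-estimation
    b = b - (p1 * a1 - p2 * a2)
    corrected[k] = b

print("2.5th / 50th / 97.5th percentiles:",
      np.percentile(corrected, [2.5, 50, 97.5]))
```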
However, it appears that a trip to Torshavn may take up to several hours . Some ofThe methods description does not indicate whether or not investigators administering the test were blinded with respect to the participants' exposure status. According to the study authors, the participation rate in the capital was lower and the participants' geometric mean mercury concentration was about 28% higher (~23 \u03bcg/L vs. ~18 \u03bcg/L) than that of non-participants. This may indicate that residence was related to both exposure level and the need to travel, as well as to the AM/PM testing status.A re-analysis of the FIS data showed that, after controlling for residence (town vs. country), the linear regression slope for BNT without cues changed from -1.77 (p < 0.001) to -1.51 (p = 0.003), whereas the slope for BNT with cues changed from -1.91 (p < 0.001) to -1.60 (p = 0.001) . HoweverSimilar concerns, although to a lesser extent, apply to the SCDS results. The testing was performed \"mostly in the morning.\" This does not exclude the potential impact of diurnal variation on the results; however, this impact would have been probably lower than that in the FIS, where the AM/PM testing ratio was 1:1.All testing for SCDS was performed on Mahe. Some families apparently had to travel to the testing site. Similarly to the FIS, it is possible that children who had to travel were more tired prior to testing. However, one of the criteria for inclusion into the main study was Mahe residence and prolonged travel does not appear likely as Mahe extends 27 km north to south and 11 km east to west . The SCDThe results of the sensitivity analyses evaluating the potential impact of systematic error on the association between measures of methylmercury exposure and BNT scores are presented in Tables obs = -0.019 to bconf = +0.085 (Scenario 7). In the SCDS analyses, the same range of correlation coefficients would produce a corresponding range of corrected linear regression slopes between -0.58 (Scenario 8) and 0.55 (Scenario 7).When evaluating the possible role of unmeasured confounders in the FIS and SCDS analyses, we assumed that the correlation coefficient between confounder and exposure ranged from -0.5 to +0.5 and the correlation coefficient between confounder and outcome (BNT score) ranged from 0.2 to 0.8. The results are presented in Table obs \u00d7 2), the corrected slope for FIS may range between -0.027 (Scenario 4) and -0.009 (Scenario 7). The same selection bias scenarios in the SCDS would result in a change of direction from -0.012 to +0.017 (Scenario 7) or in a stronger than observed association, with a regression slope of -0.037 (Scenario 6).Table e.g., 10%) and the relatively modest magnitude of misclassification (a1 and a2 between 0.1 and 0.4). For the eight scenarios presented in Table The analyses of information bias demonstrated the effect on study results with a relatively small proportion of misclassified participants did both studies report statistically significant inverse associations between test scores and methylmercury exposure, but those associations were not consistent. In the SCDS, the association was for the \"non-dominant\" hand grooved pegboard test among males only, whereas the FIS reported the association for the \"preferred\" hand finger tapping.The proposed interpretations of the observed disagreement between the two studies have been based primarily on the assumption that the differences in results have an underlying biological explanation. 
Recent reviews paid substantial attention to the fact that the two studies reported their main findings using different measures of methylmercury exposure: cord blood versus maternal hair ,10. As cPrior to the publication of the most recent SCDS update, it appeared plausible that the differences between the two study results could also be explained by the lack of comparability in the neuropsychological test batteries. However, the last testing of the SCDS participants included many of the same tests previously used by the FIS investigators \u2013 specifically, those with significant findings \u2013 and the above explanation no longer appears likely.Our analyses indicate that each of the potential sources of systematic error under certain conditions is capable of changing the results from significant to non-significant and vice versa. Moreover, under some scenarios even the direction of the observed associations can be reversed. Although the scenarios in our sensitivity analyses cover a wide range of assumptions, they are not entirely hypothetical. The differences in exposure levels between participants and non-participants in the FIS have been reported ,45 and, For all of the above reasons, the uncertainty around the FIS and the SCDS regression slope estimates is probably larger than is suggested by the reported 95% confidence intervals. The discrepant results of the two studies may, in fact, fall within an expected range and departures from null in either direction can be explained by a combination of random and systematic error.e.g., lack of data on the correlation between confounder and exposure) and have to rely on hypothetical distributions of the parameters of interest. When no data were available, we assumed a uniform distribution in the Monte Carlo analyses. We recognize that the uniform distribution may not accurately reflect the uncertainty since all values within the range are given equal probabilities. In the future, alternative approaches such as the use of triangular or beta distributions, which give more weight to the more \"probable\" values, may need to be explored. The assumptions of normal distribution and independence of various sources of bias also need to be considered and alternative analytical methods for circumstances that do not fit these assumptions may need to be developed. For example, our adjustment for unmeasured confounders does not condition on the variables for which adjustment was made. It is important to point out that adjusting for the measured covariates may reduce the residual confounding attributable to the unmeasured confounder. All of the above considerations may affect the results of sensitivity analyses; however, in the absence of sensitivity analyses, one implicitly assumes that systematic error had no effect on study results, an assumption that may be even more difficult to defend.The interpretation of sensitivity analyses presented here, just like the interpretation of any epidemiological analyses, requires careful consideration of caveats and underlying assumptions. Many sensitivity analyses, including ours, are limited by insufficient information (In summary, despite caveats, we feel that our analyses served their purpose of illustrating the proposed methodology. We conclude that sensitivity analyses serve as an important tool in understanding the sources of such disagreement as long as the underlying assumptions are clearly stated. 
It is important to recognize that disagreement across studies is one of the unavoidable features of observational epidemiology."} +{"text": "During the routine serial passage of over 30 human tumour xenografts in athymic (nu. nu.) mice over a period of 6 years the induction of murine fibrosarcomas at the site of transplantation has been observed on three occasions. In two cases it has been possible to follow the development of these tumours over successive transplant generations. These sarcomas had growth rates, tumour karyotypes and isoenzyme patterns which clearly distinguished them from the original human xenografts."} +{"text": "P < 0.05) for Group 1; \u221221.02 \u00b1 1.63% (P < 0.001) for Group 2; \u221212.47 \u00b1 1.75% (P < 0.001) for Group 3; and \u221222 \u00b1 2.19% (P < 0.001) for Group 4 with significant intergroup difference . No significant increase in AST, ALT, and CPK levels was observed in all groups. Our results indicate that MD alone is effective in reducing LDL cholesterol levels in statin-intolerant patients with a presumably low cardiovascular risk, but associating MD with the administration of RYR improves patients' LDL cholesterol levels more, and in patients with type 2 diabetes.Lipid profile could be modified by Mediterranean diet (MD) and by red yeast rice (RYR). We assessed the lipid-lowering effects of MD alone or in combination with RYR on dyslipidemic statin-intolerant subjects, with or without type 2 diabetes, for 24 weeks. We evaluated the low-density lipoprotein (LDL) cholesterol level, total cholesterol (TC), high-density lipoprotein (HDL) cholesterol, triglyceride, liver enzyme, and creatinine phosphokinase (CPK) levels. We studied 171 patients: 46 type 2 diabetic patients treated with MD alone (Group 1), 44 type 2 diabetic patients treated with MD associated with RYR (Group 2), 38 dyslipidemic patients treated with MD alone (Group 3), and 43 dyslipidemic patients treated with MD plus RYR (Group 4). The mean percentage changes in LDL cholesterol from the baseline were \u22127.34 \u00b1 3.14% ( The prevalence of metabolic syndromes (MS) and the associated cardiovascular diseases (CVDs) is increasing rapidly around the world. Lifestyle measures, including dietary changes and physical activity, play a crucial role in preventing these conditions, and the National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) has already suggested dietary intervention to contain this epidemic .Cardiovascular risk factors in MS could be modified by dietary interventions. The Mediterranean diet (MD) is characterized by a high consumption of monounsaturated fatty acids (primarily from olive oil) and a daily consumption of fruit, vegetables, whole-grain cereals, and low-fat dairy products; weekly consumption of fish, poultry, tree nuts, and legumes; a relatively low consumption of red meats (approximately twice a month) . The benSome studies have shown that red yeast rice (RYR) reduces low-density lipoprotein (LDL) cholesterol levels in hypercholesterolemic patients , 8. 
Beck10 coenzyme reportedly induced a significant metabolic improvement in elderly patients with dyslipidemia [A combination of RYR extract, policosanol, berberine, folic acid, and Q10 coenzyme in addition to dietary counseling was found to amplify the effect of diet on central obesity, improve lipid profiles and blood pressure, and reduce the incidence of MS [In nondiabetic patients with dyslipidemia, a combination of RYR, policosanol, berberine, folic acid, and Qce of MS .10 to the Mediterranean diet could improve the lipid profile of dyslipidemic patients with and without type 2 diabetes.To our knowledge, little is known about the efficacy of RYR extract in patients with type 2 diabetes. In particular, no data are available on the effect of RYR supplementation combined with a Mediterranean diet on the lipid profiles of such patients. Hence, this randomized, parallel-group controlled study lasting six months to investigate whether adding a combination of RYR extract, artichoke extract, resveratrol, chrome, folic acid, and coenzyme QThis study was designed as a controlled, randomized, parallel-group study and complied with the content of the Helsinki Declaration. The Local Institutional Review Board approved the study protocol and all participants provided written informed consent.We studied consecutive patients attending our outpatient clinic from January to October 2010. Patients were included in the study if they had total cholesterol levels higher than 200\u2009mg/dL and/or low-density lipoprotein (LDL) cholesterol levels higher than 130\u2009mg/dL and a cardiovascular risk (as assessed according to the Progetto Cuore) of \u226410% for dyslipidemic patients and \u226415% for diabetic patients and if tThe sample of 171 eligible participants included 90 type 2 diabetic patients with dyslipidemia and 81 dyslipidemic patients without type 2 diabetes.10 10\u2009mg ; Group 3 consisted of 38 dyslipidemic patients treated with the MD alone; Group 4 consisted of 43 dyslipidemic patients treated with the MD plus the NCP. Adherence to the medication was ascertained by means of pill counts on the study medication returned at follow-up visit.At the baseline visit, we used 24-hour recall, a self-reporting method for collecting data on eating behavior and measuring energy intake by means of structured interviews, as described elsewhere . A MeditThe primary outcome was the low-density lipoprotein (LDL) cholesterol level measured at the baseline and after 24 weeks. Secondary outcomes included total cholesterol (TC), high-density lipoprotein (HDL) cholesterol, triglyceride, liver enzyme, and creatinine phosphokinase (CPK) levels.Patients attended follow-up visits after 24 weeks of treatment. None of the patients dropped out of the study.At each visit, all patients were assessed in terms of body mass index (BMI), diastolic and systolic blood pressure , and waist circumference (midway between the lowest rib and the iliac crest).A fasting blood sample was obtained at the baseline and at week 24 to measure LDL cholesterol, TC, HDL cholesterol, triglyceride, CPK, aspartate aminotransferase (AST), and alanine aminotransferase (ALT) levels. All analyses were performed at the laboratory of the University Hospital of Padua, Italy.At weeks 24, patients' dietary compliance was assessed using the 24-hour recall method and an apost hoc test was used to identify statistical differences between the groups at the two different follow-up times. 
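For readers who wish to reproduce the style of analysis described in this paragraph, the following minimal sketch shows the normalisation to baseline, the paired t-test for the within-group change, and a one-way ANOVA across groups; the LDL values are made up, and a post hoc procedure such as Tukey's test would follow the ANOVA when more than two groups are compared.

```python
import numpy as np
from scipy import stats

def pct_change(before: np.ndarray, after: np.ndarray) -> np.ndarray:
    """Per-patient percentage change from baseline (negative = decrease)."""
    return (after - before) / before * 100.0

# Hypothetical LDL cholesterol values (mg/dL) before and after 24 weeks.
g1_before, g1_after = np.array([152., 161., 147.]), np.array([141., 151., 136.])
g2_before, g2_after = np.array([156., 149., 163.]), np.array([121., 119., 131.])

d1 = pct_change(g1_before, g1_after)
d2 = pct_change(g2_before, g2_after)

print(stats.ttest_rel(g1_after, g1_before))   # within-group change (paired t)
print(stats.f_oneway(d1, d2))                 # between-group comparison (ANOVA)
```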
The statistical significance of the differences (after-before) induced by the treatment within each group of patients was tested with Student's t-test for paired data. Differences were considered statistically significant when P < 0.05 (two-tailed test).Values are expressed as mean \u00b1 SD or SEM. Data were normalized according to each patient's baseline situation by calculating the difference (after-before) at the two follow-up times considered (24 weeks after starting the study) and expressing the value as a percentage of the parameters at the baseline. Negative values therefore indicate the effective percentage decrease in the parameter following the treatment. ANOVA followed by the P < 0.001), higher levels of LDL cholesterol (P < 0.001), higher levels of HDL cholesterol (P < 0.001), and lower levels of triglycerides (P < 0.05).P < 0.001), and as regards waist circumference the reduction was \u22122.42 \u00b1 0.82\u2009cm (P < 0.01). All the patients kept to a Mediterranean-style diet for all 24 weeks.P < 0.05), \u221215.45 \u00b1 1.26% in Group 2 (P < 0.001), \u221211.96 \u00b1 1.43% in Group 3 (P < 0.001), and \u221216.94 \u00b1 1.51% in Group 4 (P < 0.001), with significant between-group differences .A significant drop in TC was seen in all groups of patients. In particular, the average reduction in TC 24 weeks after the baseline was \u22124.65 \u00b1 1.92% in Group 1 (P < 0.05) for Group 1, they were \u221221.02 \u00b1 1.63% (P < 0.001) for Group 2, they were \u221212.47 \u00b1 1.75% (P < 0.001) for Group 3, and they were \u221222 \u00b1 2.19% (P < 0.001) for Group 4. The mean percentage change in LDL levels differed significantly between Groups 1 and 2 (P < 0.001), and between Groups 3 and 4 (P < 0.01).The mean percentage changes in LDL cholesterol levels from the baseline were \u22127.34 \u00b1 3.14 (We found no significant differences in terms of HDL cholesterol and triglycerides levels in any of the groups .P < 0.05) and in ALT levels of \u22126.60 \u00b1 2.58% (P < 0.05); in Group 2 we observed a decline in AST levels of \u22127.52 \u00b1 3.31% (P < 0.05) and in ALT levels of \u22127.12 \u00b1 2.96% (P < 0.05).Regarding the safety of treatment with NCP, there was no significant increase in CPK or liver-associated enzyme levels in any of the patients after 24 weeks of treatment. None of the patients discontinued the treatment with NCP and no side effects were observed. As regards liver function, we observed after 24 weeks of treatment a significant drop in AST and ALT levels in type 2 diabetic patients (Groups 1 and 2) from the baseline. In particular, we observed in Group 1 a decline in AST levels of \u22126.48 \u00b1 2.83% based on red yeast rice extract can significantly improve dyslipidemic patients' lipid profiles by comparison with diet alone.The effects of MD on lipid profile and in protecting against cardiovascular risks are well known.In a recent meta-analysis, MD was also found associated with a lower risk of MS; in particular, several studies showed the beneficial role of MD on HDL cholesterol and triglyceride levels . This meIn agreement with previous studies, our findings showed that adherence to MD alone significantly reduced BMI, waist circumference, TC, and LDL cholesterol levels in overweight dyslipidemic patients without type 2 diabetes. The beneficial effects of MD on BMI and waist circumference were not seen in type 2 diabetic patients. 
These findings are consistent with other reports of weight loss programs proving less effective in overweight and obese diabetic patients \u201318. On tOur results also indicate that associating a combination of NUTs with MD can add to the lipid-lowering effect of MD, in particular on LDL cholesterol, in terms of a 10% improvement in dyslipidemic patients.Berberis and Coptis, with neuroprotective and antiatherosclerotic actions [A previous study demonstrated the effect of a combination of red yeast rice extract and berberine associa actions . Our pre actions . It is w actions , and ste actions , 24, witAssociating a combination of NUTs with a Mediterranean-style diet could prove a valuable therapeutic option for dyslipidemic statin-intolerant patients at low cardiovascular risk without excessively high LDL cholesterol levels.The composition of the NUTs used in some previous studies varied considerably, particularly in terms of the concentration of monacolin K, which ranged from 9.6\u2009mg to 3\u2009mg. The dose of monacolin K in the NUTs used in our study was 3\u2009mg. Becker et al. [We recorded none of the adverse effects described elsewhere in patients treated with red yeast rice \u201327, probTo our knowledge, ours is the first study to show that associating MD with NUTs can improve, in terms of 21%, LDL cholesterol levels of type 2 diabetic statin-intolerant patients. It is often difficult to obtain a normalization of TC, or even of LDL cholesterol, with dietary restrictions alone in diabetic patients, so adding a NUT based on red yeast rice might be a good therapeutic option for type 2 diabetic patients, at low cardiovascular risk with no evidence of vascular damage or other complications, who have previous statin intolerance. The reduction in LDL cholesterol levels achieved with MD plus NUTs was similar to the reduction obtained using statins, as already reported in dyslipidemic patients with statin intolerance .The limitation of this study is the small size of the sample studied; further studies on larger samples will be needed to confirm the validity of this patient management approach, particularly in cases of type 2 diabetes.Despite its limitations, this study provides useful new insight into the nutraceutical/dietary treatment of lipid profiles, even in patients with type 2 diabetes. Our results indicate that MD counseling alone is effective in reducing LDL cholesterol levels in moderately hypercholesterolemic patients with a presumably low cardiovascular risk, but associating MD with the administration of RYR improved patients' lipid profiles considerably more, also in patients with type 2 diabetes with statin intolerance."} +{"text": "Recent work suggests that biological motion processing can begin within ~110 ms of stimulus onset, as indexed by the P1 component of the event-related potential (ERP). Here, we investigated whether modulation of the P1 component reflects configural processing alone, rather than the processing of both configuration and motion cues. A three-stimulus oddball task was employed to evaluate bottom-up processing of biological motion. Intact point-light walkers (PLWs) or scrambled PLWs served as distractor stimuli, whereas point-light displays of tool motion served as standard and target stimuli. In a second experiment, the same design was used, but the dynamic stimuli were replaced with static point-light displays. The first experiment revealed that dynamic PLWs elicited a larger P1 as compared to scrambled PLWs. 
A similar P1 increase was also observed for static PLWs in the second experiment, indicating that these stimuli were more salient than static, scrambled PLWs. These findings suggest that the visual system can rapidly extract global form information from static PLWs and that the observed P1 effect for dynamic PLWs is not dependent on the presence of motion cues. Finally, we found that the N1 component was sensitive to dynamic, but not static, PLWs, suggesting that this component reflects the processing of both form and motion information. The sensitivity of P1 to static PLWs has implications for dynamic form models of biological motion processing that posit temporal integration of configural cues present in individual frames of PLW animations. Humans are able to perceive the actions of others with relative ease, even when their movements are reduced only to points of light . AlthougPavlova and colleagues [There is currently no consensus on the temporal dynamics of form and motion cue processing. There is some evidence from event-related potential (ERP) studies that the processing of human biological motion begins in the latency range of the occipital-temporal N1 component ,8, whichlleagues ,13 showiThe results of Krakowski et al. showed tIt is possible that differences in ERP quantification may account for the failure of some previous investigations ,8 to detIn an effort to determine whether neural activity in the P1 time range reflects the processing of both form and motion, we conducted two experiments which differed only with respect to whether point-light stimuli were dynamic (containing both form and motion information) or static (containing only form information). In order to directly study bottom-up processing of biological motion stimuli, we chose to utilize a three-stimulus oddball task. This task has frequently been used to evaluate reflexive processing of task-irrelevant distractor stimuli . In our Based on the findings of Krakowski et al. , we predParticipants were recruited from the George Mason University undergraduate population and the surrounding community . Healthy young male and female adults took part in the study, all of whom had self-reported normal vision and no known neurological deficits. Participants recruited from the undergraduate population were provided course credit for participation, while those recruited from the surrounding community were provided with nominal compensation for their time. All participants provided written informed consent after having been explained the procedures of the study. All procedures were approved by the Office of Research Subject Protections at George Mason University.Nine different point-light animations (comprised of 12 white dots on a black background) depicting a human facing in a rightward direction and appearing to walk in place were used for experiment one. Each PLW animation depicted one full gait cycle and was presented at 40 frames/sec for a total presentation time of one second. For scrambled PLWs, local motion was applied to 12 white dots at locations drawn from a two-dimensional normal distribution with mean (and standard deviation) location determined from the mean (and standard deviation) location across joints of the intact PLW. This scrambling procedure ensured that the retinal displacement of the intact and scrambled PLWs was comparable. Point-light animations depicting the typical motion of scissors and pliers (\u201ctool motion\u201d) were presented at a reduced frame rate of 29 frames/second. 
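The scrambling procedure just described can be sketched in a few lines. The array layout, the re-application of each dot's local motion about its mean position, and the fake trajectories in the example are assumptions of this illustration; the original stimuli were built from recorded gait data.

```python
import numpy as np

rng = np.random.default_rng(0)

def scramble_walker(walker: np.ndarray) -> np.ndarray:
    """Scramble a point-light walker while preserving each dot's local motion.

    `walker` has shape (n_frames, n_dots, 2). New base positions for the dots
    are drawn from a 2-D normal distribution whose mean and SD come from the
    intact walker's mean joint positions, so overall retinal displacement
    stays comparable; each dot's own motion is then re-applied.
    """
    mean_per_dot = walker.mean(axis=0)        # (n_dots, 2) mean joint locations
    mu = mean_per_dot.mean(axis=0)            # overall mean (x, y)
    sd = mean_per_dot.std(axis=0)             # overall SD (x, y)
    new_base = rng.normal(mu, sd, size=mean_per_dot.shape)
    local_motion = walker - mean_per_dot      # per-frame displacement of each dot
    return new_base + local_motion

# Example with a fake 40-frame, 12-dot animation:
# scrambled = scramble_walker(rng.normal(size=(40, 12, 2)))
```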
Detailed methods describing the construction of the tool and PLW stimuli can be found in . For expP = .15) presented target point-light tool motion and withhold responses to both frequently (P = .70) presented standard point-light tool motion and infrequently (P = .15) presented distractor intact or scrambled PLWs. The type of tool motion (pliers or scissors) serving as targets and standards, as well as the type of PLW (scrambled or intact) serving as the infrequently presented distractor, were counterbalanced within subjects across the eight blocks. Participants were encouraged to take breaks between blocks in order to reduce fatigue.After first practicing the task, participants completed eight blocks of a visual three-stimulus oddball task involving centrally presented stimuli. Participants were required to press the left mouse button in response to infrequently . Data was sampled at 500 hz and recorded with an online band-pass filter with cutoffs at .1 and 70 Hz. Ag/AgCl electrode locations followed the standard 10-20 system arrangement and were embedded within an electrode cap. Data was recorded from the following electrode locations: Fp1, F7, F3, FT7, FC3, T7, C3, TP7, CP3, P7, P3, O1, Fz, Cz, Pz, Oz, FP2, F8, F4, FT8, FC4, T8, C4, TP8, CP4, P8, P4, O2. Data was recorded using an in-cap ground (located between FPz and Fz) and reference electrode (located between Cz and CPz). In order to monitor for blinks and eye movements, electrooculogram activity was recorded using two sets of bipolar electrode montages, located at the outer canthus of each eye, as well as above and below the left eye.Following acquisition, all EEG data was processed using the EEGLAB toolbox and ERPLMean amplitude analysis windows were identified in a three-step process. First, a grand average waveform was constructed for epochs time-locked to the onset of both the intact and scrambled PLW stimuli. For each component of interest, the grand average waveform was examined at a subset of electrode locations. Consistent with previous work, we selected the three occipital electrodes, O1, O2 and Oz, for evaluation of the P1 component ,20. For Following the identification of analysis windows, a series of 2 factor (stimulus type by electrode location) ANOVAS were carried out for each component. Where appropriate, a Greenhouse-Geisser epsilon adjustment was used to correct for violations of sphericity (only raw degrees of freedom are reported below). The chance of a type one error during follow-up comparisons was controlled with the use of a Bonferroni correction for multiple comparisons.The mean hit rate and reaction time for targets was 98.36% (SD = 2.45%) and 620 ms (SD = 56 ms), respectively. The mean false positive rate was .48% (SD = .25%) for standards and .25% (SD = .47%) for distractors.Analysis of the P1 component (106\u2013126 ms) revealed a significant main effect of electrode location = 5.97; p = .018) as well as a significant electrode by stimulus type interaction = 6.32; p = .009). Post-hoc analyses revealed that the interaction was due to a right lateralized (electrode O2) increase in amplitude elicited by intact PLWs = 2.96; p = .013) (The mean hit rate and reaction time for targets was 96.23% (SD = .05%) and 605 ms (SD = 71 ms), respectively. 
The mean false positive rates for standards and distractors were .73% (SD = .49%) and 1.54% (SD = 2.7%), respectively.Analysis of the P1 component (114\u2013134 ms) revealed a main effect of stimulus type = 5.23; p = .043), with increased amplitude in response to intact PLWs resulted in a bias toward intact PLWs that was not present for scrambled PLWs. However, previous work using the three-stimulus oddball paradigm suggests that this is not the case. It has been consistently shown that the orienting response to distractors, as indexed by the P3a component, is stronger when the distractors are more dissimilar to the standards and targets. This phenomenon is even more pronounced when the target and standard stimuli are similar in appearance ,24. In tIt might be argued that the observed modulation of P1 reflects the influence of top-down attention. However, this is unlikely given that we used a paradigm in which the intact and scrambled PLWs were task-irrelevant and unexpected. Consequently, these stimuli were subject to a bottom-up processing bias. We also note that the vast majority of studies showing an effect of top-down attention on P1 are studies of visuospatial attention . MoreoveFeature-based, top-down attentional influences are also unlikely to have affected P1 in the present study, as such influences have only been observed when there is simultaneous competition between overlapping stimuli . In addiAlthough the P1 component has previously been shown to be sensitive to faces but see, it has It is possible that differences in ERP quantification may explain why some previous investigations did not report differences with respect to the P1 component ,8,10. InGiven the differences in ERP quantification across studies ,8, the oThe current study provides evidence that the P1 ERP component is sensitive to both static and dynamic PLWs. The finding that the P1 was sensitive to static PLWs suggests that modulation of this component by dynamic PLWs can be attributed to configural processing alone. This finding has implications for dynamic form models of biological motion perception. These models posit that the percept of biological motion arises as a result of the temporal integration of the global form present in individual frames of PLW animations ,32. The"} +{"text": "We developed EAE in SJL mice by administration of PLP139\u2013151 peptide. The effect of treating these mice with 1\u03b1,25-Dihydroxyvitamin D3 (vitamin D3), or with monomethyl fumarate (MMF) was then examined. We observed that both vitamin D3 and MMF inhibited and/or prevented EAE in these mice. These findings were corroborated with isolating natural killer (NK) cells from vitamin D3-treated or MMF-treated EAE mice that lysed immature or mature dendritic cells. The results support and extend other findings indicating that an important mechanism of action for drugs used to treat multiple sclerosis (MS) is to enhance NK cell lysis of dendritic cells.Experimental autoimmune encephalomyelitis (EAE) is a CD4 These two subsets also differ in their chemokine receptor expression [\u2212CD56\u2212or+ [+ [Natural killer (NK) cells perform several important functions; among them the regulation of the adaptive immune response by secreting cytokines such as IFN-\u03b3 , shaping56dim/\u2212) . The for6\u2212or+ [+ , which i6\u2212or+ [+ .The consensus is that the activity and numbers of NK cells in autoimmune diseases are reduced . 
However3 deficiency increases the risk of MS, as increased latitude is also correlated with lower blood vitamin D3 levels. For instance, ecological studies showed the amount of exposure to sunlight was inversely correlated with the risk of MS, by both regional distribution and as an association with altitude, as well as individual exposure to sunlight [3 through conversion of 7-dehydrocholesterol to previtamin D3 in the skin, and through further metabolic steps to active hormone 1,25-dihydroxyvitamin D3 [3 intake may reduce the risk of MS in spite of latitude-dependent deficiency, for instance in areas where higher amounts of vitamin D3-rich fish are consumed [3 prevented the disease [3 have not yet been shown, but some studies indicate that serum concentrations of vitamin D3 may affect disease severity. It was also observed that MS patients receiving vitamin D3 have less relapses than control groups [3 in MS patients resulted in improved T regulatory (Treg) cell activity, corroborated with suppression of auto-reactive T cells and a switch from a Th1 to Th2 phenotype [3 [3 and its derivative calcipotriol enhance in vitro NK cell lysis of dendritic cells (DCs), suggesting that a possible mechanism of action for these drugs is via activating NK cells [Vitamin Dsunlight . Sunlightamin D3 . Dietaryconsumed . There i disease ,18. Defil groups . Also, ihenotype . This waotype [3 . In cuprotype [3 . We repoNK cells .3 impaired dendritic cells (DCs) maturation which leads to reducing antigen presentation for encephalitogenic CD4+ T cells [+ NK cell lysis of K562 and RAJI tumor cells [3 and MMF in mice with EAE.Dimethyl fumarate (DMF) a drug used to treat multiple sclerosis (MS) patients, and its metabolite monomethyl fumarate (MMF) have the ability to protect from MS by enhancing Nuclear-factor (erythroid-derived 2)-related factor-2 (Nrf2), leading to the induction of Nrf-2 anti-oxidative pathway responses, thereby exerting neuroprotective effect by Nrf-2 mediated protection in MS tissues . In EAE, T cells , and sub T cells . Recentlor cells . However3, and the third group was fed with MMF, as shown in The protocol for the study design is shown in 3 or feeding them with MMF might reduce the incidence of EAE. During the 50 days of measuring the EAE clinical score, it was observed that injecting vitamin D3 significantly reduced the EAE clinical score in these mice and mature (m) dendritic cells (DCs) [3-treated mice were examined for their ability to kill DCs isolated from the same animals, and so on . To inveo on see . The preP < 0.05, P < 0.05 as compared to untreated EAE mice, The effects of treatment with MMF on EAE clinical score and body weight are shown in 3, or mice with EAE fed MMF. We chose three molecules important for DCs maturation; these included the co-stimulatory molecule CD80, recognition and adhesion molecules CD205 and E-cadherin [3 or MMF. No effect on the expression of CD205 was seen in iDCs is currently being used to treat patients with MS, under the name Tecfidera (Biogen). In EAE, DMF exerts clinical effects by reducing macrophage-induced inflammation in the spinal cord . MMF is et al. [per se has any effect in mice suffering from EAE has not been clearly shown. Our results demonstrate that this molecule reduces the clinical score in mice suffering from EAE. In fact, the activity of MMF is superior to vitamin D3. The effect of MMF is also correlated with the ability of this drug to enhance NK cell lysis of DCs. 
Hence, NK cells isolated from mice suffering from EAE significantly killed iDCs and mDCs isolated from the same mice. These results suggest that one important mechanism of action in reducing EAE clinical score by MMF is via removing those DCs responsible for activating T cells which cause damages to the myelin sheath during the course of the disease. It was previously reported that MMF affects the differentiation and polarization of DCs [3 on the expression of CD205 in mDCs, and that vitamin D3 and MMF affect E-cadherin expressed in iDCs. However, there was no consistency among the effects of these drugs on iDCs and mDCs to justify enhancing NK cell lysis of both types of DCs. Consequently, we conclude that the effects of these drugs are plausibly exerted on NK cells during treatment. In summary, our findings are the first to demonstrate that one function of vitamin D3 or MMF (not yet used for therapy) is perhaps due to activating NK cells to lyse DCs.Though the effects of DMF in EAE or MS patients have been studied, those related to the activity of MMF are scarce. Scannevin et al. demonstret al. . It was et al. and thatet al. . Howevern of DCs ,32. We oAll animal studies and procedures were approved by Norwegian Animal Research Authority (FOTS) and Department of Comparative Medicine, University of Oslo. Female SJL/J mice at four to six weeks age were purchased from Jackson Laboratory. Mice were kept under pathogen-free conditions at the University of Oslo.139\u2013151 peptide purchased from ABBIOTEC emulsified in complete Freund\u2019s adjuvant (CFA) containing 1 mg Mycobacterium tuberculosis , at four sites in the right and left flanks. Following each injection, 200 ng of Bordetella pertussis toxin was injected intraperitoneal (IP) after 0 and 48 h after immunization with the peptide. The animals were independently observed and monitored daily, and the EAE clinical score was measured according to the following scoring scheme. 0 = no clinical disease, 1 = tail flaccidity, 2 = hind limb weakness, 3 = hind limb paralysis, 4 = forelimb paralysis, and 5 = moribund or death.Female SJL/J (H-2\u02e2) mice ages 4\u20136 weeks old were immunized subcutaneously (SC) with 200 \u03bcg of PLP3 (Sigma-Aldrich), the active form of vitamin D3 every other day, or mice orally gavaged every day with 1 mg MMF (Sigma-Aldrich). On day 7, bone marrow (BM) cells were flushed from the tibia and femur of mice, and the monocytes were isolated using EasySep mouse monocyte Enrichment kit . For generation of iDCs, monocytes were incubated at 2 \u00d7 106 cells/mL supplemented with 25 ng/mL recombinant murine GM-CSF and 6 ng/mL recombinant murine IL-4 , in culture dishes. Mature dendritic cells (mDCs) were generated by adding 1 \u03bcg/mL lipopolysaccharide (LPS) (Sigma-Aldrich). At day 15 post immunization, 5 mice were euthanized with CO2 and spleens were isolated. NK cells were purified from splenocytes using EasySep mouse NK Enrichment kit from STEMCELL Technologies SARL. These NK cells were used to lyse iDCs or mDCs.SJL/J mice were divided into several groups and each group consisted of 10 mice each. The first group was left as control without induction of EAE, the second group where EAE was induced was left without treatment but injected IP with vehicle, and the other group was gavaged with vehicle control. Treated mice include those injected IP with 100 ng 1\u03b1,25-Dihydroxyvitamin D6 cells/mL with 5 \u03bcg/mL calcein-AM for 1 h at 37 \u00b0C in a 5% CO2. 
Target cells were washed twice and plated at 10,000 cells/well with NK cells into 69-well flat bottom plates at 50:1 E:T ratio in triplicate. The plates were spun down at 500 rpm for 5 min and incubated for 4 h at 37 \u00b0C in 5% CO2. To obtain total killing, target cells were incubated with 0.5% Triton-X (Sigma-Aldrich) for 30 min, whereas total viability was obtained by incubating the cells with medium only. The plates were centrifuged for 8 min and medium was replaced with Dulbecco\u2019s Phosphate Buffered Saline (DPBS) without Ca and Mg (Sigma-Aldrich). The fluorescence intensity of the calcein-AM loaded target cells was measured with BioTek FLX 800 plate reader , using 485/528 nm fluorescence filters. The percentage of cytotoxicity was calculated according to the following formula: % Viability = Fluorescence units (FU) of targets incubated with NK cells , minus FU of targets incubated with Triton-X , divided by FU of targets incubated in media only , minus FU of targets incubated with Triton-X . Percent cytotoxicity was then calculated as 100% minus % viability as described [For NK cell lysis of immature and mature dendritic derived monocytes cells, target cells were incubated at 1 \u00d7 10escribed .5) were washed, suspended in a FACS-buffer , and labeled in the dark for 45 min at 4 \u00b0C with FITC-conjugated rat IgG2a, \u03ba isotype control, APC-conjugated rat IgG, \u03ba isotype control, FITC-conjugated rat anti mouse CD335 (NKp46), APC-conjugated rat anti-mouse NKG2D (CD314) . PE-conjugated mouse anti-NK1.1 , and PE-conjugated mouse IgG2a isotype control were used to stain NK cells. The cells were washed twice, medium was replaced with PBS and the cells analyzed in a flow cytometer . Gating was done according to the isotype controls, and the analysis was performed using FlowJo .NK cells , or FITC-conjugated hamster IgG isotype control . They were also labeled with fluorescein-conjugated rat anti-mouse E-Cadherin, FITC-conjugated rat IgG2A isotype control, PE-conjugated rat anti-mouse DEC-205, or PE-conjugated rat IgG2B isotype control . Labeled cells were washed twice, media replaced with PBS, and then analyzed in flow cytometry.P < 0.05) were calculated using the student t-test, or one way ANOVA followed by Sidak\u2019s test analysis calculated by Graphpad Prism 6 program . Area under curve analysis was performed using the Graphpad Prism 6 program.Significant values (3. Both drugs activate NK cells to lyse immature and mature DCs. This effect is similar to the function of other drugs used to treat MS patients such as glatiramer acetate, fingolimod and natalizumab. The fact that NK cells exposed to these drugs kill both immature and mature DCs may result in the inability of DCs to present antigens to autoreactive T cells. Finally, it is safe to conclude that most if not all drugs used to treat MS have a common function, i.e., enhancing NK cell lysis of DCs. Therefore, we suggest that this method can be used as a screening tool to test any new drug before more efforts and money are put into investigating newly developed MS drugs.Our results are one of the first to show that MMF ameliorates EAE clinical scores in mice. The effect of MMF is comparable or superior to the effect of vitamin D"} +{"text": "Pregnancy and childbirth are risk factors for the development of stress urinary incontinence (SUI). Urinary continence depends on normal urethral support, which is provided by normal levator ani muscle function. 
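Returning briefly to the NK-cell calcein-AM assay described earlier, the verbal percent-viability/percent-cytotoxicity formula can be written compactly. The snippet below is a minimal sketch of that calculation only; the variable names are hypothetical and the fluorescence values would in practice be the plate-reader means of triplicate wells.

```python
def percent_cytotoxicity(fu_with_nk, fu_triton, fu_medium):
    """Calcein-AM retention assay, following the formula in the text:

    %viability    = (FU targets + NK cells  -  FU targets + Triton-X)
                    / (FU targets in medium only  -  FU targets + Triton-X) * 100
    %cytotoxicity = 100 - %viability
    """
    viability = 100.0 * (fu_with_nk - fu_triton) / (fu_medium - fu_triton)
    return 100.0 - viability

# Example with made-up plate-reader values:
# targets + NK = 5200 FU, maximal lysis (Triton-X) = 1500 FU, medium only = 8000 FU
print(percent_cytotoxicity(5200, 1500, 8000))   # ~43.1 % cytotoxicity
```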
Our objective was to compare mean echogenicity and the area of the puborectalis muscle between women with and those without SUI during and after their first pregnancy. t test.We examined 280 nulliparous women at a gestational age of 12\u00a0weeks, 36\u00a0weeks, and 6\u00a0months after delivery. They filled out the validated Urogenital Distress Inventory and underwent perineal ultrasounds. SUI was considered present if the woman answered positively to the question \u201cdo you experience urine leakage related to physical activity, coughing, or sneezing?\u201d Mean echogenicity of the puborectalis muscle (MEP) and puborectalis muscle area (PMA) were calculated. The MEP and PMA during pregnancy and after delivery in women with and without SUI were compared using independent Student\u2019sAfter delivery the MEP was higher in women with SUI if the pelvic floor was at rest or in contraction, with effect sizes of 0.30 and 0.31 respectively. No difference was found in the area of the puborectalis muscle between women with and those without SUI.Women with SUI after delivery had a statistically significant higher mean echogenicity of the puborectalis muscle compared with non-SUI women when the pelvic floor was at rest and in contraction; the effect sizes were small. This higher MEP is indicative of a relatively higher intramuscular extracellular matrix component and could represent diminished contractile function. Pregnancy and childbirth are risk factors for the development of stress urinary incontinence (SUI) , 2. UrinDuring pregnancy SUI has been associated with the width of the hiatal area, and after delivery with the positioning of the bladder neck . The obsIn this study, we set out to assess the association between the puborectalis muscle area (PMA) and SUI symptoms and that between the mean echogenicity of the puborectalis muscle (MEP) and SUI symptoms during and after first pregnancy.This study is a secondary analysis of a prospective observational study on the association between pelvic floor symptoms and changes in pelvic floor anatomy during and after first pregnancy . Two hunThe participants were invited for 3D/4D transperineal ultrasound examination at a gestational age of 12\u00a0weeks and 36\u00a0weeks and 6\u00a0months after delivery. The examinations were performed by two observers, one of the observers had 6\u00a0years\u2019 experience with 3D/4D transperineal ultrasound and the other observer was trained by the experienced observer. We have previously published data on their intra- and interobserver reliability . A GE VoAfter storage on a hard disk, offline analysis was performed using the 4D View 7.0 and Matlab\u00ae R2010a software. The plane of minimal hiatal dimensions in axial position was selected and exported as previously described by Dietz et al. . A semi-Pelvic floor symptoms and physical complaints were scored at every ultrasound examination. SUI was present when a woman answered positively to the Urogenital Distress Inventory question \u201cdo you experience urine leakage related to physical activity, coughing, or sneezing\u201d , 14.The association between the MEP and body mass index (before pregnancy), the mode of delivery , the duration of the second stage of labor (<60\u00a0min and \u226560\u00a0min), the use of oxytocin (yes/no) during delivery, the mean birth weight, and the use of pain relief were assessed for potential confounding effects. t test. Statistical significance was based on two-sided tests, with p\u2009<\u20090.05 considered significant. 
To determine the magnitude of the effect we calculated the effect size of the statistically significant findings using Cohen\u2019s d.Statistical analysis was performed using SPSS version 20.0 for Windows. The MEP and PMA during pregnancy and after delivery between women with and without SUI were compared using independent Student\u2019sn\u2009=\u20091) and a neurological disorder (n\u2009=\u20091). Other reasons for exclusion were immature labor at 19.9\u00a0weeks\u2019 gestation (n\u2009=\u20091), loss to follow-up, and/or at least one out of three ultrasound volume datasets missing (n\u2009=\u200917), and the symphyses was located outside the view of the ultrasound images (n\u2009=\u20096).Of the 280 women, 26 cases were excluded, leaving 254 women to be studied. Excluded were women who had been included incorrectly because of a twin pregnancy and when the pelvic floor was in contraction (p\u2009=\u20090.04), with effect sizes of 0.30 and 0.31 respectively.The relationship between MEP and SUI is shown in Table The relationship between PMA and SUI is shown in Table None of the potential confounding factors was significantly associated with the MEP.We set out to assess the association between MEP/PMA and SUI during and after first pregnancy. We found that the MEP in women with SUI after delivery was statistically significantly higher than that in women without SUI. However, effect sizes were low, indicating that the clinical relevance is questionable and that MEP cannot be used to differentiate women with SUI from those without.A possible limitation of our study is the absence of pre-pregnancy clinical and ultrasound data. We were only able to look at associations between SUI and MEP and PMA at different time points during and after pregnancy. Changes in MEP and PMA that occurred between pre-pregnant and early pregnant status may have provided extra information on the association between these parameters and SUI. We know from epidemiological studies that childbirth is the major risk factor for developing stress urinary incontinence symptoms. Therefore, we feel it is not an obvious limitation to look at the association between stress urinary incontinence symptoms and ultrasound findings postpartum without having knowledge of pre-pregnancy data. Another limitation is the fact that we had to use the PMA as a surrogate marker for puborectalis muscle volume.The presence of levator avulsions could be a cause of a smaller PMA, as the avulsion area, which is darker, would not have been incorporated into our semiautomatic muscle outline method. However, we previously demonstrated that the reliability of detecting levator avulsions in this particular population of postpartum women, when assessed in a muliticenter, multiobserver setting, is poor . This shWe used a symptom-based assessment of SUI according to the ICS standardization, in line with a previous study , 17. We The strengths of this study are the prospective design and the use of identical ultrasound settings during the examinations, which made echogenicity analyses possible.The higher MEP, i.e., brighter muscle on ultrasound images, is indicative of a change in muscle tissue composition. The ratio between muscle cells and ECM expresses itself in the echogenicity values on ultrasound . Muscle The PMA was not related to SUI, whereas the hiatal area in a previous analysis of our data was . 
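The effect-size calculation mentioned in the statistical methods above (Cohen's d for the comparison of two independent groups) can be illustrated with a short sketch. This is a generic pooled-standard-deviation implementation, not the authors' SPSS output, and the group arrays are placeholders.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups using the pooled SD:

    d = (mean_a - mean_b) / s_pooled
    s_pooled = sqrt(((n_a - 1) * s_a^2 + (n_b - 1) * s_b^2) / (n_a + n_b - 2))
    """
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n_a, n_b = len(a), len(b)
    s_pooled = np.sqrt(((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1))
                       / (n_a + n_b - 2))
    return (a.mean() - b.mean()) / s_pooled

# e.g. cohens_d(mep_sui, mep_no_sui) -> a value near 0.3 would be read as a
# small effect, consistent with the effect sizes reported in the study.
```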
The hypWe previously demonstrated that SUI was associated with a larger hiatal area during pregnancy, and with a more dorsal and caudal positioning of the bladder neck after childbirth . We suggAlthough statistically significant, the difference between the MEP in women with SUI and those without was small. This may be related to the moment of scanning, which was 6\u00a0months after delivery. We do not know how many women were breastfeeding or had returned to their normal menstrual cycle by that time point , 26. EstIn conclusion, women with SUI after delivery were shown to have a statistically significantly higher MEP than non-SUI women when the pelvic floor was at rest and in contraction, although the effect sizes were small. This higher MEP is indicative of a relatively higher intramuscular ECM component and could represent diminished contractile function."} +{"text": "Struma ovarii is a rare ovarian neoplasm that often appears malignant on conventional imaging. Pseudo-Meigs\u2019 syndrome with ascites, pleural effusion, and elevated serum CA 125 levels is much rarer and leads to misdiagnosis of ovarian cancer and unnecessary extended surgery.131I but no uptake of 18F\u2013FDG in the tumor, the preoperative diagnosis was struma ovarii with pseudo-Meigs\u2019 syndrome, which was confirmed histologically. She had no evidence of ascites and pleural effusion six months after surgery.A 50-year-old woman with abdominal distention and dyspnoea was referred to our hospital. Ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI) showed a polycystic ovarian tumor with a solid component, pleural effusion, and massive ascites with negative cytology. Her serum CA 125 level was 1237\u00a0U/ml, indicating the presence of ovarian cancer. Based on increased uptake of 131I scintigraphy and 18F\u2013FDG PET/CT in addition to conventional imaging modalities can provide the precise preoperative diagnosis of struma ovarii with pseudo-Meigs\u2019 syndrome mimicking ovarian cancer, leading to the appropriate treatment strategy.To date, there have been no systematic reviews focused on preoperative diagnosis with imaging modalities. The combination of This tumor often has the appearance of ovarian cancer with a solid component or thick septa in a polycystic tumor, although it is generally benign. Moreover, it can sometimes be associated with massive ascites and pleural effusion, called pseudo-Meigs\u2019 syndrome, leading to the preoperative misdiagnosis of malignancy. To date, preoperative diagnosis with conventional modalities including ultrasound (US), computed tomography (CT), or magnetic resonance imaging (MRI) has been attempted. However, most cases reviewed were misdiagnosed as advanced ovarian cancer, and some of them underwent unnecessary extended surgery. Thus, an optimal diagnostic strategy is needed. Herein, the successful preoperative diagnosis of struma ovarii with pseudo-Meigs\u2019 syndrome with the combination of A 50-year-old woman visited the hospital due to abdominal distension, anorexia, and exertional dyspnoea. She had been in good health and postmenopausal for 1 year. Physical examination showed a markedly distended abdomen. Abdominal US showed a pelvic mass and gross ascites. She was referred to our hospital for further examination and subsequent surgery.Transvaginal US showed the presence of marked ascites and a large solid and cystic mass with a diameter of 8\u00a0cm in the left ovary Fig.\u00a0. 
Chest XThe patient underwent whole-body FDG PET/CT to confirm malignancy and the presence of lymph node or distant metastases. However, there were no lesions with strong FDG uptake Fig.\u00a0. Moreove131I scintigraphy, which showed strong uptake in the normal thyroid and the pelvic mass (Fig. In this case, compared with the typical imaging appearance of epithelial ovarian cancer, the solid component of the pelvic mass had a higher attenuation lesion and calcifications on CT and a relatively smoother margin on MRI. Moreover, the cystic component had various intensities on T2-weighted images, suggesting the possibility of struma ovarii. Therefore, she underwent ass Fig. . There w3 that originated from the left ovary had no capsule rupture and no adhesions. The uterus and right adnexa were unremarkable. There were no enlarged lymph nodes or metastatic and disseminated lesions in the intraperitoneal organs. Left salpingo-oophorectomy was performed for intraoperative diagnosis. On frozen section examination of the left ovarian tumor, struma ovarii was reported. The patient underwent total abdominal hysterectomy and right salpingo-oophorectomy, because the patient and her family members had insisted on it before surgery.An exploratory laparotomy was performed for diagnostic and therapeutic purposes, and 3300\u00a0ml of yellow serous ascites were evacuated and obtained for cytology. The pelvic mass with dimensions of 10\u00a0\u00d7\u00a08\u00a0\u00d7\u00a07\u00a0cmGross examination revealed a whitish and partly yellowish encapsulated mass with a slightly irregular surface. The cut surface showed multiple cysts separated by thickened septa, including a serous and gelatinous yellowish fluid and solid components Fig.\u00a0. The patThe patient\u2019s postoperative course was uneventful. The pleural effusion disappeared 8 days after surgery. Serum CA 125 levels decreased to 288\u00a0U/ml 4 days after surgery and then returned to normal levels within 2 months. At 6 months, the patient remained disease-free without ascites and pleural effusion.Struma ovarii is a rare specialized form of ovarian teratoma, accounting for only 1% of ovarian neoplasms. More than 50% of the tumor is mature thyroid tissue, although only 5% of patients present with clinical hyperthyroidism . The othUS, CT, and MRI have been widely used for the differential diagnosis of ovarian tumor. Struma ovarii generally contains solid components or thickened septa in the cystic component, similar to ovarian malignancy. However, it also has some specific imaging features. On US, struma ovarii sometimes shows struma pearl with a smooth, roundish, solid area . On CT, In the review of the literature, the focus was on preoperative diagnosis using imaging modalities and struma ovarii with pseudo-Meigs\u2019 syndrome Table\u00a0 1, 9\u201320\u2013209\u201320 o99mTc-pertechnetate has been reported. In some cases, the presence of struma ovarii was unexpectedly suspected when thyroid scintigraphy or PET was performed in the follow-up of patients with thyroid cancer [131I scintigraphy to confirm the diagnosis. There have been no reports of the diagnosis of struma ovarii with pseudo-Meigs\u2019 syndrome by thyroid scintigraphy. 
Collectively, these reports suggest that thyroid scintigraphy can be useful to define the diagnosis even in struma ovarii with pseudo-Meigs\u2019 syndrome that mimics advanced ovarian cancer.For the differential diagnosis of struma ovarii from other ovarian tumors, the usefulness of thyroid scintigraphy or PET using iodine or d cancer \u201337. In od cancer \u201341. In od cancer . In the 131I for malignant struma ovarii has been reported [However, the differential diagnosis of ovarian tumor with positive thyroid scintigraphy should include malignant struma ovarii and ovarian metastasis from primary thyroid carcinoma. Malignant struma ovarii, defined as thyroid carcinoma arising in struma ovarii, occurs in 5\u201310% of cases with struma ovarii, and about 30% of cases with malignant struma ovarii have extraovarian spread , 44. In reported \u201348. Howereported . Ovarianreported \u201352. Thyr18F\u2013FDG as a tracer reflecting cellular metabolism, has been shown to be worth considering alongside conventional imaging modalities. FDG PET has been reported to offer low diagnostic value and have a limited role in differentiating between malignant and benign ovarian tumors due to low 18F\u2013FDG uptake, although FDG PET may provide additional diagnostic value for detecting lymph node or distant metastases and suspected recurrences in ovarian cancer [PET, particularly with n cancer . In the n cancer , 37. Hown cancer \u201347. Thern cancer . CollectIn summary, the combination of thyroid scintigraphy and FDG PET enabled successful preoperative diagnosis of struma ovarii with pseudo-Meigs\u2019 syndrome mimicking advanced ovarian cancer. These modalities can be useful to avoid unnecessary extended surgery and perform non-invasive surgery instead."} +{"text": "To identify pediatric caregivers' reactions in outpatient surgery settings.A quantitative descriptive/exploratory survey-based study involving application of a semi-structured questionnaire to 62 caregivers in two hospitals.Most caregivers (88.7%) were mothers who submitted to preoperative fasting with their children. Nervousness, anxiety and concern were the most common feelings reported by caregivers on the day of the surgery.Medical instructions regarding preoperative procedures had significant positive impacts on patient care, and on patient and caregiver stress levels. Pediatric outpatient surgical procedures are minor to intermediate surgical interventions requiring short-term admission and eliminating the need for patients sleeping at the hospital. These procedures are often performed under general anesthesia, with or without associated locoregional blocks; inhaled rapid elimination anesthetic agents are often employed for early hospital discharge.,3Eligible patients are prepared at home according to medical instructions. This is a stressful period for the patient's family; emotional involvement may affect peoples' behavior and interfere with solid judgement and common sense, with potential risks for the child and the caregivers. Pediatric patients must be accompanied by a responsible adult. 
Mothers often assume the caregiver role and are therefore more susceptible to stress, emotional strain and apprehension related issues.,3Clear explanation regarding pre- and postoperative care and potential surgical complications must be provided to patients and caregivers, preferably in written form; patients and caregivers should also be encouraged to voice any doubts or concerns.Caregivers are generally instructed on procedures and related risks, hospital location and hospitalization time, preoperative test requirements and pre-anesthetic assessment, estimated time and duration of surgery, preoperative fasting regimen, as well as reading and signing the informed consent forms for anesthesia and surgery, among other topics.Lay caregivers mothers in particular, tend to subscribe to the belief that children have low tolerance to fasting and that fasting involves high levels of suffering. Therefore, adherence to preoperative fasting guidelines is a major concern. The American Society of Anesthesiologists fasting guidelines are as follows: 4 hours for exclusively breastfed children, 6 hours for children aged 6 to 36 months feeding on nonhuman milk or infant formulas, and 8 hours for children aged over 36 months.Fasting time and composition of the last meal depend upon the type of anesthetic/surgical procedure and estimated time of surgery; instructions are given by the anesthesia team according to evidence-based and good medical practices. Still, professionals working in these settings report high levels of distress, with episodes of dizziness requiring caregivers to sit and take invigorant preparations.Studies investigating caregivers' reactions, particularly maternal reactions after temporarily delegating the care of their children to anesthesia care teams, are scarce.The hypothesis tested in this study was that caregivers, mothers in particular, adhere to patients fasting protocols and abstain from solid food and fluids in the preoperative period, eventually manifesting vagal or hypoglycemic episodes and experiencing similar levels of stress. Confirmation of this hypothesis would translate into specific recommendations for pediatric patients and respective caregivers, aimed to ensure their well-being.To identify pediatric caregivers' reactions in outpatient surgery settings.A quantitative, descriptive, exploratory study based on surveys carried out at two hospitals \u2013 a tertiary-care university hospital located in the city of S\u00e3o Paulo State of S\u00e3o Paulo, (SP) and a general hospital located in the city of Guarulhos State of S\u00e3o Paulo, (SP). Both hospitals had an outpatient surgery unit. Children operated between July and September 2015 were included.The sample in this study comprised 62 caregivers accompanying children scheduled for outpatient surgical interventions at either health care institution. Inclusion criteria were the ability to read the data collection instrument and agreeing to participate. Caregivers aged under 18 years, with comprehension difficulties or who did not accept signing the Informed Consent Term were excluded. were adopted, as follows: 0-1 year, infants; 2-5 years, preschool age; 6-9 years, school age; 10-14 years, early adolescence; 15-16 years, intermediate adolescence; 17-19 years, late adolescence. 
Caregiver-related variables investigated were kinship, age group, fasting time and use of medications, subjective feelings, and instructions received on surgical and anesthetic procedures, surgical and anesthetic risks.Data collection was based on a pilot questionnaire containing 21 semi-structured questions. The following child-related variables were investigated: sex, age group, surgical procedure and number of siblings. Age categories given by Costa et al.,Universidade de S\u00e3o Paulo, protocol number 1.072.766, CAAE: 45079515.9.0000.0065.Level of understanding and ease of data collection were preliminarily tested in four patients; the original questionnaire was then adapted accordingly. The questionnaire was filled out by caregivers upon hospital admission. All patients were admitted in the morning, regardless of estimated surgery time. Caregivers were requested to read and sign the Informed Consent Term. This project was approved by the Research Ethics Committee of the Descriptive statistics were used and data described as numbers, frequencies, means, medians and standard deviations. Statistical analysis was based on analysis of variance (ANOVA) and the Pearson correlation coefficient and p value. The level of significance was set at 5%.Most children in this study were male (62.9%); preschool age was the predominant (43.5%) age group. General pediatric surgical procedures were more common, with predominance of inguinal herniorrhaphy in males, and tonsillectomy in female children. Most children (58%) had at least one sibling Children were submitted to fasting according to medical instructions. However, fasting times were longer than required in most cases , regardless of child age.Data in Thirteen (21%) caregivers were on medications; however, 4 out of 13 (30.8%) did not follow medication prescriptions on the day of surgery.Instructions on preoperative procedures were given to 61 (98.4%) caregivers; 57 (91.9%), 41 (66.1%) and 31 (50%) out of 62 caregivers were informed about anesthetic protocols, surgical risks and anesthetic risks, respectively.All caregivers answered the question: \u201cHow do you feel about the child undergoing surgical procedures?\u201d More than one feeling was reported by 36 (58%) of interviewees. Concern was the prevailing feeling , followed by anxiety and nervousness . As regards the question \u201cHow do you feel now?\u201d, most interviewees reported to be feeling \u201cwell\u201d , \u201ctired\u201d or \u201chungry\u201d .No significant correlations were detected between caregiver age and feelings experienced during the preoperative phase, caregivers' feelings and child status (single child or not), or child and caregiver age.Conselho Federal de Medicina (CFM) [Federal Council of Medicine], describing outpatient surgery unit as type IV unit \u2013 \u201cunits next to general or specialty hospitals and destined for surgical procedures requiring short-term admission to surgical facilities, within outpatient wards or operating room, with access to medical support structure\u201d.The designation of pediatric outpatient surgical procedures adopted in this study is in compliance with Resolution no. 1,886/2008 of the Conjunto Hospitalar de Sorocaba, State of S\u00e3o Paulo (SP), However, patient profile differed between studies, with samples limited to elective outpatient procedures and including emergency procedures in this and the reference study, respectively. 
Therefore, in that study, circumcision in males, and appendectomy, in females, were the most common pediatric outpatient procedures.The male sex prevailed in the sample studied. Sex data are in agreement with results of epidemiological investigations carried out at the department pediatric surgery of Preoperative fasting is vital for surgical procedures, which may be cancelled for lack of compliance with this recommendation. This was not documented in this study. Surgery cancellation due to non-compliance with fasting instructions is less common compared to other surgery cancellation criteria; still, it can be easily avoided by providing clear guidelines on preoperative preparation requirements to patients and family members.This study revealed variable fasting times. Compliance with fasting instructions is a major factor in protection against vomiting and pulmonary aspiration at anesthetic induction.Prolonged fasting has negative impacts on surgical interventions. Along with avoidable stress-related effects, dehydration and hypoglycemia in response to solid food and fluid withdrawal are deleterious to postoperative recovery, leading to increased catabolism, increased heart rate and cardiac inotropy, and triggering catecholamine release and other effects on the sympathetic autonomic nervous system.Universidade Federal do Rio Grande do Sul, Porto Alegre (RS), revealed that pediatric fasting times often exceed 18 hours.This study was not specifically designed to investigate pediatric fasting times; still, high levels of compliance to prescribed fasting regimens were documented. However, many patients were fasted for a longer period than necessary, often for more than 12 hours. Excessive preoperative fasting is common and deserves closer attention from health care organizations; pediatric preoperative fasting studies developed at the Caregivers experience higher emotional intensity when dealing with surgical patients and related issues; survey data in this study confirmed the different kinds of feelings experienced by caregivers. The need for surgical treatment is a major factor affecting a family's routine and may interfere with family stability and caregiver emotional state and mood, potentially leading to varying levels of anxiety.Despite instructions to eat properly prior to hospital admission, caregivers often disregard recommendations and undergo long periods of fasting. Proper understanding of medical explanations regarding the patient's status, surgical procedure and preoperative requirements is vital for caregivers' preparation. Physicians must be able to gauge caregivers' level of understanding to provide clear information; technical terms are often misinterpreted and may lead to expectation mismatches, which have negative impacts on final outcomes. so as to promote enhanced participation on health care and related decision making. Effective patient and caregiver education must be achieved in the light of personal preferences, cultural and religious values; reading ability and language skills must also be taken into account.Physicians have a duty to educate patients and caregiversMost caregivers in charge of pediatric patients undergoing outpatient surgical procedures in this study were mothers. Caregivers often followed fasting instructions given to patients, submitting themselves to several hours of solid food and fluid withdrawal. 
\u201cConcern\u201d, \u201canxiety\u201d and \u201cnervousness\u201d were the most common feelings reported.Longer preoperative fasting time than necessary was a major incidental finding in this study and motivated further research. The topic is currently being investigated at the nutrition department of one of the hospitals involved in this trial.Prevention of unnecessary fasting by caregivers and caregiver education aiming at stress reduction deserve closer attention."} +{"text": "Individualized cerebral perfusion pressure (CPP) targets may be derived via assessing the minimum of the parabolic relationship between an index of cerebrovascular reactivity and CPP. This minimum is termed the optimal CPP (CPPopt), and literature suggests that the further away CPP is from CPPopt, the worse is clinical outcome in adult traumatic brain injury (TBI). Typically, CPPopt estimation is based on intracranial pressure (ICP)-derived cerebrovascular reactivity indices, given ICP is commonly measured and provides continuous long duration data streams. The goal of this study is to describe for the first time the application of robotic transcranial Doppler (TCD) and the feasibility of determining CPPopt based on TCD autoregulation indices.The online version of this article (10.1007/s00701-018-3687-5) contains supplementary material, which is available to authorized users. Continuous monitoring of cerebrovascular reactivity in traumatic brain injury (TBI) is becoming increasingly common in the multi-modal monitoring (MMM) of critically ill patients , 11, 16.Numerous other continuous indices of cerebrovascular reactivity exist in the TBI literature , 32, derhttps://www.compumedics.com.au/diagnostic-solution/transcranial-doppler/), Delica , and Pulse Medical . However, with advancement in robotic TCD technology, it is possible to obtain relatively uninterrupted, extended duration recordings, allowing for the ability to assess CPPopt. In this article, we present a descriptive analysis of the first attempts at estimating CPPopt in critically ill adult TBI patients using extended duration recordings obtained from robotic TCD.However, despite the success with CPPopt determination using ICP indices , 19, it This was a prospective observational study conducted over a 6-month period within our unit, during which we obtained a robotic TCD unit on trial. All patients suffered from moderate to severe TBI and were admitted to the neurosciences critical care unit (NCCU) at Addenbrooke\u2019s Hospital, Cambridge, during the period of November 2017 to May 2018. Patients were intubated and sedated given the severity of their TBI. Invasive ICP monitoring was conducted in accordance with the Brain Trauma Foundation (BTF) guidelines. Therapeutic measures were directed at maintaining ICP less than 20\u00a0mmHg and CPP greater than 60\u00a0mmHg.TCD is a part of standard intermittent cerebral monitoring within the NCCU. The application of the newer robotic TCD device were therefore in alignment with our usual care, negating the need for formal direct or proxy consent. All data related to patient admission demographics and high frequency digital signals from monitoring devices were collected in an entirely anonymous format, negating the need for formal consent, as in accordance with institutional research committee policies. Given limitations with the device, as outlined in a previous publication , not allVarious signals were obtained through a combination of invasive and non-invasive methods. 
Arterial blood pressure (ABP) was obtained through either radial or femoral arterial lines connected to pressure transducers . ICP was acquired via an intra-parenchymal strain gauge probe . Zeroing of the arterial line occurred at the level of the tragus during the course of this study.www.delicasz.com/html/en). This system allows for continuous extended duration recording of MCA CBFV, using 1.6\u00a0MHz robotically controlled TCD probes, with automated correction algorithms for probe shift. We aimed to record 3 to 4\u00a0h of continuous data from all devices simultaneously, given the previous work from our group on inter-index relationships focused on recording durations of only 0.5- to 1-h duration due to limitations of conventional TCD [Finally, TCD assessment of MCA CBFV was conducted via a robotic TCD system, the Delica EMS 9D . Signal artifact was removed using a combination of manual and semi-automated methods within ICM+ prior to further processing or analysis.Signals were recorded using digital data transfer, with sampling frequency of 100\u00a0Hz, using ICM+ software was determined by calculating the maximum flow velocity (FV) over a 1.5\u00a0s window, updated every second. Diastolic flow velocity (FVd) was calculated using the minimum FV over a 1.5\u00a0s window, updated every second. Mean flow velocity (FVm) was calculated using average FV over a 10\u00a0s window, updated every 10\u00a0s . Pulse amplitude of ICP (AMP) was determined by calculating the fundamental Fourier amplitude of the ICP pulse waveforms over a 10\u00a0s window, updated every 10\u00a0s.Ten second moving averages (updated every 10\u00a0s to avoid data overlap) were calculated for all recorded signals: ICP, ABP (which produced MAP), CPP, FVm, FVs, andFVd. These non-overlapping 10-s moving average values allow focus on slow-wave fluctuations in signals by decimating the signal frequency to ~\u00a00.1\u00a0Hz.Autoregulation indices were derived in a similar fashion across modalities; an example is provided for PRx: A moving Pearson correlation coefficient was calculated between ICP and MAP using 30 consecutive 10\u00a0s windows , updated every minute. Details on each index calculation can be found in Table Data for this analysis were provided in the form of a minute by minute time trends of the parameters of interest for each patient. This was extracted from ICM+ in to comma separated values (CSV) datasets, which were collated into one continuous data sheet .https://www.R-project.org/) was utilized for post processing of ICM+ data outputs, producing population wide binned error bar plots of various cerebrovascular reactivity indices versus CPP, to highlight the population-based parabolic relationships between both ICP and TCD indices with CPP. Mean index values were calculated across 5\u00a0mmHg bins of CPP for the entire population.R statistical software (R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL Finally, ICM+ was used to produce individual patient CPPopt plots for the purpose of examining feasibility of TCD-based CPPopt estimation in patients with extended duration uninterrupted recordings .During the 6-month trial period for the robotic TCD device, we were able to record 20 critically ill adult TBI patients. Due to limitations imposed by the robotic probe design there are certain contra-indications that prevented its application in some critically ill TBI patients. 
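The PRx derivation described above (non-overlapping 10-second means of ICP and mean ABP, then a moving Pearson correlation over 30 consecutive means, updated every minute) can be sketched as follows. This is a simplified reconstruction for illustration only; the actual calculation was performed within ICM+, and the input arrays, sampling rate, and artifact handling here are assumptions.

```python
import numpy as np

def prx(icp, abp, fs=100, win_s=10, n_windows=30, step_s=60):
    """Pressure reactivity index (PRx), as described in the text.

    icp, abp : 1-D arrays sampled at fs Hz (assumed already artifact-cleaned).
    Returns roughly one PRx value per minute: the Pearson correlation between
    30 consecutive non-overlapping 10-s means of ICP and mean ABP.
    """
    win = int(win_s * fs)
    n = min(len(icp), len(abp)) // win

    # Non-overlapping 10-s means (decimates the signals to ~0.1 Hz,
    # focusing on slow-wave fluctuations).
    icp_m = icp[: n * win].reshape(n, win).mean(axis=1)
    abp_m = abp[: n * win].reshape(n, win).mean(axis=1)

    step = step_s // win_s  # update every minute = shift by six 10-s means
    values = []
    for start in range(0, n - n_windows + 1, step):
        i = icp_m[start:start + n_windows]
        a = abp_m[start:start + n_windows]
        values.append(np.corrcoef(i, a)[0, 1])
    return np.array(values)
```

The TCD-based indices in Table 1 follow the same recipe, with flow-velocity means substituted for ICP or paired against CPP instead of MAP.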
These include decompressive craniectomy, extensive soft tissue damage to the scalp, and unstable cervical spine . Patient demographics can be seen in Appendix A. Overall, the mean age was 42.6\u2009\u00b1\u200917.6\u00a0years, with 12 patients being male, and a median admission Glasgow Coma Scale score of 7 (inter-quartile range (IQR): 5 to 8). The mean duration of TCD recording was 224.8 \u00b1\u200940.2\u00a0min. Though it is acknowledged, not all recordings were completely uninterrupted, given the need for urgent scans and bedside nursing requests for probe removal in 10 patients.In five patients with the longest duration of uninterrupted recordings , it was possible to estimate CPPopt via plotting mean cerebrovascular reactivity index versus CPP. Figure Using the entire 20 patients recorded, we produced various binned error bar plots of ICP and TCD cerebrovascular reactivity indices across 5\u00a0mmHg bins of CPP. Figure Our simple descriptive analysis of CPPopt estimation using extended duration TCD recordings obtained from robotic TCD highlights what is possible with advanced TCD technology when applied to monitoring critically ill TBI patients. We have demonstrated the feasibility of TCD-based CPPopt estimation, and that these TCD-based cerebrovascular reactivity indices do indeed have a parabolic relationship with CPP, as seen classically with ICP-based indices , 14, 17,Another important aspect is that CPPopt is not a validated CPP target in TBI, as of yet. The concept of CPPopt directed therapy in moderate/severe TBI has not been validated in large prospective randomized control trials. However, numerous retrospective studies have been published assessing the relationship between CPP values outside of the CPPopt range, and global outcome , 19, 24.Given the recorded population consisted of only a small pilot group during the trial period for this device, the ability to extrapolate the results of this study to other populations is limited. This small population is secondary to current technology limitations (as described in the results) and ongoing user presence during recordings. Thus, with this technology, TCD becomes less involved, however, it\u00a0is still somewhat labor intensive compared to other monitoring devices employed in critically ill TBI patients.One may question the use of TCD in the presence of existing ICP monitoring, particularly when it comes to assessing autoregulation/vascular reactivity. However, ICP, as used in PRx calculations, has some conceptual problems when assessing cerebrovascular reactivity. First, ICP is considered a surrogate for changes in pulsatile cerebral blood volume (CBV) and relies on the pressure-volume relationship. TCD is a close direct measure to cerebral blood flow (CBF) and likely provides more useful information for assessment of autoregulation. Second, PRx does not measure autoregulation directly; it is an index of vasodilation/constriction in response to CPP changes, whereas TCD flow velocity reflects changes in CBF, which is what we are really interested in. Finally, ICP is a regional measure that is approximated to global compartmental reading. With TCD, one can obtain territory specific information and assess hemispheric asymmetry.The concept of TCD-based CPPopt may be of question as well, since CPP currently requires ICP to obtain. 
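A minimal sketch of the CPPopt estimation underlying the plots described above: index values are averaged within 5 mmHg CPP bins and a parabola is fitted to the bin means, with the CPP at the fitted minimum taken as CPPopt. This bare-bones illustration omits the weighting, window selection, and curve-acceptance criteria applied by ICM+, and assumes minute-by-minute trend arrays as input.

```python
import numpy as np

def estimate_cppopt(cpp, index, bin_width=5.0, lo=40.0, hi=120.0):
    """Estimate CPPopt from minute-by-minute CPP and reactivity-index trends.

    Bins the index into 5 mmHg CPP bins, fits a quadratic to the bin means,
    and returns the CPP at the fitted minimum if it lies within range.
    """
    edges = np.arange(lo, hi + bin_width, bin_width)
    centres, means = [], []
    for left, right in zip(edges[:-1], edges[1:]):
        mask = (cpp >= left) & (cpp < right)
        if mask.sum() >= 5:                      # require some data per bin
            centres.append((left + right) / 2.0)
            means.append(index[mask].mean())
    if len(centres) < 3:
        return None                              # not enough bins for a fit

    a, b, c = np.polyfit(centres, means, deg=2)  # index ~ a*CPP^2 + b*CPP + c
    if a <= 0:
        return None                              # no upward-opening parabola
    cppopt = -b / (2.0 * a)                      # vertex of the parabola
    return cppopt if lo <= cppopt <= hi else None
```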
However, efforts are ongoing for non-invasive TCD-based methods of CPP measurement , 26, 27.With the application of robotic TCD technology, it is possible to obtain extended duration recordings in critically ill TBI patients, allowing for the approximation of CPPopt using TCD-based cerebrovascular reactivity indices. As robotic TCD technology continues to advance, further long-term recording is becoming possible, with minimal user input, allowing for inclusion of continuous, uninterrupted, TCD monitoring into the standard set of neuromonitoring modalities in TBI patients.ESM 1(DOCX 13\u00a0kb)ESM 2(DOCX 111\u00a0kb)"} +{"text": "Mental health and substance use disorders are the leading causes of global disability in children and youth. Both tend to first onset or escalate in adolescence and young adulthood, calling for effective prevention during this time. The Climate Schools Combined (CSC) study was the first trial of a Web-based combined universal approach, delivered through school classes, to prevent both mental health and substance use problems in adolescence. There is also limited evidence for the cost-effectiveness of school-based prevention programs.The aim of this protocol paper is to describe the CSC follow-up study, which aims to determine the long-term efficacy and cost-effectiveness of the CSC prevention program for depression, anxiety, and substance use up to 7 years post intervention.Climate Substance Use , (3) Climate Mental Health , or (4) CSC . It was hypothesized that the CSC program would be more effective than conditions (1) to (3) in reducing alcohol and cannabis use (and related harms), anxiety, and depression symptoms as well as increasing knowledge related to alcohol, cannabis, anxiety, and depression. This long-term study will invite follow-up participants to complete 3 additional Web-based assessments at approximately 5, 6, and 7 years post baseline using multiple sources of locator information already provided to the research team. The primary outcomes include alcohol and cannabis use (and related harms) and mental health symptoms. An economic evaluation of the program will also be conducted using both data linkage as well as self-report resource use and quality of life measures. Secondary outcomes include self-efficacy, social networks, peer substance use, emotion regulation, and perfectionism. Analyses will be conducted using multilevel mixed-effects models within an intention-to-treat framework.A cluster randomized controlled trial (the CSC study) was conducted with 6386 participants aged approximately 13.5 years at baseline from 2014 to 2016. Participating schools were randomized to 1 of 4 conditions: (1) control , (2) The CSC long-term follow-up study is funded from 2018 to 2022 by the Australian National Health and Medical Research Council (APP1143555). The first follow-up wave commences in August 2018, and the results are expected to be submitted for publication in 2022.This is the first study to provide a long-term evaluation of combined universal substance use and mental health prevention up to 7 years post intervention. 
Evidence of sustained benefits into early adulthood would provide a scalable, easy-to-implement prevention strategy with the potential for widespread dissemination to reduce the considerable harms, burden of disease, injury, and social costs associated with youth substance use and mental disorders.PRR1-10.2196/11372 Mental health and substance use disorders are the leading causes of global disability, accounting for 25% of total disability in children and youth . Every yEpidemiological studies show that between 40% and 50% To halt the escalation and associated burden of disease, prevention efforts need to be commenced before the onset and acceleration of substance use, depressive, and anxiety symptoms into well-established patterns and disorders. Adolescence is a key time to do this. School-based programs have been shown to reduce both substance use and depression and anxiety symptoms -22. HoweFew studies have examined the effectiveness of prevention approaches for substance use, depression, and anxiety beyond secondary school. There is limited evidence from studies in the United States ,26 that CSC intervention; (2) Climate Schools Substance Use; (3) Climate Schools Mental Health, or (4) Control . Participants allocated to the CSC intervention received 18 \u00d7 40-min classroom lessons focused on depression, anxiety, alcohol, and cannabis. Each lesson includes both computer-based and manualized classroom activities. The computer-based component is delivered on the Web to individual students who log on to view cartoon storylines that impart information about anxiety and depressive symptoms, alcohol, and cannabis. The classroom activities are delivered by the teacher and aim to reinforce the learning outcomes outlined in the cartoons and allow interactive communication between students. These lessons adopt a harm minimization approach in relation to substance use and utilize cognitive behavioral skills and strategies to assist students in identifying and reducing problematic mental health symptoms. Those allocated to Climate Schools Substance Use intervention received 12 \u00d7 40-min lessons focused on alcohol and cannabis use, those allocated to the Climate Schools Mental Health intervention received 6 \u00d7 40-min lessons focused on anxiety and depression, whereas those in the control condition received health education as usual. Further details about the intervention components and groups have been previously reported [CSC intervention) targeting depression, anxiety, and substance use in reducing the onset and escalation of mental health symptoms, substance use and related harms, and increasing knowledge in relation to these issues. A total of 71 schools and 6386 students aged 13 to 14 years at baseline participated in the trial. Although the initial phase of the study did not specifically aim to test the cost-effectiveness of the intervention, resource use questions used in cost-effectiveness analysis were included in the study from baseline (2014).The Climate School Combined (CSC) study commenced in 2014 as the first randomized controlled trial (RCT) of a combined approach to preventing depression, anxiety, and substance misuse in adolescence . This streported . The priAt ages 17 to 18 years, the CSC trial cohort is now nearing early adulthood in 2018. 
This transition, from adolescence to early adulthood, represents a unique developmental period characterized by numerous personal and social role changes including new social relationships and living arrangements, increased financial and social independence, and pursuit of employment and/or higher education. Along with increased exposure to alcohol and cannabis during this period, mental health symptoms often become more pronounced with the onset of new challenges, increased autonomy, and formation of new friendship circles. A review of longitudinal epidemiological studies focusing on the transition from adolescence to young adulthood found that rates of any mental or substance use disorder more than doubled, as did the use of illicit drugs . SpecifiDespite evidence demonstrating that school-based prevention efforts can interrupt the trajectory of growth in substance use and mental health symptoms during adolescence ,22,37, vThe CSC long-term follow-up study will be the first in the world to examine the long-term effectiveness of a combined approach to the universal prevention of anxiety, depression, and substance use disorders delivered on the Web. It will extend the follow-up of the existing CSC cohort by an additional 3 time points . There is limited evidence to suggest that when delivered in isolation, mental health and substance use prevention programs have secondary benefits on comorbid conditions ,42. ThisThere is limited evidence demonstrating the value for money of school-based programs to prevent depression, anxiety, and minimize substance abuse. An economic evaluation of school-based programs to prevent depression in adolescents aged 11 to 17 years demonstrated that both were cost-effective in an Australian context . HoweverThe study will conduct a long-term (7-year) follow-up of the first RCT of a combined Web-based substance use and mental health prevention approach addressing the following research questions:Climate Substance Use), (2) universal mental health prevention , and (3) education as usual (control condition) for:RQ1: Is the combined approach used in the CSC program more effective in the long-term across the transition into early adulthood (ages 18-21 years) compared with: (1) universal substance use prevention will be cost-effective compared with (1) school-based prevention as usual, (2) stand-alone universal school-based substance use prevention, and (3) stand-alone anxiety and depression prevention, where Aus $50,000 per quality-adjusted life year is taken as the benchmark for cost-effectiveness in Australia.The study was approved by the University of Sydney Human Research Ethics Committee, Australia (2018/906), and all participants provided informed consent to participate in the original CSC study. All participants will provide additional informed consent before participating in further follow-up surveys.Climate Substance Use , (3) Climate Mental Health , or (4) CSC . Blocked randomization was used, allocating schools to the 4 conditions in equal ratios in blocks of 4. The CONSORT diagram . The final cohort at baseline consisted of 6386 year 8 students from 71 schools . Participating schools were randomized to 1 of 4 conditions: (1) Control , (2) gram see summarizThe CSC long-term follow-up study will extend data collection up to 7 years post baseline. 
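A minimal sketch of the blocked allocation described above (schools assigned to the 4 conditions in equal ratios within blocks of 4) is shown below; the school identifiers, seed, and use of Python's random module are illustrative assumptions, not the trial's actual randomization procedure.

```python
import random

CONDITIONS = ["Control", "Climate Substance Use", "Climate Mental Health", "CSC"]

def blocked_allocation(school_ids, block_size=4, seed=2014):
    """Assign schools to the four conditions in equal ratios within each block."""
    rng = random.Random(seed)
    allocation = {}
    for start in range(0, len(school_ids), block_size):
        block = school_ids[start:start + block_size]
        # Randomly order the conditions within the block (a real scheme would
        # handle a final short block according to its own pre-specified rule).
        assigned = rng.sample(CONDITIONS, k=len(block))
        allocation.update(dict(zip(block, assigned)))
    return allocation

# Illustrative use with hypothetical identifiers for 71 participating schools:
schools = [f"school_{i:02d}" for i in range(1, 72)]
print(blocked_allocation(schools)["school_01"])
```

Shuffling the four conditions within each block of four keeps the arms balanced in size as schools are enrolled.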
Using multiple sources of locator information already provided to the research team , all participants will be invited to consent to take part in the long-term follow-up and then complete 3 Web-based assessments at approximately 5, 6, and 7 years post baseline. Participants in the state of Queensland complete school 1 year earlier than participants in New South Wales and Western Australia. To collect data from Queensland participants in their first year post school, follow-up will commence in Queensland from August 2018 to January 2019, whereas data collection will run from January 2019 to June 2019 in New South Wales and Western Australia. Participants will consent to take part in the longitudinal follow-up study and provide additional consent to release their Medicare Benefits Schedule and Pharmaceutical Benefits Scheme information to the research team. Subsequent contact with students will be made via email invitation or via school with reminder emails and texts sent once a week for 3 weeks. Those who cannot be reached via email will be contacted via alternative forms of locator information, including short messaging service (SMS) and social media. If no response is received, participants will be followed up via phone calls, and paper surveys will be mailed to their home address. Participants will be contacted via the locator information provided until a response is received.\u201cParticipants will be directed to the CSC website through a personalized URL to complete written consent procedures and complete the survey (approximately 30-45 min in duration) on the Web. Responses will be deidentified and linked over time using a unique identification code. Participants will be reimbursed Aus $20 in the form of a gift voucher for each survey occasion they complete. A duty of care procedure has been developed and approved by the University of New South Wales Human Research Ethics Committee and will be followed if a participant self-identifies as at risk of harm during the study. This includes automatic emails to participants with detailed information about support services if their response indicates they are at risk of harm.and in each of the 3 states of Australia where recruitment took place. These calculations accounted for 10% dropout at the school level and indicated that 2800 students recruited from 28 schools in each state would achieve 80% power to detect a between-group mean difference of 0.15 (at the P<.05 level) with 7 measurement occasions. In our original study, we achieved a total sample size of 6386 students. Although this is not sufficient to do analyses at the state level, the total sample size is more than sufficient and far surpasses the 2800 required to detect the expected differences across the whole sample. As initiation and frequency of substance use as well as levels of depression and anxiety increase over the transition to early adulthood [Climate Substance Use (d=0.15), Climate Mental Health (d=0.15), and the CSC intervention (d=0.2). Power calculations based on the obtained sample, where there were at least 16 schools, and an average of at least 80 students per school in each intervention group show that the power to detect an effect size of d=0.15 at the final long-term follow-up would be >90%.Participants for this study come from 6386 students from 71 schools recruited to the original CSC study. 
Power calculations for the original trial were based on methods developed to detect intervention by time interactions in longitudinal cluster RCTs and ensudulthood , larger Where possible, measures have remained consistent from the original CSC study to the long-term follow-up study. Some measures have been amended or updated to be age appropriate as participants transition out of school. Details of all included measures in the long-term follow-up study are outlined below.Demographic data including gender, age, country of birth, truancy rates, and academic performance were obtained at baseline to determine the equivalence of groups. All follow-up outcomes will be assessed by validated self-report measures, which have been shown to be valid and reliable in adolescent populations -55.Climate Schools trials [Drinking behaviors in the past 6 months will be assessed using an adapted version of the Patterns of Alcohol index . Particis trials -59 and as trials . This 16s trials . Items cCannabis use will be assessed by 4 items from the National Drug and Alcohol Strategy Household Survey (NDSHS) . Items wA total of 6 items from the NDSHS , allowinPsychological distress in the past month will be assessed by the Kessler 6 scale and the Participants\u2019 will be consented for access to Medicare Benefits Schedule and Pharmaceutical Benefits Scheme data providing detailed information on the number and cost of contacts with health care professionals and prescription medications reimbursed through these commonwealth-funded plans. These data will be obtained from the Department of Human Services for up to a 4.5-year period from the date of extraction. A retrospective 12-month questionnaire will also be used to capture resource use outside of Medicare Benefits Schedule and Pharmaceutical Benefits Scheme data, in addition to capturing some overlapping data for those participants who may not agree to this data access. The resource use questionnaire was adapted from the Client Services Receipt Inventory and willHealth outcomes will be assessed by the Child Heath Utility-9D , a pediaAll participants completing Web versions of the survey will also complete a social networks survey at each time point consisting of questions adapted from O'Malley et al and Lau-Other secondary measures that will be administered include the following: (1) Bandura\u2019s Resistive Self-Regulatory Efficacy Scale ,76 will Intention-to-treat analyses will be carried out for all primary and secondary outcomes in the trial, including all participants in the groups they were initially randomized to. Multilevel mixed-effects regression models will be used to assess these outcomes. Where appropriate, generalized mixed-effects models will be applied, for example, using logistic regression for dichotomous outcomes.Multilevel models are able to account for the clustered design of the trial by taking into account the expected correlations between the multiple observations of each participant and between participants in the same school . MultileModels will include dummy-coded intervention terms that compare each intervention with the reference control group and time terms reflecting the survey occasion, along with covariates such as gender to adjust for possible confounding. The effects of greatest interest for assessing the effectiveness of the interventions are intervention \u00d7 time terms that provide baseline-adjusted estimates of how each intervention group has changed relative to control. 
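A minimal sketch of a model of this form is shown below, using statsmodels as an assumed tool; the file name, column names, and the school-plus-student random-effects structure are illustrative assumptions rather than the study's analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per survey occasion,
# with columns school_id, student_id, condition, time, gender and an outcome score.
df = pd.read_csv("csc_long_format.csv")

# Intervention x time interactions carry the baseline-adjusted change of each
# intervention group relative to the control group; gender enters as a covariate.
model = smf.mixedlm(
    "outcome ~ C(condition, Treatment('Control')) * time + C(gender)",
    data=df,
    groups="school_id",                           # random intercept for school
    re_formula="1",
    vc_formula={"student": "0 + C(student_id)"},  # student-level variance component
)
result = model.fit(reml=True)
print(result.summary())
```

Dichotomous outcomes would swap this linear mixed model for a generalized (e.g., logistic) mixed model, as noted above.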
Interpretable measures of effect size such as odds ratios and standardized mean differences will be calculated for all effects as well their accompanying CIs.Given that some outcome data are expected to be missing due to loss to follow-up, the analysis must also account for missing data. As mixed-effects models employ maximum likelihood estimation, they produce unbiased estimates when missing data can be assumed to be either missing completely at random or missing at random and are Climate Substance Use intervention, Climate Mental Health intervention, and standard education received by the control group [CSC versus Control, CSC versus Climate Mental Health, and CSC versus Climate Substance Use including all participants allocated to each of these intervention groups.The primary aims of the original CSC trial were to assess the efficacy of the combined CSC intervention in comparison with the stand-alone ol group . TherefoThe cost to deliver each intervention will be combined with the additional resources used by participants over the follow-up period to calculate total costs from the Australian health sector and societal perspectives as recommended by current guidelines . InterveTo explore possible mechanisms for the interventions\u2019 effectiveness, planned moderation analyses will be conducted to examine whether measures of baseline risk moderate the intervention effects. Baseline measures of risk will be investigated in relation to alcohol and other substance use, harms related to substance use, and mental health symptoms.The CSC long-term follow-up study is funded from 2018 to 2022 by the Australian National Health and Medical Research Council (APP1143555). The first follow-up wave commences in August 2018, and the results are expected to be submitted for publication in 2022.This paper outlines the study protocol and design of an extended long-term follow-up of the CSC study cohort into late adolescence and early adulthood. The study aims to (1) examine the long-term effectiveness of a combined universal mental health and substance use program (CSC program) in preventing substance use (and related harms) and reducing mental health symptoms up to 7 years post baseline and (2) evaluate the cost-effectiveness of the program over the long term. In addition, we will explore intervention effects on secondary outcomes including self-efficacy, social networks, peer substance use, emotion regulation, and perfectionism into young adulthood as well as key mediators and moderators of intervention effects.This study will address a significant gap in knowledge by determining for the first time the longevity of school-based universal prevention for substance use and mental health delivered via the Web into young adulthood as well as conducting 1 of the first cost-effectiveness studies of Web-based prevention for mental health and substance use up to 7 years post baseline. Furthermore, this will be the first study to examine unique effects of combining substance use and mental health prevention over the long term. As with the original CSC study, 2 key limitations of the study are participant attrition and reliance on self-report for the majority of measures. Although follow-up rates for the original CSC study remained relatively high across survey waves (ranging from 67% to 88%), it is anticipated that the addition of a new round of consent and participants transitioning from school to postschool environments in this study will present additional challenges and increase study attrition. 
Anticipated barriers include incomplete and changing contact details, participant relocation , and a lack of follow-up support from teachers as participants complete school. To aid in participant follow-up, a set of detailed follow-up strategies will be developed, including a procedure using a wide range of mediums to contact participants , obtaining contact details from one other person who is likely to know how to contact the participant should their contact details change, and adequately reimbursing participants for their time (Aus $20 reimbursement). Reliance on self-report data for the majority of collected measures may introduce bias related to social desirability, particularly in relation to illegal or risky behaviors such as drug use. Nonetheless, self-reported substance use has been shown to be both reliable and valid ,54, espeHarms relating to early substance use and development of mental health problems are a serious concern, and the transition into early adulthood represents a key risk period. Despite this, very little is currently known about the effectiveness of school-based prevention programs beyond school age. This study addresses a critical knowledge gap and will indicate if prevention approaches for anxiety, depression, and substance use can have lasting effects. Furthermore, this study will provide a critical economic evaluation of the long-term effects of a combined universal approach to prevent substance use and mental health problems among young people. This knowledge is vital to inform policy both nationally and internationally as economic modeling suggests substantial societal benefit can be gained from even modest reductions in substance use and mental health ,50,86. E"} +{"text": "The interaction of tumor necrosis factor-like weak inducer of apoptosis (TWEAK) and its receptor fibroblast growth factor inducible 14 (Fn14) participates in inflammatory responses, fibrosis, and tissue remodeling, which are central in the repair processes of wounds. Fn14 is expressed in main skin cells including dermal fibroblasts. This study was designed to explore the therapeutic effect of TWEAK on experimental burn wounds and the relevant mechanism underlying such function. Third-degree burns were introduced in two BALB/c mouse strains. Recombinant TWEAK was administrated topically, followed by the evaluation of wound areas and histologic changes. Accordingly, the downstream cytokines, inflammatory cell infiltration, and extracellular matrix synthesis were examined in lesional tissue. Moreover, the differentiation markers were analyzed in cultured human dermal fibroblasts upon TWEAK stimulation. The results showed that topical TWEAK accelerated the healing of burn wounds in wild-type mice but not in Fn14-deficient mice. TWEAK strengthened inflammatory cell infiltration, and exaggerated the production of growth factor and extracellular matrix components in wound areas of wild-type mice. Moreover, TWEAK/Fn14 activation elevated the expression of myofibroblastic differentiation markers, including alpha-smooth muscle actin and palladin, in cultured dermal fibroblasts. Therefore, topical TWEAK exhibits therapeutic effect on experimental burn wounds through favoring regional inflammation, cytokine production, and extracellular matrix synthesis. TWEAK/Fn14 activation induces the myofibroblastic differentiation of dermal fibroblasts, partially contributing to the healing of burn wounds. Burn injuries, especially thermal burns, are frequently observed in the hospital setting. 
Burn wound repair is a dynamic process with overlapping phases, including initial inflammatory and subsequent proliferative phases. A period of tissue regeneration consists of epithelialization, angiogenesis and collagen accumulation in a remodeling process to restore the tissue . In thirTumor necrosis factor-like weak inducer of apoptosis (TWEAK) is a regulator of proinflammatory cytokines, and acts through binding to its receptor fibroblast growth factor-inducible 14 (Fn14). TWEAK is mainly produced by immune cells such as macrophages that infiltrate in inflamed tissue . Fn14 isRecently, it was found that moderate TWEAK/Fn14 signals exhibit a protective role in cardiac wound repair by promoting myogenesis and angiogenesis . Also, wAs described previously , Fn14-deBurn wounds were created in mice as previously described . The micThese mice were randomly divided into three groups, with 5 mice in each group. The blank group received no further treatment. The NaCl and TWEAK groups received daily topical administration of normal saline or recombinant murine TWEAK , respectively.The mice were then sacrificed on days 0, 3, 7, 14, and 21, followed by the harvest of skin tissues, which were identical to the original burned areas. The harvested skin tissue was equally divided into four parts for further experiments. The Hospital Research Ethics Committee approved all mouse protocols in this study (No. 2016028).Some tissue was routinely processed for paraffin sections. After deparaffinization and rehydration, immunohistochemistry was performed as described previously . The blo2 area was also counted on these sections. The wound area and histological evaluation were detailed in Supplementary Table Some sections were stained with hematoxylin-eosin or Masson\u2019s trichrome solution. The epidermal thickness was measured with H&E-stained sections. The number of appendage-like structures per mmPrimary human dermal fibroblasts were purchased from Life Technologies Co. and were cultured in medium 106 supplemented with low serum growth supplement (Life Technologies). Some cells were transfected with control (#AM4611) or Fn14 (#135142) siRNA (Life Technologies) . The traPrior to TWEAK stimulation, the cells were starved in 2% fetal bovine serum-supplemented medium for 24 h. The cells were stimulated with human recombinant TWEAK . Some cells were pretreated with the specific inhibitors of the NF-\u03baB , Wnt/\u03b2-catenin , EGFR , p38 mitogen-activated protein kinase (MAPK) and Smad3 signaling pathways at 24 h before TWEAK stimulation.The cells growing on a glass-bottomed culture dish were fixed with cold acetone. Cells were then incubated with Alexa Fluor 488-conjugated rabbit IgG targeting alpha-smooth muscle actin (\u03b1-SMA) . Rabbit anti-palladin IgG and Alexa Fluor 647-conjugated goat anti-rabbit IgG were used to detect palladin expression . After 4\u2032,6-diamidino-2-phenylindole incubation, the cells were observed under a digital confocal microscope .Alexa Fluor 488-conjugated rabbit IgG targeting \u03b1-SMA (2 \u03bcg/ml) was also used for flow cytometry. Similarly, rabbit anti-palladin IgG and Alexa Fluor 647-conjugated goat anti-rabbit IgG were used to detect palladin expression (2 \u03bcg/ml). Flow cytometry was performed by using an LSRII instrument . Data were analyzed using a FlowJo7.6.1 software .Total RNA was extracted from fresh tissues or cell cultures by a PureLink RNA kit . cDNA was prepared by using a commercial cDNA kit . qRT-PCR was carried out as described previously . 
The priProtein lysates were routinely extracted from fresh tissues or cell cultures. Western blotting was performed as described previously . Rabbit t-test was used for the comparison of two groups only. Differences were considered significant at p < 0.05.All data are expressed as the means \u00b1 standard error of the mean (SEM). Statistical analysis was performed using GraphPad Prism version 5.0 . Analysis of variance was used for the comparison of more than two groups of variables. On the other hand, two-tailed Student\u2019s p < 0.05) . By immunohistochemistry, the expressions of TWEAK and Fn14 were also stronger in skin since day 3 . We further explored the effect of exogenous TWEAK on wound healing in this murine model. The skin lesion in wild-type mice was treated topically with recombinant TWEAK . Surprisingly, the wounds in the TWEAK-treated group healed at a faster rate than that in the blank or normal saline controls . The TWEAK-treated group had less wound areas than the controls on days 3, 7, and 14 (p < 0.05). No significant difference was found between the two controls at any time point (p > 0.05). All burn wounds healed completely on day 21, with hairs growing unevenly in original area . Furthermore, on day 14, the TWEAK-treated mice exhibited less epidermal thickness but had more appendage-like structures than the controls (p < 0.05) .Firstly, the expression levels of TWEAK and Fn14 were determined in lesional regions of the wild-type mice. By both qRT-PCR and Western blotting, it showed that their expression levels increased significantly after wound creation (p > 0.05) . All wounds healed on day 21 .The therapeutic experiments were also performed in the Fn14-deficient mice. However, no significant differences in wound area (days 3\u201321) or epidermal regeneration (day 14) were found among the three experimental groups (p < 0.05) . The TWEAK-treated mice had higher RANTES mRNA levels than the blank mice (p < 0.05), which were comparable between the TWEAK- and normal saline-treated mice (p > 0.05) . However, there were no significant differences in the mRNA level of IP-10 between the three groups (p > 0.05) . The proteins of these molecules were further studied through Western blotting at this time point, and it was shown that the TWEAK-treated mice expressed more RANTES, MCP-1, TGF-\u03b21, EGFR and MMP-9 proteins than the two control groups (p < 0.05) . By immunohistochemistry or immunofluorescence, the TWEAK-treated mice had stronger TWEAK, EGFR, and MMP-9 staining, accompanied by more Iba-1 or CD3 positive cells in wound areas .We also found that, on day 14, the TWEAK-treated mice had higher mRNA expression levels of MCP-1, TGF-\u03b21, EGFR and MMP-9 in wound areas than that in the two controls . Moreover, the HAS-1 and laminin \u03b11 protein expression levels, as well as their mRNA expression levels, were higher in the TWEAK-treated mice on day 14 (p < 0.05) .Cutaneous collagen was evaluated in Masson\u2019s trichrome-stained sections, in which a similar collagen fraction distribution was demonstrated between the two control groups (day 14). However, the TWEAK-treated mice had stronger collagen staining than that of the controls on day 14 (p < 0.05) . Moreover, the mRNA and protein expression levels of \u03b1-SMA exhibited time-dependent (0\u201348 h) variation tendencies upon TWEAK stimulation (p < 0.05) . Dermal fibroblasts were pre-transfected with Fn14 siRNA and were then stimulated with TWEAK . 
Through immunofluorescence and flow cytometry, transfection with Fn14 siRNA, but not with control siRNA, partially abrogated TWEAK-induced upregulation of \u03b1-SMA protein (p < 0.05) .Myofibroblastic differentiation of dermal fibroblasts plays a central role in wound healing . The effp < 0.05) . By immunofluorescence and flow cytometry, transfection with Fn14 siRNA abrogated the upregulation of palladin induced by TWEAK stimulation (p < 0.05) .We also determined palladin in dermal fibroblasts, which is another important marker for myofibroblast differentiation , and we p < 0.05) .Dermal fibroblasts were further pretreated with specific inhibitors of the nuclear factor (NF)-\u03baB (JSH-23), Wnt/\u03b2-catenin (XAV939), EGFR tyrosine kinase (erlotinib), p38 mitogen-activated protein kinase , and Smad3 (SIS3) pathways. Interestingly, these inhibitors, except XAV939, significantly reduced both \u03b1-SMA and palladin expression that was enhanced by TWEAK (in vitro. Such effect of TWEAK on dermal fibroblasts involves the NF-\u03baB, EGFR, p38 MAPK and Smad3 pathways. Therefore, TWEAK exhibits therapeutic effect on burn wounds, possibly involving the regulation of dermal fibroblasts.In this study, we found that topical administration of TWEAK accelerates the healing of experimental burn wounds. TWEAK strengthens inflammatory responses, enhances growth factor production, and amplifies extracellular matrix synthesis in lesional tissue. Moreover, TWEAK/Fn14 interaction induces myofibroblastic differentiation of dermal fibroblasts Since TWEAK is mainly produced by infiltrating cells such as macrophages and monocytes , its expThe TWEAK-regulated downstream cytokines or receptors include RANTES, MCP-1, IP-10, and EGFR , which pExtracellular matrix plays a pivotal role in fibroblast activation, wound contraction, and tissue remodeling during skin wound repair . CollageDermal fibroblasts are critical in the healing of burn wounds. Actually, TWEAK regulates basic function of dermal fibroblasts, including the secretion of interleukin-19 and thymic stromal lymphopoietin . InterleTopical application of TWEAK strengthens inflammatory responses, cytokine production, and extracellular matrix synthesis, which synergistically accelerate the healing of experimental burn wounds. TWEAK can promote the myofibroblast differentiation of dermal fibroblasts. Future studies should be focused on the relevant mechanism by which TWEAK regulates other skin cells, and on the approaches how to improve therapeutic effect of TWEAK treatment.JL participated in the design of the study, and performed the experimental work. LP performed the animal experiments. YL, KW, SW, XW, and QL carried out some experiments. YX and WZ conceived and designed the study and prepared the manuscript. All the authors read and approved the final manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Restenosis remains a significant problem after angioplasty of hemodialysis vascular access. Both experimental and clinical studies have shown a protective effect of antioxidants against post-angioplasty restenosis. A prospective, randomized, feasibility study was conducted to investigate the effect of ascorbic acid to prevent restenosis. 
Ninety-three hemodialysis patients were randomized into three groups after angioplasty: placebo (n\u2009=\u200931), 300\u2009mg ascorbic acid (n\u2009=\u200931), and 600\u2009mg ascorbic acid (n\u2009=\u200931), treated intravenously 3 times per week for 3 months. Eighty-nine completed the clinical follow-up, and 81 had angiographic follow-up. In the angiographic follow-up, the mean (stand deviation) late loss of luminal diameter for the placebo, 300\u2009mg, and 600\u2009mg groups were 3.15 (1.68) mm, 2.52 (1.70) mm (P\u2009=\u20090.39 vs. placebo group), and 1.59 (1.67) mm , with corresponding angiographic binary restenosis of 79%, 67% (P\u2009=\u20090.38 vs. placebo group), and 54% (P\u2009=\u20090.08 vs. placebo group). The post-interventional primary patency rates at 3 months were 47%, 55% (P\u2009=\u20090.59 vs. placebo group), and 70% (P\u2009=\u20090.18 vs. placebo group) for placebo, 300\u2009mg, and 600\u2009mg groups. Our results demonstrated that intravenous 600\u2009mg ascorbic acid was a feasible therapy and might attenuate restenosis after angioplasty; however, its effect on post-interventional primary patency was modest. Percutaneous transluminal angioplasty (PTA) is widely used as a primary therapy for stenosis of dialysis access2. However, restenosis usually develops early after PTA, and the long-term durability of PTA is limited. Moreover, less than half of native accesses remained patent at 1 year, and the outcome is poorer for prosthetic accesses4. Restenosis usually requires repeated interventions, causing a large financial burden on the health care system. Various mechanical and pharmacological approaches have been developed to prevent restenosis8; however, the beneficial effects remain very limited and none of these approaches was recommended by guidelines2.A well-functioning hemodialysis vascular access influences the morbidity and mortality of patients with end-stage renal disease. The most common cause of dialysis access dysfunction is stenosis of the outflow veins9. Recent studies have demonstrated that oxidative stress and inflammatory cytokines are implicated in the stenosis of dialysis access11. In animal studies, antioxidants have been shown to prevent restenosis after angioplasty14. In humans, studies of coronary interventions also suggested a promising role of antioxidants in preventing restenosis17. Ascorbic acid is a potent antioxidant, and the plasma level of AA is generally lower in patients undergoing maintenance hemodialysis18. We and other investigators have previously shown that supplementation of AA in hemodialysis patients improved oxidative stress and inflammation, and corrected anemia21. Therefore, we conducted this feasibility study to test the hypothesis that AA could reduce the severity of restenosis after PTA and to evaluate the effect of two different doses (300\u2009mg and 600\u2009mg) of AA on restenosis after PTA.Uraemia is associated with increased oxidative stress and depletion of protective antioxidants, which are further complicated by dialysisL/min in arteriovenous fistulas and <600\u2009mL/min in arteriovenous grafts, as assessed using the ultrasound dilution method ; and (4) increased venous pressure during dialysis, defined as dynamic venous pressure >150\u2009mmHg in arteriovenous fistulas and >160\u2009mmHg in arteriovenous grafts under a dialysis blood flow of 250\u2009mL/min for three consecutive times. 
The exclusion criteria include patients who were (1) hospitalization for infection, heart failure, or acute coronary syndrome in the recent 3 months, (2) inability to comply with follow-up visits, and (3) use of AA or other antioxidant supplements before study enrollment. The study protocol was based on the Declaration of Helsinki , approved by the institutional review board of National Taiwan University Hospital, Hsinchu Branch, and registered at ClinicalTrials.gov . Informed consent was obtained from all participants.This was a prospective, randomized, controlled, feasibility study designed to assess the feasibility of using two different doses of AA compared with the placebo in preventing restenosis of dialysis access after PTA. Patients undergoing maintenance hemodialysis were eligible for this study if they have had successful PTA at outflow veins of failing (but not failed) arteriovenous fistulas or grafts that have been created for at least 6 months. PTA was indicated based on the following criteria: (1) clinical signs, i.e., decreased thrill, development of collateral veins, limb swelling, and prolonged bleeding from puncture sites, suggesting vascular access dysfunction; (2)\u2009>\u200925% reduction of flow rate from baseline; (3) total access blood flow rate of <500\u2009mAfter the completion of the first hemodialysis session following successful index PTA, patients were randomly assigned into one of the three regimens: normal saline (placebo), 300\u2009mg AA, and 600\u2009mg AA. The doses of AA were chosen according to previous studies on anti-oxidant effect of AA in hemodialysis patients. After each dialysis session, 20\u2009mL of 0.9% saline or sodium ascorbate at a dose of 300\u2009mg or 600\u2009mg was administered intravenously for 5\u2009min, three times per week for 12 weeks. The treatment order was block-randomized using computer-generated numbers by the study nurse in our dialysis center. The patients, nephrologists, and interventionists were blinded to the study regiments.2. Stenosis was treated using a standard balloon angioplasty technique as previously described. High-pressure or cutting balloon was used only for lesions resistant to conventional balloon6. Drug-eluting balloon, stent or stent grafts were not used in this study. After the PTA procedure, antiplatelet therapy with aspirin or clopidogrel was administered for 3 days for all patients. Maintenance antiplatelet agents or other medications were added or continued according to the operators\u2019 discretion or patients\u2019 original indications.Diagnostic angiography was performed on a mid-week non-dialysis day. After diagnostic angiography, PTA was performed based on the National Kidney Foundation-Disease Outcomes Quality Initiative (NKF-DOQI) guidelines, i.e., only for patients with clinical indicators of dysfunction and a minimum of 50% diameter stenosisA computer-based system was used for quantitative angiographic analysis. Measurement was performed by a physician who was blinded to the study information. The reference diameter (RD) was defined as an adjacent segment of normal vein located upstream to the target lesion. The means of the luminal diameter were used in patients who were undergoing PTA for more than one lesion. The absolute value of the minimal vessel diameter (MLD) was measured, and the degree of stenosis (DS) was reported as the maximum diameter reduction compared with the reference vessel diameter. 
The following parameters were derived: acute gain\u2009=\u2009MLD (DS) after PTA minus MLD (DS) before PTA and late loss\u2009=\u2009MLD (DS) after PTA minus MLD (DS) at follow-up.After successful PTA, all patients underwent prospective clinical follow-up evaluations for 12 weeks. Clinical follow-up surveillance included physical examination and dynamic venous pressure monitoring at each hemodialysis session and transonic examination of access blood flow rate immediately after the intervention and monthly until the end of 3-month period. The patients were referred for fistulography and PTA as appropriate if abnormal clinical or hemodynamic parameters were detected. All participants without clinical restenosis at the previously dilated area were scheduled for a follow-up angiography at the end of the study. The angiogram obtained before PTA was used as the follow-up angiogram if event-driven PTA was performed before the end of the study.22.The primary endpoint was the severity of restenosis, defined as the late loss of MLD or percentage stenosis at angiograms obtained before re-intervention or on follow-up angiograms obtained at the end of the study if no re-intervention was needed. Restenosis was also defined as a binary variable, i.e., >50% diameter stenosis at the target lesion. The post-interventional primary patency of the target lesion was defined as the interval between intervention and the next access thrombosis or repeated intervention at the previously treated area within 3 months. The post-interventional primary patency of the access circuit was defined as the interval between intervention and the next thrombosis or repeated intervention at anywhere from the arteriovenous junction to the central vein of the access circuit. These definitions were in accordance with the guidelines of the Society of Interventional Radiologyt-test and Mann-Whitney test for normally and non-normally distributed data. Categorical data were compared using the chi-square test with Yates\u2019 correction and Fisher\u2019s exact test as appropriate. Post-interventional patency was analyzed using the Kaplan\u2013Meier method, and between-group comparisons were using the long-rank test. A two-tailed P value of <0.05 was considered statistically significant. Statistical analysis was conducted using SPSS, version 20.0 .A previous study using AA to prevent restenosis of coronary arteries had shown a mean 30% reduction in late loss and 25% reduction in restenosis severity. We calculated that 228 patients were needed to detect a 30% reduction of late loss with a power and two-tailed significance level of 0.80 and 0.05, respectively. Because the number of patients needed to detect a difference in restenosis severity was more than the patients in our dialysis unit, we designed a feasibility study to explore the effect of AA and the adequate dose for hemodialysis vascular accesses. We enrolled 30 patients after PTA in each of the placebo, 300\u2009mg AA, and 600\u2009mg AA groups. A post-hoc power calculation will be performed based on the data from this study. Continuous variables were expressed as means for normally distributed data or medians for non-normally distributed data, and proportions for categorical data. Differences between the AA and the placebo groups were compared by using the Ninety-three patients were randomized into three groups from April to October, 2011, with 31 patients were in each group patients underwent re-interventions. 
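To make the derived angiographic indices concrete, a small sketch follows, implementing the definitions given above (degree of stenosis relative to the reference diameter, acute gain, late loss, and the >50% binary restenosis threshold); the numeric values are made up for illustration and are not study data.

```python
def degree_of_stenosis(mld_mm, rd_mm):
    """Percent diameter stenosis relative to the reference diameter (RD)."""
    return 100.0 * (rd_mm - mld_mm) / rd_mm

def angiographic_indices(mld_pre, mld_post, mld_followup, rd_mm):
    acute_gain = mld_post - mld_pre            # luminal gain achieved by PTA
    late_loss = mld_post - mld_followup        # luminal loss during follow-up
    ds_followup = degree_of_stenosis(mld_followup, rd_mm)
    return {
        "acute_gain_mm": acute_gain,
        "late_loss_mm": late_loss,
        "ds_followup_pct": ds_followup,
        "binary_restenosis": ds_followup > 50.0,   # >50% diameter stenosis
    }

# Illustrative (hypothetical) measurements in millimetres:
print(angiographic_indices(mld_pre=2.0, mld_post=5.5, mld_followup=2.8, rd_mm=6.0))
```

The >50% cutoff in the sketch is the same binary restenosis definition used for the study endpoints above.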
Table\u00a0Angiographic follow-up included angiographies before the target-lesion PTA in 38 patients with symptomatic restenosis and angiographies at the end of the study in another 43 asymptomatic patients. Eight patients refused angiographic follow-up at the end of the study. Table\u00a0To date, no pharmacological strategy has been proved to prevent restenosis after PTA of dialysis vascular access. Data from our study provided the first evidence showing that AA therapy may attenuate the severity of restenosis. Compared with placebo, administration of 600\u2009mg AA after each dialysis session for 3 months decreased late luminal loss by 50%. Furthermore, the restenosis rate of the target lesion also decreased from 41% to 24%. According to the post-hoc statistical power calculation, our number of patients were not powered to detect a difference in primary patency rate of the access circuit (power\u2009=\u20090.59) or the binary restenosis of target lesions (power\u2009=\u20090.49). Nonetheless, a significant decrease in the severity or restenosis was obtained in the 600\u2009mg AA group, with a statistical power of 0.92. Our study provided a proof of feasibility that intravenous AA could be used after PTA and was potentially effective in preventing restenosis for dialysis vascular access. Furthermore, the preliminary data also provided a basis for power calculation in future pivotal trials. According to the difference in our data, a sample size of 240 patients in each of placebo and 600\u2009mg AA group will have a power of 0.80 to detect a benefit on patency of dialysis access circuit.et al., 1200 IU of vitamin E per day showed a trend toward reduction of restenosis for patients undergoing coronary angioplasty23. In the study by Tardif et al., patients administered with probucol had a 68% reduction of late luminal loss15. In another study by Yokoi et al., probucol reduced the late loss of luminal diameter and percentage stenosis by 38% and 61%, respectively17. In another study of coronary angioplasty, administration of AA 500\u2009mg per day resulted in a 32% and 43% reduction in the late luminal loss and binary restenosis rate, respectively16. The attenuating effect in our study was consistent with that in previous reports, supporting a favorable effect of antioxidants in inhibiting intimal hyperplasia, both for arterial and venous diseases.In this study, both clinical and angiographic effects were assessed to delineate the benefit of AA. According to the quantitative angiographic analysis, the 600\u2009mg group had a 49% reduction of late loss by luminal diameter (1.59\u2009mm vs. 3.15\u2009mm) and a 51% reduction of late loss by percentage stenosis (23% vs. 47%) compared with the placebo group. The attenuating effect was less prominent in the 300\u2009mg group, and either the improvement of late loss by diameter (20%) or percentage stenosis (23%) did not achieve statistical significance. Previous studies found a similar efficacy of antioxidants against post-angioplasty restenosis after coronary interventions. In the study by DeMaio 25. Furthermore, 68% of our patients had prosthetic access, and 86% had restenotic lesions. The unfavorable characteristics of the accesses may account for the low patency rate in our study. Finally, our study used a formal angiography follow-up was used at the end of the study for asymptomatic patients. 
Angiographic restenosis was 54% and 79% in the 600\u2009mg AA and placebo groups, respectively, which was higher than and similar to that in other studies with angiographic follow-up, respectively5.Despite the attenuating effect on late luminal loss, the reduction in angiographic restenosis did not translate into reduction in re-interventions. Both symptomatic restenosis of the whole access circuit (38% vs. 57%) or target lesions (24% vs. 41%) were lower in the 600\u2009mg AA group than in the placebo group. Nonetheless, the reduction in restenosis rate did not reach statistical significance. The patency rate in our study was lower than that in previous reports with mostly retrospectively ascertained data. The patency rate has been well known to be lower when it was prospectively assessed than retrospectively ascertained16. The small sample size of our study may limit our ability to find a clinically significant benefit. Furthermore, as demonstrated in our study, the restenosis rate of dialysis access was very fast and extensive5. In our study, up to 67% of the patients with experienced symptomatic restenosis within 3 months, and the restenosis rate may be up to 79% when a routine angiographic follow-up was performed. Such a rapid speed of restenosis and a high proportion of restenosis may offset the potential benefit of AA. Furthermore, various causes may be responsible for restenosis of dialysis access, in addition to inflammation and oxidative stress9. Antioxidant therapy could reduce oxidative stress and inflammation but did not seem to be helpful in other pathogenic mechanisms, such as hemodynamic stress, hemostasis, and needle injury.A previous study had shown a reduction of target lesion restenosis rate from 39% to 22% after the administration of AA for 4 months18. Many literatures documented 100\u2013200\u2009mg/day oral AA or 300\u2013500\u2009mg thrice weekly intravenous AA are sufficient and safe for hemodialysis patients26. For using AA to decrease oxidative stress, the optimal dose, route, and duration of administration are controversial. In studies showing that AA decreased oxidative stress, 250\u2009mg/day orally for 12 weeks, 1\u2009g/day orally for 1 year, intravenous doses of 300\u2009mg/day intravenously for 8 weeks, and 1\u2009g/day intravenously for 2 months had been administered29. Some studies showed no change in oxidative stress when a single intravenous dose or a daily dose of 250\u2009mg for 4\u201312 weeks was used32. Because of these controversial results in previous studies, we tested two doses of AA: 300\u2009mg and 600\u2009mg AA. AA was administered intravenously immediately after each dialysis session to enhance compliance. Although no statistically significant difference in binary stenosis was observed, a dose-dependent decrease in the extent of restenosis, assessed according to late luminal loss, was found between 600\u2009mg and 300\u2009mg compared with the placebo group. Further studies may be warranted to determine if a higher dose or a pre-treatment strategy could achieve a prominent effect.Hemodialysis patients have been well known to have a lower plasma AA level than the normal population33. In hemodialysis patients, uraemia and dialysis are associated with increased oxidative stress that causes activation of inflammation, release of free radicals, and depletion of protective antioxidant. 
Recent studies have shown that pro-inflammatory chemokines and oxidative stress markers are implicated in venous intimal hyperplasia of dialysis vascular access34. Experimental studies also demonstrated that AA reduced oxidant stress levels, improved endothelial function, and decrease expression of vascular adhesion molecules, growth factors, and chemokines that may play an important role in the neointimal formation37. In addition, AA may also be able to change the ratio of prostacyclin to thromboxane and to reduce platelet aggregation and vasoconstriction38. These beneficial effects at the cellular levels have been translated in animal models. One study in pig model reported that a combination of vitamins C and E decreased intimal thickening after angioplasty. In another study in rats, vitamin C intake decreased the degree of atherosclerosis formation. In humans, a combination therapy of vitamins C and E decreased the intimal index in patients who underwent cardiac transplantation. Another study in non-uremic patients showed a possible effect of AA in attenuating restenosis after angioplasty of coronary arteries. Our study provided the first evidence showing that AA also attenuated intimal hyperplasia at outflow veins of dialysis access.There are several mechanisms by which AA treatment could attenuate restenosis after PTA. Damaged endothelium, activated platelets, and neutrophils at the angioplasty site will generate reactive intermediates. These oxidative metabolites can induce endothelial dysfunction and activate macrophage, which in turn, can release several growth factors that promote tissue proliferation40. Although all patients tolerated the regimens well, we did not measure the plasma level of oxalate before and after study. Although the variation of vitamin C from dietary intake or multivitamin supplementation was small, it was not controlled in this study and might dull the effect of AA. Early re-intervention may be secondary to elastic recoil, not necessarily intimal hyperplasia. Nonetheless, the interference of recoil should be minimized by subtraction of post-PTA stenosis in the calculation of late loss. Most of these lesions were restenotic lesions and the effect of AA on primary stenosis still needs to be determined. We did not measure the plasma AA level because plasma AA levels and pharmacokinetic data of intravenous AA administration have been reported in previous literatures42. The drop-out rate in the 600\u2009mg AA group was higher than that in other groups because fewer patients in this group experienced recurrent vascular access dysfunction. Finally, the study cohort is composed of two third of patients on arteriovenous grafts. A high proportion of graft accesses may limit the applicability of this study because native fistulas are more commonly used in hemodialysis patients.This study has limitations. Dosages of AA as high as 500 to 1000\u2009mg/day for 3 or >3 weeks may significantly induce increased plasma oxalate levelsIn conclusion, our results showed that intravenous administration of AA at a dose of 600\u2009mg three times per week attenuated the severity of restenosis after PTA. Currently, devices, such as stent graft and drug-eluting balloon, had been used or under investigation to prolong the durability of PTA. Nonetheless, the absolute increase in the patency attributable to the use of expensive devices was still limited. Although a clinically significant improvement in patency could not be achieved, therapy with AA was safe and inexpensive. 
A multi-disciplinary strategy to prolong patency by combining AA therapy with use of mechanical devices deserved further investigation."} +{"text": "Lots of previous reports have suggested a potential association of atopic dermatitis (AD) with stroke and myocardial infarction (MI). However, the result is still controversial, Consequently, we conducted this meta-analysis to estimate the relationship of AD with Stroke and MI.PubMed, Embase, and Web of Science databases were searched from inception to June 2018. Stroke and MI were considered as a composite endpoint. We calculated pooled hazard ratios (HRs) with 95% confidence intervals (CIs). Subgroup and sensitivity analysis were performed to assess the potential sources of heterogeneity of the pooled estimation.P\u200a=\u200a.000) and MI , compared with participants without AD. The risk of stroke and MI was significant both in male subjects , but not in female subjects . The results were more pronounced for ischemic stroke in the stratified with stroke type. Stratifying by AD type, the risk of stroke was significant in severe AD and moderate AD for MI.A total of 12 articles with 15 studies involving 3,701,199 participants were included in this meta-analysis. Of these, 14 studies on stroke and 12 on MI. Pooled analysis showed participants with AD experienced a significant increased risk of stroke (combined HR, 1.15; 95% CI, 1.08\u20131.22; AD is independently associated with an increased risk of stroke and MI, especially in male subjects and ischemic stroke and the risk is associated with the severity of AD. It has become the leading non-fatal burden to health attributable to skin diseases, and may increase the risk of the immune-mediated inflammatory diseases such as asthma and allergic rhinitis. This creates a major public health issue.Atopic dermatitis (AD) or atopic eczema is a chronic relapsing inflammatory skin disease which is characterized by drastic pruritus and eczema affecting both children and adults. Stroke and ischemic heart disease reminds a leading cause of death and a major cause of adult disability worldwide. Traditional factors such as hypertension, diabetes, hypercholesterolemia, cigarette smoking, and low level of physical activity increase the risk of stroke and myocardial infarction (MI).,7 Nowadays, it is generally accepted that atherosclerosis is a chronic systemic inflammation disease, which is the important reason of several serious adverse events, including coronary artery disease, MI, stroke, and peripheral artery disease. And research shows that many skin disorders are closely associated with stroke and MI.\u201312 It leads to a hypothesis that patients with AD may have an increase risk of stroke and MI, similar to inflammatory skin disorders.As everyone knows, cardiovascular disease (CVD) has become a major public health issue and is responsible for approximately 30% of all deaths worldwide.\u201324 However, the results are inconsistent. 
Some of these researches suggested that AD may increase the risk of stroke and/or MI incidence.,19,21,24 While several studies found that AD was not independently associated with increased risk of stroke and MI.,16,18 Furthermore, the different results of various studies showed that stroke type, sex, and severity of AD may affect the risk of stroke and/or MI independently.,19,21,22 Given these inconsistent results, to obtain a more comprehensive estimate of the putative influence of AD on stroke and MI, we conducted a meta-analysis to assess the association of AD with stroke and MI risk.During the past decade, research has suggested that AD is an allergic disease in which systemic inflammation involves more than just the skin, lots of epidemiologic observational studies have investigated the associations between AD and future stroke and/or MI.22.1As all analyses in our article were based on previously published studies, no ethical approval or patient consent was required.2.2 A systematic search of PubMed, EMBASE, and Web of Science databases was performed up to June 2018. The following key words were used in our search: \u201catopic dermatitis,\u201d \u201catopic eczema,\u201d \u201cdermatitis\u201d or \u201ceczema\u201d and \u201cstroke,\u201d \u201ccerebrovascular diseases\u201d or \u201ccerebrovascular disorders,\u201d and \u201cmyocardial infarction,\u201d \u201ccoronary heart disease,\u201d \u201cischemic heart disease.\u201d Studies without language and race restrictions were included to avoid publication bias. Only human studies were included. Additionally, more articles were detected through a manual search of the references from retrieved publications and recent reviews. When there were several included studies from the same or similar participants data source, only the most recent study was chosen.The search was conducted according to the recommendations of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement.2.3The following criteria were used in order to include the eligible studies: the study of adult patients had a cohort studies, case-control or cross-sectional design; AD was the exposure, stroke and/or MI was the outcome measure. Participants were free of stroke or MI at study entry; studies must report quantitative estimates of multivariate-adjusted odd ratio (OR) or relative risk (RR) or hazard ratio (HR) with 95% confidence interval (CI) for stroke and/or MI incidence or provided data for their calculation.2.4 The quality of the cohort study and case-control study was evaluated by the following 3 major components: the selection of the participants, the comparability between the groups and the ascertainment of the exposure. The modified NOS conducted by Herzog et al was used for cross-sectional study. Scores of 1 to 3, 4 to 6, and 7 to 9 were considered to be poor-, fair-, and good-quality studies, respectively.Two authors independently extracted all data using a standardized data collection form. The discrepancies in data extraction were dealt with by consensus. We extracted the following data from each study: the first author's last name, year of publication, design of study, location of the participants, data source, number of participants, follow-up, exposure assessment, outcome assessment, and adjusted covariates. The Newcastle-Ottawa Scale (NOS) was used to evaluate the quality of the studies.2.5The odd ratios (OR) and relative risk (RR) were considered equivalent to HRs. 
Stata 12 was used to estimate the pooled HR and 95% CI of the association between AD and stroke and MI on the basis of a random-effects model meta-analysis. When a study reported risk estimates adjusted for different covariates, we used the most fully adjusted model in the analysis of the pooled HR. We converted each HR and 95% CI into Napierian logarithms, and then calculated the standard error (SE) from these logarithmic values and the corresponding 95% CI. Two methods were used to measure heterogeneity of HRs across studies in this meta-analysis. The chi-square test based on the Cochran Q statistic was used to examine the null hypothesis that the retrieved publications were evaluating the same effect, at the P\u200a<\u200a.10 level of significance, and the I2 statistic was used to quantify heterogeneity, grading the inconsistency in the studies\u2019 results. I2 values of 25%, 50%, and 75% were regarded as low, moderate, and high heterogeneity, respectively. Potential publication bias was assessed by the symmetry of the funnel plot, as well as the Egger test and Begg test.,30 If there was evidence of heterogeneity, subgroup analyses and sensitivity analyses were employed to explain what contributed to it. Subgroup analyses based on adjusted HRs were conducted according to study type, sex, geographic area (Asia vs non-Asia), AD type, and stroke type (ischemic stroke vs hemorrhagic stroke). A sensitivity analysis was performed by removing each individual study from the meta-analysis in turn to assess possible sources of heterogeneity. In all statistical analyses, P values were 2-sided and P\u200a<\u200a.05 was regarded as statistically significant. 3 3.1 The results of the study selection process are shown in Fig. . A total of 629 articles were retrieved from the initial PubMed, Embase, and Web of Science electronic database search, of which 608 articles were excluded after the first screening based on titles and abstracts, leaving 21 articles for full-text review. Of these, 9 articles were excluded: data were not available in 2 publications, the articles were reviews in 4 publications, the patients did not have AD in 2 publications, and the outcome of interest was not stroke or MI in 1 publication. Finally, 12 articles were included in our meta-analysis.\u201324 In some studies, no specific follow-up time was indicated. Among these articles, 1 had 2 studies, and 1 had 3 studies. One study was conducted in the UK, 4 in an Asian country (Taiwan),\u201321,24 3 in Denmark,18,23 4 in the United States,17 2 in Germany, and 1 in Canada. Among these, 5 were cross-sectional studies,14,17 and 10 were cohort studies.\u201316,18\u201324 Three focused on stroke,21,24 1 focused on MI, and 11 on both.,20,22,23 All studies provided adjusted risk estimates, overall quality scores ranged from 7 to 9, and all studies were graded as good quality according to the Newcastle\u2013Ottawa Quality Assessment Scale. The characteristics of the 12 retrieved articles with 15 studies are presented in Table . 3.2 The multivariable adjusted HRs of stroke in relation to AD from individual studies and the combined HR are presented in Fig. ; the pooled risk of stroke was significantly increased in participants with AD (P\u200a=\u200a.000). There was evidence of moderate heterogeneity in the magnitude of the association across studies . There was no evidence of publication bias by inspection of the funnel plot . 3.3 Pooled analysis showed that AD was also associated with an increased risk of MI (P\u200a=\u200a.014). There was evidence of modest heterogeneity in the magnitude of the association across studies .
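As a concrete illustration of the effect-size handling described in Section 2.5, the sketch below converts study-level HRs and 95% CIs to the log scale, derives the SE from the CI width, and pools the estimates with a DerSimonian-Laird random-effects model, reporting Cochran's Q and I2. The published analysis was performed in Stata 12; this Python re-implementation is illustrative only, and the example HRs are hypothetical rather than taken from the included studies.

```python
import math

def log_hr_and_se(hr, ci_low, ci_high):
    """Convert an HR and its 95% CI to the log scale and derive the SE from the CI width."""
    log_hr = math.log(hr)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    return log_hr, se

def pool_random_effects(estimates):
    """DerSimonian-Laird random-effects pooling of (HR, ci_low, ci_high) tuples."""
    y, se = zip(*[log_hr_and_se(*e) for e in estimates])
    w = [1 / s ** 2 for s in se]                              # inverse-variance weights
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)      # fixed-effect mean
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))    # Cochran's Q
    df = len(y) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                             # between-study variance
    w_re = [1 / (s ** 2 + tau2) for s in se]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0       # I^2 as a percentage
    return {
        "pooled_HR": math.exp(y_re),
        "CI95": (math.exp(y_re - 1.96 * se_re), math.exp(y_re + 1.96 * se_re)),
        "Q": q,
        "I2_percent": i2,
    }

# Hypothetical study-level HRs, not data extracted from the included studies
print(pool_random_effects([(1.10, 1.01, 1.20), (1.25, 1.05, 1.49), (1.08, 0.95, 1.23)]))
```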
The multivariable adjusted HRs of MI in relation to AD from individual studies and the combined HR are presented in Fig. B. Review of the funnel plots could not eliminate the potential for publication bias for MI . 3.4 The subgroup analysis is shown in Table . 3.5 Sensitivity analysis was used to evaluate potential sources of heterogeneity in the association across the studies. To determine which of the retrieved studies might influence the results, we assessed the risk estimates for the remainder of the studies by deleting 1 study at a time. The results of the meta-analysis remained largely unchanged, indicating that the results of the present meta-analysis were stable . 4 In this meta-analysis of 15 studies involving 3,701,199 participants, we confirmed that AD was associated with an increased risk of stroke and MI. The risk was significant in male subjects, but not in female subjects. In addition, the results indicated that this risk was more pronounced for ischemic stroke and was associated with the severity of AD. The pathogenesis of AD is attributed mostly to immune system abnormalities and hyperactivity, and mutation of several genes has been implicated in the immune response, with key roles played by T-helper 2 (Th2) cell dysregulation and immunoglobulin E (IgE) production. In recent decades, overwhelming evidence has indicated that AD is associated with well-known risk factors for cardiovascular disease, including hypertension, old age, diabetes mellitus, and hyperlipidemia, many of which are also associated with the risk of stroke and MI.\u201336 There is also some evidence that AD may have a direct role in the development of stroke and MI.,17,19,22 In addition, some believe that systemic inflammation associated with AD may increase the risk of cardiovascular and cerebrovascular diseases, similar to the increased risk seen in psoriasis, another chronic inflammatory skin condition.,38 However, the potential mechanisms by which AD is independently associated with an increased risk of stroke and MI remain ambiguous. We speculate that patients suffering from AD may have an increased cardiovascular risk similar to that of psoriasis, which has been confirmed as an independent cardiovascular risk factor. However, the inflammatory pathways of AD differ from those of psoriasis, in that AD is mostly mediated by Th2 cytokines, and psoriasis by Th1 and Th17 cytokines. There are several possible explanations for the high risk of stroke and MI. First, AD has been associated with increased blood platelet activation, suggesting that activated platelets play a role in the pathomechanism of AD. Chronic inflammatory changes of the vascular wall induced by platelets result in the development of atherothrombosis. Second, reduced fibrinolysis has been detected in patients with AD, which is known to be associated with a prothrombotic tendency. Third, oxidative stress is likely to be an important factor in the pathogenesis of AD.
Fourth, previous studies have shown that AD is associated with several cardiovascular risk factors, including obesity, decreased physical activity, increased smoking and alcohol consumption, hypertension, hyperlipidemia, and diabetes.,45 These mechanisms suggest that AD may be associated with a systemic effect which can lead to stroke and MI. AD is the most common chronic relapsing inflammatory skin disease and is pathophysiologically characterized by abnormalities of epidermal barrier function and T-cell-driven cutaneous inflammation. Over the past few decades, although the role of AD in cardiovascular or cerebrovascular diseases has been examined in previous clinical studies, it is not clear whether there is a causal link between AD and the risk of stroke and MI. Some studies suggested that AD is associated with an increased risk of stroke and/or MI incidence,19,21,24 while 3 studies failed to find such an association.16,18 A systematic review of AD and the risk of cardiovascular disease and type 2 diabetes in adults showed no association between AD and hypertension, presumed type 2 diabetes, myocardial infarction, or stroke in either the quantitative crude data analyses or the fully adjusted data analyses. However, most studies in that systematic review were cross-sectional, and information about sex, AD severity, and study type was lacking. In the present meta-analysis, we found that the risk of stroke and MI was higher in patients with AD, particularly in male subjects. Similar analyses restricted to cohort studies gave essentially identical results. This result is consistent with previous population-based cohort studies.,19\u201321 As is well known, men are more frequently exposed to negative influences such as smoking, drinking, and other unhealthy behaviors, which may explain why male AD patients appear to have a higher risk of stroke and/or MI than female patients. Furthermore, the results indicated that this risk was more pronounced for ischemic stroke and was associated with the severity of AD, which is consistent with the conclusions of some large-sample studies.\u201321,24 Stroke is divided into hemorrhagic stroke and ischemic stroke; although the 2 major branches share many vascular risk factors, their pathophysiological bases differ, reflecting different pathogenesis. There is growing evidence to suggest that systemic inflammation can promote the progression of atherosclerosis and thrombosis to ischemic stroke,48 and therefore changes in atherosclerosis and activation of the coagulation system related to chronic inflammation may be one of the reasons for the high risk of ischemic stroke in AD patients. Moderate heterogeneity across studies was observed, which did not change much in the subgroup analyses and sensitivity analysis. Heterogeneity might have come from several sources, such as variations in the characteristics of study populations, study designs, follow-up length, and adjustment for confounding factors. In the present meta-analysis, the sample size of each study, incomplete matching, country of origin, study type, and sex differences were likely the main sources of heterogeneity. Several potential limitations of the current meta-analysis should be acknowledged. First, there were different clinical designs among the included studies. Most were cohort studies, and 3 were cross-sectional studies.
In these included studies, history of AD was mostly self-reported and was not confirmed by clinical evaluation or any diagnostic testing, which may lead to recall bias. Second, confounding factors for the risk of stroke or MI were not adjusted for consistently across the included studies. Medication use and a history of asthma or other allergic diseases may affect the association between AD and stroke and/or MI risk. Other confounders such as socioeconomic status, race, and lifestyle may also influence the risk of stroke and/or MI. Third, the number of studies included in each subgroup was small. Furthermore, the assessment standards for AD and stroke or MI differed among the included studies. Finally, language bias may be another possible limitation; although we attempted to minimize this bias by searching 3 major electronic databases with no language restriction, some articles published in Chinese or other non-English languages may not appear in international journal databases, resulting in an incomplete search. In conclusion, the results from this meta-analysis provide new evidence that AD is independently associated with an increased risk of stroke and MI after adjustment for established cardiovascular risk factors. Future studies on the effect of AD treatment and modifiable risk factor reduction on stroke and MI risk in AD patients are warranted. Min Yuan and Xu-Fang Xie conceived and designed the study. Min Yuan and Wen-Feng Cao searched the databases and screened the records according to the eligibility and exclusion criteria. Xu-Fang Xie helped develop the search strategies. Xiao-Mu Wu and Huang-Yan Zhou extracted the quantitative data. Min Yuan and Wen-Feng Cao analyzed the data. Min Yuan wrote the draft of the paper. All authors contributed to writing, reviewing, or revising the paper. Min Yuan and Xu-Fang Xie were the guarantors."} +{"text": "Individuals with substance use disorders exhibit maladaptive decision-making on the Iowa Gambling Task (IGT), which involves selecting from card decks differing in the magnitudes of rewards and in the frequency and magnitude of losses. We investigated whether baseline IGT performance could predict responses to contingency management (CM) by treatment-seeking individuals with methamphetamine use disorder (MA Use Disorder) in Cape Town, South Africa. Twenty-nine individuals with MA Use Disorder underwent an 8-week, escalating-reinforcement, voucher-based CM treatment in a study on the suitability of CM therapy for the South African context. Along with 20 healthy control participants, they performed a computerized version of the IGT before starting CM treatment. Seventeen participants maintained abstinence from methamphetamine throughout the trial (full responders), and 12 had an incomplete response (partial responders). Performance on the IGT was scored for magnitude effect (selection of large immediate rewards with high long-term loss) and for frequency effect (preference for frequent rewards and avoidance of frequent losses). Group differences were investigated using linear mixed-effect modeling. Partial responders made more selections from decks providing large, immediate rewards and long-term losses than healthy controls [p = 0.038, g = -0.77 (-1.09: -0.44)]. Full responders showed a greater, nonsignificant preference for frequent rewards and aversion to frequent losses than partial responders. A predilection for choices based on the size and immediacy of reward may reflect a cognitive strategy that works against CM.
Pretesting with a decision-making task, such as the IGT, may help in matching cognitive therapies to clients with MA Use Disorder. Substance misuse is linked to maladaptive risk taking that typically results in long-term loss or foregone gain in the context of uncertainty , 2. Suchimmediacy and magnitude of rewards (and losses) on decision-making, but it has also been used to investigate the impact of the frequency with which rewards and losses are presented . A behavioral treatment that rewards abstinence with rewards, often monetary, CM has greater short-term therapeutic efficacy than other treatments for MA Use Disorder, such as CBT . MaladapThis project is a key part of a pilot study to evaluate mechanisms of CM therapy for MA Use Disorder patients in Cape Town, South Africa. The objective of this trial is to examine links between maladaptive decision-making using the IGT with CM treatment outcomes and to compare IGT responses for the MA Use Disorder patients with a comparable sample of healthy controls. We hypothesized that participants with MA Use Disorder who failed to respond completely to 8 weeks of CM would demonstrate significant maladaptive decision-making at baseline relative to participants who responded completely (full response) or to healthy controls as measured by a \u201cmagnitude effect\u201d . We also predicted that compared to participants with complete CM response and healthy controls, participants who showed partial response to CM would demonstrate greater preference for frequent rewards and avoidance of frequent losses, that is, a \u201cfrequency effect.\u201dn = 20), and a combination of local newspaper advertisements and snowball sampling was used to recruit additional individuals with MA Use Disorder (n = 9) and all healthy control candidates (n = 20). Interested candidates provided informed, written consent and were screened for eligibility.This study was part of a pilot project investigating the suitability of CM in treating MA Use Disorder in South Africa. It used a between-groups, cross-sectional design comparing outcomes to CM among 29 individuals diagnosed with MA Use Disorder (DSM-5) to 20 healthy control participants, see Okafor et al. . All parScreening Tools and Inclusion/Exclusion Criteria, and a further 88 were excluded from the study due to nonattendance, which was the most common reason for exclusion. From the remaining 33 MA Use Disorder patients who were initially enrolled in CM treatment, four participants were additionally excluded from the CM trial for the following reasons: cocaine use not previously disclosed (n = 1), meningitis not previously disclosed (n = 1), brain structural abnormality (n = 1), and a MA-positive (methamphetamine-positive) urine test at the time of task assessment (n = 1). A total of 29 adult MA Use Disorder patients, 18\u201345 years of age, were enrolled in the study.Recruits with suspected MA Use Disorder underwent a 2-week baseline screening period to determine whether they met DSM-5 criteria for MA Use Disorder [Structured Clinical Interview for DSM-5 (SCID) verified by a trained professional], to demonstrate ability to attend thrice-weekly scheduled appointments to provide scheduled urine tests and to confirm recent methamphetamine use, where participants were not made aware of the latter eligibility criterion. Of 269 recruits who were initially screened, 148 individuals were not eligible based on either one of the exclusion criteria outlined under n = 12) and full responders (n = 17). 
Full responders were defined as those participants who exclusively presented with MA-negative (methamphetamine-negative) urine samples during CM treatment, demonstrating that they maintained abstinence. Partial responders were defined as those participants who presented with at least one MA-positive or missed urine sample over the entire duration of CM treatment. In addition to verifying methamphetamine use before initiating treatment, urine tests were used to verify treatment response, as well as to verify abstinence from methamphetamine on the day of task assessment, as well as several other substances, including barbiturates, cocaine, opiates, and cannabis, in order to prevent any confounding acute effects of drugs on task performance. If on the day of task assessment a participant presented with a positive urine test for any of the tested substances, the assessment was rescheduled. Participants were abstinent on average 4.2 days before testing on the first day of treatment. Over the CM intervention, partial responders presented with an average of 13.17 negative urine samples out of a total of 24 (sd = 6.35), where the remaining 45% represented positive (including missed) urine samples, and 22% of total samples represented missed urine samples. A frequency-matched control group of 20 participants who did not use substances of abuse, other than tobacco or occasional alcohol, was enrolled. Matching characteristics included age, education, gender, ethnicity, and broad intellectual function.Participants with MA Use Disorder were categorized according to their response to CM treatment as partial responders was accepted due to high prevalence of paired use of these substances with methamphetamine in Western Cape, South Africa . TobaccoMA Use Disorder patients underwent CM treatment, which required thrice-weekly scheduled clinic visits to provide urine samples, which were analyzed using radioimmunoassay strips to detect methamphetamine in urine over the prior 48\u201372 h. Integrity of urine test results was ensured by using supervised urine sample collection, which was further verified using temperature-sensitive strips on collection cups. Participants who provided MA-negative urine samples immediately received vouchers to be redeemed at a large supermarket (Pick n Pay). The value incrementally increased with each subsequent MA-negative urine test, demonstrating continued abstinence to a maximum of 4,850 Rand (USD $404) over the 8 weeks. If a MA-positive urine test was obtained, or if an appointment was missed with no attempt to reschedule the appointment to a future date that was within the number of days it takes to fully metabolize d-amphetamine, participants did not receive a voucher. The next MA-negative urine test following a positive was worth the starting 25 Rand. To sustain motivation, we used a \u201crapid reset\u201d procedure to return participants to their prior position on the CM schedule following three consecutive scheduled MA-negative urine tests.The Psychology Experiment Building Language (PEBL) 0.14 computerized version of the IGT was used . It consvia a desktop computer situated in a quiet, distraction-free room. Participants were instructed to select from four possible virtual decks on screen using a computer mouse, over a total of 100 trials, with the aim of maximizing net gains from the task. Participants were not time restricted, but took approximately 30\u201345 min to complete the task. 
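Stepping back to the CM schedule described earlier in this section, the sketch below simulates how voucher values could evolve across the 24 scheduled urine tests. The 25 Rand starting value, the reset after an MA-positive or missed test, and the "rapid reset" after three consecutive MA-negative tests follow the description above; the per-visit increment is an assumed placeholder, since the exact escalation steps are not reported in this excerpt.

```python
def simulate_voucher_schedule(urine_results, start=25, increment=15, reset_value=25):
    """Sketch of an escalating-reinforcement voucher schedule with a 'rapid reset'.

    urine_results: list of booleans, True = MA-negative sample, False = positive or missed.
    start / increment / reset_value are illustrative parameters, not the trial's exact values.
    Returns the total voucher value earned (in Rand).
    """
    value = start              # value of the next voucher to be earned
    pre_reset_value = None     # voucher value held just before the most recent reset
    streak = 0                 # consecutive negative tests since that reset
    total = 0
    for negative in urine_results:
        if negative:
            total += value
            if pre_reset_value is not None:
                streak += 1
                if streak >= 3:            # rapid reset: restore the prior schedule position
                    value = pre_reset_value
                    pre_reset_value = None
                    streak = 0
                    continue
            value += increment             # escalate with continued abstinence
        else:
            pre_reset_value = max(value, pre_reset_value or 0)
            value = reset_value            # a positive or missed test resets the voucher value
            streak = 0
    return total

# Example: 24 scheduled tests with a single lapse at visit 10
print(simulate_voucher_schedule([True] * 9 + [False] + [True] * 14))
```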
Participants were provided with headphones during administration of the IGT in order to hear the sound effects associated with obtaining either a net gain or a net loss on each deck selection. For both MA Use Disorder and control groups, participants\u2019 vision was first tested using the Snellen chart before administering the IGT. The IGT was then administered to participants. The magnitude effect is represented by a greater selection of the riskier decks A and B relative to decks C and D. This is indicative of a greater preference for short-term rewards and either an ability to withstand, or a lack of foresight regarding, the associated long-term losses. It is calculated by summing deck selections from the disadvantageous decks and subtracting them from the sum of advantageous deck selections, (C + D) \u2013 (A + B), with negative scores reflecting the magnitude effect. This net score was calculated for each of four blocks of 20 trials, excluding block 1 . The frequency effect is defined as a greater selection of decks B and D, relative to decks A and C, and demonstrates a preference for frequent short-term rewards and infrequent losses over infrequent rewards and frequent losses (Table 1). In order to incentivize performance, a voucher with a flat-rate value of 25 Rand, equivalent to USD $2, was offered to participants from both MA Use Disorder and control groups if an overall positive net payoff on the IGT was achieved. Linear mixed-effect (LME) models were fitted utilizing the nlme package in R, and post hoc contrasts were conducted on all LME models to compare the groups using Tukey\u2019s p-adjustment correction method, carried out with the lsmeans R package. For both the magnitude effect and frequency effect models, the inclusion of covariates did not improve the precision of estimates; in turn, covariates were excluded, and simpler models were retained. Additional demographics of interest are included in . The majority of MA Use Disorder patients and matched controls self-reported as \u201ccolored\u201d , where the term \u201ccolored\u201d loosely describes an ethnic group of persons of mixed European and African or Asian descent, who make up a substantial proportion of the Western Cape population, where the study took place. A minority of both MA Use Disorder patients and healthy controls self-reported as \u201cblack\u201d . The three groups did not differ in sex, age, or broad intellectual function but differed in years of education, employment, and household income (Table 2). The random effect for participants was significant (p < 0.001), suggesting the impact that individual variability plays in estimating the magnitude effect. Group contrasts from the LME magnitude effect model demonstrated a significant difference between partial responders and healthy controls in magnitude effect, with a large effect size. More specifically, partial responders favored decks tied to large, short-term reward and withstood long-term loss more than healthy controls . In the LME frequency effect model, a group difference in frequency effect was exhibited between full responders [mean (SE) = 5.18 (1.11)] and partial responders [mean (SE) = 1.04 (1.33)], where full responders demonstrated a greater tendency than partial responders to favor frequent rewards and avoid frequent losses. With respect to the magnitude effect, this is in contrast to previous studies of IGT performance by healthy individuals in particular, where healthy individuals were found to be able to obtain net positive gains on the IGT , 3. Decision-making among healthy samples is also influenced by the frequency with which rewards and losses are presented , 11.
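The block-wise scoring defined above can be restated compactly: the magnitude effect score is (C + D) \u2013 (A + B) and the frequency effect score is (B + D) \u2013 (A + C), each computed over 20-trial blocks with block 1 excluded. The sketch below is a direct transcription of those formulas; the demo selection sequence is randomly generated, not participant data.

```python
import random

def igt_scores(selections, block_size=20, skip_blocks=1):
    """Block-wise IGT scores from a sequence of deck choices ('A'-'D').

    Magnitude effect score: (C + D) - (A + B); negative values reflect a preference
    for large immediate rewards despite long-term losses.
    Frequency effect score: (B + D) - (A + C); positive values reflect a preference
    for frequent rewards and infrequent losses.
    """
    blocks = [selections[i:i + block_size] for i in range(0, len(selections), block_size)]
    scores = []
    for block in blocks[skip_blocks:]:                      # block 1 excluded, as above
        n = {deck: block.count(deck) for deck in "ABCD"}
        scores.append({
            "magnitude": (n["C"] + n["D"]) - (n["A"] + n["B"]),
            "frequency": (n["B"] + n["D"]) - (n["A"] + n["C"]),
        })
    return scores

random.seed(0)
demo = [random.choice("ABCD") for _ in range(100)]          # hypothetical 100-trial session
print(igt_scores(demo))
```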
In Baseline IGT performance differences associated with response to CM indicate that an individual\u2019s cognitive strategy for balancing reward and potential loss can be an important factor to consider in deciding whether CM is the best treatment for a particular client. The very nature of CM, which involves forgoing immediate gain (from drug use) for a greater long-term gain (vouchers for abstinence), is consistent with greater therapeutic success of clients who can avoid immediate, large rewards that carry the risk of long-term loss. The findings also point to the influence of the frequency with which such decision alternatives arise. Future work confirming links between maladaptive decision-making and outcomes of CM treatment for MA Use Disorder might offer quick, affordable methods to separate persons most likely to fully respond from those who respond relatively less so to CM.There are several limitations in this study. Sample size was small, but hypothesized meaningful findings were still obtained, and so was sufficient in size to test hypotheses. Groups were not perfectly matched against all potentially relevant sociodemographic, cognitive, and drug-use factors that may covary with performance, and models were run in absence of any covariates, which could lead to under- or overestimation of model estimates in small samples. Steps were taken to increase the precision of model estimates with use of LME models, which account for potential confounding effects of individual differences in performance. Moreover, groups were not examined on executive functioning capabilities, which have been strongly tied to performance on IGT , 5. As sPartial responders to CM exhibited maladaptive decision-making as compared with healthy controls, reflected by the favoring of large, immediate rewards over long-term gains\u2014the magnitude effect. Partial responders and full responders also appeared to differ in frequency effect, where full responders demonstrated a greater preference for frequent rewards and avoided frequent losses more than partial responders. Evidence of group differences in magnitude effect and frequency effect suggests a difference in decision-making profiles, with different associated implications for treatment response on CM. In particular, the finding that the magnitude effect was more linked to lowered response to CM whereas the frequency effect was associated with positive response suggests that the magnitude effect is a risk factor for relapse during CM treatment, whereas the frequency effect may act as a cognitive strategy that predicts greater CM treatment success in the form of sustained abstinence.The datasets generated for this study are available on request to the corresponding author.The studies involving human participants were reviewed and approved by Health Sciences Human Research Ethics committee of the University of Cape Town and UCLA Institutional Review Board. The patients/participants provided their written informed consent to participate in this study.ML conceived the study focus with support from JI. SS conceived the broader study design with additional contributions from LN, EL, and DS. LN project managed the broader study and was in charge of data acquisition with assistance from ML. ML conducted analysis of data. ML interpreted findings of data with assistance from JI. ML wrote up the paper; revisions were obtained from all authors, with the biggest contributions from EL, ST, and SS. 
Final approval of the manuscript was obtained from all authors.This work was supported by the National Institute on Drug Abuse R21DA040492-01 [FAIN No. R21DA040492] and the Department of Psychiatry and Mental Health, University of Cape Town. ST also acknowledges salary support by the VA Office of Academic Affiliations through the National Clinician Scholars Program. SS acknowledges salary support by the National Institute of Mental Health P30 058107\u2014Center for HIV Identification, Prevention and Treatment Services, University of California, Los Angeles, Center for AIDS Research grant AI028697. DJS acknowledges salary support by the South African Medical Research Council.The contents do not represent the views of the South African government, U.S. Department of Veterans Affairs or the United States Government.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Diffuse intrinsic pontine gliomas (DIPGs) are a pontine subtype of diffuse midline gliomas (DMGs), primary central nervous system (CNS) tumors of childhood that carry a terrible prognosis. Because of the highly infiltrative growth pattern and the anatomical position, cytoreductive surgery is not an option. An initial response to radiation therapy is invariably followed by recurrence; mortality occurs approximately 11 months after diagnosis. The development of novel therapeutics with great preclinical promise has been hindered by the tightly regulated blood\u2013brain barrier (BBB), which segregates the tumor comportment from the systemic circulation. One possible solution to this obstacle is the use of convection enhanced delivery (CED), a local delivery strategy that bypasses the BBB by direct infusion into the tumor through a small caliber cannula. We have recently shown CED to be safe in children with DIPG (NCT01502917). In this review, we discuss our experience with CED, its advantages, and technical advancements that are occurring in the field. We also highlight hurdles that will likely need to be overcome in demonstrating clinical benefit with this therapeutic strategy. Pediatric diffuse midline gliomas (DMGs) are tumors of childhood with universally poor prognoses; despite accounting for only 10\u201320% of all pediatric central nervous system (CNS) malignancies, they are the most prevalent cause of death due to brain cancer in children ,4. Such The challenges to the successful treatment of DIPG are numerous. Firstly, cytoreductive surgery, an important positive predictor in most primary CNS neoplasms, is not possible, owing to highly infiltrative growth patterns and the anatomical position within the pontine segment of the brain stem. Historically, biopsies were thought to be of little benefit and have high morbidity . This re2, 40 mg/m2, 40 mg/m2, and 20 mg/kg/d for each drug, respectively. The authors found that median survival was longer than their historical control ; there was no difference, however, when the time from RDT was considered, meaning that the benefit observed was due to the chemotherapy-mediated delay in disease progression. The authors, however, also note the common development of grade III and IV complications and an overall longer length of hospital stay for patients on chemotherapy. 
The development of significant toxicities excluded this regimen from current clinical practice. Conventional and targeted therapeutic agents have had little success to date; targeted therapies in the treatment of DIPG have, thus far, met a similarly unsuccessful fate riddled with poor responses and significant toxicities. Tamoxifen (estrogen-receptor modifier),32 bevacizumab, and other agents have been evaluated without clear benefit. Given the paucity of tissue, and the overall low disease incidence, the rise of collaborative efforts aimed at changing disease prognosis has been one of the few positive notes of recent decades. Starting in the 2000s, not-for-profit foundations have been sponsoring DIPG research, resulting in the founding of the DIPG Collaborative and its affiliated DIPG Registry\u2014a comprehensive database of clinical, radiological, pathologic, and molecular data ,40. These concerted efforts have allowed for the discovery of important novel DIPG features that, hopefully, will yield accessible therapeutic targets. For instance, it was determined that DIPG has an erratic epigenetic profile, with a majority of samples harboring a H3K27M mutation which, by altering histone proteins, impairs polycomb repressive complex 2 (PRC2) methyltransferase, leading to a global hypomethylation of H3K27 . One of the main issues with systemic chemotherapeutics is the inaccessibility of the tumor compartment via traditional, systemic routes because of the tightly regulated blood\u2013brain barrier (BBB) and the blood-tumor barrier (BTB). The BBB impedes drug CNS penetration via endothelial tight junctions . Convection-enhanced delivery (CED) is a technique that relies on direct cannula implantation into the brain or tumor for delivery of an infusate through a pressure gradient and could overcome these issues by locally delivering high drug concentrations while minimizing systemic exposure and hence toxicity.49 So long as the pressure of delivery (PD) is superior to the pressure of tissue (PT), delivery will continue at high concentrations. On the other hand, when PD < PT, a steep drop in concentration will occur. The almost-constant concentration where PD > PT allows for the even permeation of homogeneous tissue and guarantees a more favorable spatial profile, i.e., it allows for coverage of a greater volume at high concentrations than simple diffusion, which decays exponentially. CED has numerous other advantages when compared to systemic delivery of a drug. Its performance, however, also depends on the delivery hardware, with flow characteristics at the cannula tip known to be directly proportional to catheter gauge. Multiport cannulas reduce the turbulent flow observed at the end of end-port cannulas, where high velocities can reduce the overall volume of distribution, with flow being turbulent rather than linear. Porous-tipped catheters increase the distribution of infusate into the surrounding gel substrate and murine brain tissue. Balloon-tipped catheters, albeit rarely used for CED, have been employed in the treatment of post-resection cavities for maximal permeation of the tumor penumbra . Catheter backflow is another significant hindrance to CED . DIPG is a logical tumor prototype for using a CED therapeutic backbone for several reasons. Relative to other infiltrative gliomas, DIPG is more constrained within a limited anatomical compartment, meaning that representative drug distribution should be more easily achieved. The avoidance of cytoreductive surgical tumor removal leaves the tumor milieu without gross inhomogeneities, a desirable feature for CED.
Metastatic tumor dissemination is known to occur in DIPG but typically not early in the disease continuum. Lastly, the urgent need for innovative therapeutic strategies for a universally fatal disease provides a lesser threshold for regulatory approval for unconventional strategies like CED. Balancing these appealing features early on were the predictable intolerance of the brain stem to stressors, the use of an otherwise unnecessary surgical procedure, and an uncertain clinical risk profile. To date, a sample of clinical studies have demonstrated the safety and feasibility of this approach via injection of a variety of agents ; however, all these studies had great variability in technique, infusate, and hardware used, making it difficult to draw unifying conclusions about their efficacy and on the role of each independent variable assessed ,59,60,61Given the variability in technique and hence the lack of existing data to define safe or meaningful parameters related to CED in the brain stem, we designed a Phase 1 clinical trial (NCT01502917) that would for the first time use an iterative dose, volume, and rate escalation design. Children with a clinical and radiographic diagnosis of nonprogressive DIPG who underwent standard radiation therapy were eligible for enrollment . CED of 124I-Omburtamab is a monoclonal antibody that targets the membrane-bound protein CD276 (B7-H3), an immune modulator part of the B7 superfamily overexpressed in DIPG and other pediatric CNS cancers - and [18F]-labeled agents [124I] allows for more accurate long-term tracking of drug behavior, the rapid decay of [18F] , which reduces ionizing radiation to a patient, and its ready availability in cancer centers that use fludeoxyglucose for tumor mapping, make [18F] a preferred isotope for translation, especially if a patient must receive multiple drug doses. Novel key aqueous radiochemistry allows to avoid the use of protective groups during radiosynthesis, thus making the process easier, more rapid, and non-interfering with the molecule\u2019s original binding pocket [18F]\u2013[19F] PET isotopic exchange radiolabeling of drug molecules that bear complex functionality. Agents such as antibodies and peptides have long in vivo half-lives, making [124I] more suitable for their long-term imaging.The use of PET has evident advantages over the delivery of non-imageable agents. However, the number of therapeutics that can be directly imaged is limited. Nonetheless, recent developments in synthetic radiochemistry have expanded the library of compounds that can be modified and transformed into theranostics, maintaining the original compound\u2019s bioactivity. For instance, we have experience in generating [d agents ,78,79,80g pocket ,82. ThisLogically, maximal tumor coverage by the administered therapeutic compound is a desired goal. Simply, there should be at least complete overlap between the Vtum and Vd. Using that criterion as a measure of successful drug administration, our estimations of drug\u2013tumor intersect reveal large variance, ranging between 25% and 96%. Logically, smaller Vtum and greater Vi would both lead to greater Vd and hence tumor intersect. In addition, enhanced targeting to avoid longitudinal white matter tracts, pial or ependymal surfaces, and necrotic/cystic regions will undoubtedly play a role in optimizing tumor coverage. 
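The drug\u2013tumor intersect discussed above reduces to a simple overlap calculation once the distribution volume (Vd) and tumor volume (Vtum) have been segmented on co-registered images. The sketch below assumes boolean voxel masks and a known voxel volume; it is a schematic calculation under those assumptions, not the image-analysis pipeline used in the trial.

```python
import numpy as np

def tumor_coverage(vd_mask, vtum_mask, voxel_volume_mm3=1.0):
    """Percent of the tumor volume covered by the drug distribution volume.

    vd_mask, vtum_mask: co-registered boolean 3D arrays (segmentation assumed done upstream).
    """
    overlap = np.logical_and(vd_mask, vtum_mask).sum()
    vtum_voxels = vtum_mask.sum()
    return {
        "Vd_mL": vd_mask.sum() * voxel_volume_mm3 / 1000.0,
        "Vtum_mL": vtum_voxels * voxel_volume_mm3 / 1000.0,
        "coverage_percent": 100.0 * overlap / vtum_voxels if vtum_voxels else float("nan"),
    }

# Toy example with synthetic masks, for illustration only
rng = np.random.default_rng(0)
tumor = rng.random((40, 40, 40)) > 0.7
drug = rng.random((40, 40, 40)) > 0.5
print(tumor_coverage(drug, tumor, voxel_volume_mm3=0.5))
```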
In future trials aimed at demonstrating clinical benefit, degree of tumor coverage will be monitored for any importance of outcome.It is essential to maximize tumor coverage with the therapeutic infusate given, both over space and time. Failure to do so could result in the development of resistance and, eventually, tumor recurrence ,84,85. A124I-8H9, tolerating the procedure well; however, each intervention required a new surgery [A second issue pertains to the behavior of infusates following CED; since convection relies on the establishment of a pressure gradient between infusion front and brain parenchyma, infusates are most likely to fall into low pressure wells, such as tumor cystic components or catheter tract (backflow). To overcome these issues, new step catheters are being developed to reduce backflow ,90,91,92 surgery .124I-8H9, the distributive volume and pharmacokinetics profile of most infused chemotherapeutics cannot be readily visualized\u2014monitoring tools like CSF sampling or tissue biopsy are burdensome and carry significant morbidity. Co-infused agents, albeit effective at approximating distributive features at time 0, are not reliable longitudinally [These issues are compounded by the lack of accurate and non-invasive drug monitoring tools. With the exception of radiolabeled agents as udinally . For thiudinally ,94,95 orudinally ,75,96,97CED has the potential of achieving high regional drug concentrations while limiting overall body exposure to a therapeutic. It remains unclear, however, if such an approach could be curative for DIPG. In fact, various recent studies have shown how, at the time of autopsy, up to a third of patients had leptomeningeal disease spread and a fourth had disease outside the brainstem . Other sIt is clear, therefore, how CED could achieve regional disease control but fail at covering other distant areas. Nonetheless, it holds the potential for controlling brainstem pathology and, if coupled with other innovative approaches, such as craniospinal radiation or intrathecal delivery of chemotherapeutics, could change, at least in part, the dire prognosis . The use124I-8H9 (NCT01502917) [The best agent (or combination of agents) to be given via CED is still matter of debate; however, the last few years have seen an increase in CED-based clinical trials for DIPG; besides 1502917) , IL13-Ps1502917) ,99,100. Our experience shows how CED is a safe technique in treating DIPG and, if further developed, could hopefully achieve local tumor control. However, numerous hurdles \u2014 ranging from further understanding of pharmacokinetics to optimization of therapeutic agent \u2014 remain to be overcome before such a goal could be realistically reached. Further, given DIPG\u2019s behavior and early distant spread, CED will most likely be one tool among many in the arsenal necessary to tackle DIPG and change its otherwise abysmal prognosis."} +{"text": "In this review, we summarize the recent progress in understanding the Ca2+/CPKs signal pathway governing PT growth. We also discuss how this pathway regulates PT growth and how reactive oxygen species (ROS) and cyclic nucleotide are integrated by Ca2+ signaling networks.Pollen tube (PT) growth as a key step for successful fertilization is essential for angiosperm survival and especially vital for grain yield in cereals. The process of PT growth is regulated by many complex and delicate signaling pathways. 
Among them, the calcium/calcium-dependent protein kinases (Ca In plants, there are four main classes of calcium sensors: calmodulin (CaM) or CaM-like proteins (CMLs), calcineurin B-like proteins (CBLs), CBL interacting protein kinases (CIPKs), and the calcium-dependent protein kinases (CPKs) and their relatives, CDPK-related kinases growth, which is crucial for sexual reproduction in flowering plants. Successful fertilization begins with pollen grains landing on the stigma and germination of the PT. Upon pollen landing on the stigma, the PT rapidly elongates and penetrates the transmitting tract to deliver the immotile sperm to the ovule for double fertilization and cyclic nucleotide.The calcium ion , the cyclic nucleotide-gated channels , and regulation of SACs, as well as the contribution of F-actin and ROP1 signaling followed by a PKD and an auto-inhibitory junction domain (JD) that is linked to the C-terminal calmodulin-like domain (CaMLD) with EF-hand Ca2+-binding sites impaired both the pollen germination and growth pathway awaits further confirmation negatively affect PT elongation by mediating the Ca2+-dependent inhibition of the inwardly rectifying K+ channels and likely affect pollen development by transcriptional control of gene expression was inhibited by AtCPK24, which is phosphorylated and activated by AtCPK11 activation is confirmed by reverse genetics and electrophysiology is functionally validated for Ca2+ influx across the plasma membrane of PT. Some research reveals a potential feed-forward mechanism in which CPK32 activates CNGC18, further promoting calcium entry during the elevation phase of Ca2+ oscillations in the polar growth of PTs NADPH oxidases and also involved in PT growth. In Arabidopsis, RBOHH and RBOHJ were revealed to not only slow down PT growth but also maintain PT integrity when regulated by the RALF-BUPS/ANX complex and numerous pistil factors . ROS generated by NADPH oxidases (NOXs) that are shown to be involved in various processes in PT growth, including germination, polarized, and ovule-targeted growth, and PT burst during fertilization need exhaustive investigation (2+/CPKs pathway with other signal pathways will lead to important insights into the mechanisms of PT growth. The progress of experimental techniques such as various omics techniques, Y2H screens, CRISPR/Cas gene editing, and RNAi by directly adding the siRNAs into the PT culture medium (2+/CPKs signal pathway in PT growth.Although substantial progress has been made in the past decades, the mechanism of the Catigation . Moreovee medium , variouse medium , microfle medium , and come medium will proHY, CY, SY, and YZ wrote the manuscript. FY, NC, XL, YL, and XH revised and critically evaluated the manuscript. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Simultaneous visualisation of vasculature and surrounding tissue structures is essential for a better understanding of vascular pathologies. In this work, we describe a histochemical strategy for three-dimensional, multicolour imaging of vasculature and associated structures, using a carbocyanine dye-based technique, vessel painting. 
We developed a series of applications to allow the combination of vessel painting with other histochemical methods, including immunostaining and tissue clearing for confocal and two-photon microscopies. We also introduced a two-photon microscopy setup that incorporates an aberration correction system to correct aberrations caused by the mismatch of refractive indices between samples and immersion mediums, for higher-quality images of intact tissue structures. Finally, we demonstrate the practical utility of our approach by visualising fine pathological alterations to the renal glomeruli of IgA nephropathy model mice in unprecedented detail. The technical advancements should enhance the versatility of vessel painting, offering rapid and cost-effective methods for vascular pathologies. Vessel painting is a method of labelling the entire vasculature of small mammals through perfusion of a lipophilic carbocyanine dye, DiI 9. Recently, we improved labelling intensity and uniformity, as well as the reproducibility of the technique, by introducing a neutral liposome and a hydrophilic DiI analogue10.Blood vessels form a network that delivers molecules and cells throughout the body. Because of their three-dimensional (3D) nature, it is difficult, or at least painstaking, to understand their distribution by observing conventional histological sections. Recent advancements in optical sectioning microscopy, such as confocal, multi-photon, and light-sheet microscopies, have made volume imaging of vasculature much easier. These fluorescence-based microscopies require effective and specific labelling techniques for imaging vasculature in 3D specimens. These techniques include genetically encoded fluorescent proteins, specific probes such as antibodies and lectins, or infusion of space-occupying materials9. Third, its compatibility with various tissue clearing protocols has not been tested, with the exception of an expensive proprietary reagent of unknown composition, FocusClear7. Finally, its compatibility for multi-labelling with other types of probes, such as antibodies or fluorescently labelled small molecules, except for nuclear staining10, has not been carried out.Although vessel painting is an easy and cost-effective technique to label vasculature intensely, it has several limitations. First, the excitation/emission spectra of DiI largely overlap with popular red fluorophores, such as Alexa 568 and mCherry, which limits the combination of probes for multi-labelling. Second, liposome-mediated vessel painting has only been applied to the vasculature of the central nervous system (CNS) of small rodents and its applicability to other organs needs to be tested12 that is incorporated within our two-photon microscope. Then, we selected the mouse kidney as a model organ to demonstrate the versatility of vessel painting combined with various histochemical techniques. To this end, we developed a thin-sectioning-free and rapid multi-labelling method for 3D visualisation of renal structures, especially glomeruli. Finally, we combined vessel painting with a tissue clearing protocol and other labelling methods to image pathological changes of the glomeruli of hyper-IgA (HIGA) mice, an IgA nephropathy model strain15. Confocal and two-photon microscopies of the multi-labelled glomeruli from HIGA mice successfully visualised minute lesions that had only been observed with transmission electron microscopy previously. 
With the novel aberration-correction technique, it was demonstrated that two-photon microscopy could visualise entire glomeruli at subcellular resolution in renal specimens and the alteration of podocyte distributions in HIGA mice.In this study, we aimed to extend the utility of vessel painting by overcoming the limitations mentioned above. We tested several commercially available DiI analogues and successfully introduced three additional dyes with green (DiO), deep red (DiD), and near-infrared (DiR) fluorescence to this technique to increase the number of choices for available colours. Then, we confirmed that the liposome-mediated vessel painting is applicable for imaging the vasculature of various non-CNS organs. We also sought tissue clearing protocols that are compatible with vessel painting. In this exploration, we also developed a novel aberration-correction technique that makes a single objective compatible with immersion fluids with a broad range of refractive indices (RIs), from water to tissue clearing reagents with high RIs. This novel technique improves the image quality deteriorated by the RI mismatch between an objective and an immersion medium through wavefront control of the incident light using a spatial light modulator (SLM)10. Therefore, for clarity, we designate the number of carbons in the alkyl chains in parentheses after the name of the dye .DiI and its analogues are hydrophobic carbocyanine dyes that can label a plasma membrane by inserting alkyl chains into the lipid bilayer. In liposome-mediated vessel painting, carbocyanine dye molecules are first inserted into the liposomal membrane and the liposomes fuse with the plasma membrane upon perfusion has been the only carbocyanine dye used for liposome-mediated vessel painting. The use of other DiI analogues with different fluorescence properties for this technique could be convenient. Therefore, we tested commercially available DiI analogues with green , deep red [DiD(C18)], and near-infrared [DiR(C18)] fluorescence for liposome-mediated vessel painting , Neuro-DiO(C18), DiD(C18), as well as DiI(C12) and imaged cerebral vasculature with confocal microscopy can improve imaging depth by approximately 20% compared with DiO(C14), for both brain and kidney tissue, under the same excitation wavelength condition.Generally, probes with longer emission wavelengths allow deeper imaging. In our study, we evaluated the effect of the emission spectra of the carbocyanine dyes on deep-tissue imaging by two-photon microscopy Fig.\u00a0. We foun17, ScaleSQ(0)18, and OPTIClear19. Although those protocols were originally developed for brain tissue, we also tested their clearing efficiency for kidney tissue.Next, we tested the combination of vessel painting and tissue clearing protocols for deeper imaging. Since lipophilic carbocyanine dyes get extracted along with lipids by organic solvents or detergents, we selected three tissue clearing protocols that are free from these reagents: SeeDBIn this experiment, we used a novel SLM-based technique to correct aberration that occurs at the interface of an objective and an immersion fluid. An SLM incorporated into the two-photon microscope modulates the incident light wavefront, cancelling the aberration caused by the mismatch of the recommended RI of an objective with the actual RI of the medium in which the sample is immersed , or OPTIClear, as well as control samples in phosphate buffer saline (PBS). 
We confirmed that all three tissue clearing protocols are compatible with vessel painting and improve imaging depth in both brain and renal tissues and cleared with SeeDB, Scaues Fig.\u00a0a,b. Amonues Fig.\u00a0. Our abeNext, we attempted to develop a multi-labelling method with various probes after vessel painting. Although carbocyanine dyes are incompatible with common permeabilisation treatment with detergents, we expected that fixed dead cells would be macromolecule permeable. We also assumed that a porous tissue is especially favourable for the diffusion of probes. Hence, we explored the possibility of permeabilisation-free post-fixation labelling with kidney slices.We first performed preliminary experiments to test whether a small probe and an antibody can label 3D structures on renal slices without permeabilisation with a detergent and then triple-stained with DAPI, anti-\u03b1-tubulin antibody, and rhodamine-phalloidin to visualise nuclei, cell bodies and major processes of podocytes, and foot processes, respectively. Confocal microscopy of those quadruple-stained glomeruli revealed that the phalloidin-labelled foot processes of HIGA mice include numerous foam-like structures, known as glomerular basement membrane (GBM) nodules20 , label cells by inserting their alkyl chains into the lipid bilayer of the plasma membrane22. Those dyes are poorly soluble in an aqueous medium and can be applied in crystalline form for axon tracing experiments23. During vessel painting, the aggregation of hydrophobic dyes can result in heterogeneous staining due to the occlusion of capillaries10. Liposomes are the means of choice to deliver various substances, including hydrophobic ones, to biological systems24. It has been reported that liposomes can deliver hydrophobic fluorescent dyes to the plasma membrane of cultured cells through membrane fusion25. The liposome-mediated vessel painting technique was devised based on the hypothesis that the reproducibility and staining efficiency of vessel painting would be improved by the prevention of carbocyanine dye aggregation10.To visualise vasculature, several methods, such as genetically encoded fluorescent marker proteins, fluorescently labelled probes, and fluorescent space occupants, have been widely used27. An important factor for successful vessel painting is the hydrophobicity of the carbocyanine dye used. A dye that is too hydrophobic aggregates readily in an aqueous environment and results in reduced labelling intensity and reproducibility10. Hydrophobicity for the carbocyanine dyes used in this study appears to be affected by three factors. First, when the fluorophore is the same, a dye with shorter alkyl chains is more hydrophilic. For example, DiO(C18) and DiI(C18) were almost-completely and partially insoluble in ethanol at a concentration of 5\u00a0mM, respectively, while DiO(C14) and DiI(C12) were readily soluble. Second, when the lengths of alkyl chains are the same, a carbocyanine dye with a longer emission wavelength is more hydrophilic. DiD(C18) and DiR(C18) dissolved easily in ethanol at 5\u00a0mM, while DiO(C18) and DiI(C18) did not. This may be explained by the fact that a carbocyanine dye with a longer wavelength has a longer polymethine linker28 showed much better solubility in ethanol than DiO(C18), its liposome solution sometimes caused leakage of the perfusate from the airway. Similar leakage was frequently observed in our previous study with DiI(C18)10. 
The cause of the leakage is likely to be the aggregation of highly hydrophobic dye molecules in the aqueous working solution and the subsequent occlusion of the lung capillaries by those aggregates. Occluded capillaries are vulnerable to rupture due to the increase in local perfusion pressure. In fact, we have observed occlusion and rupture of capillaries in mouse brains perfused with DiI(C18)10. Therefore, the hydrophilicity of Neuro-DiO(C18) might not be sufficiently high for liposome-mediated vessel painting. The factors mentioned above may also affect the diffusion rate of a carbocyanine dye in the membranes of axons. Indeed, DiI(C18) and DiD(C18) show faster diffusion rates than DiO(C18)30. Less stackable dyes may diffuse faster on two-dimensional membranes because of a reduced tendency to form large multimers.In this paper, we have introduced new carbocyanine dyes, DiOC14), DiD(C18), and DiR(C18), for liposome-mediated vessel painting and DiD(C18) with a two-photon laser and confirmed DiD(C18) is more favorable for deeper imaging of highly scattering fixed tissues36. In this experiment, it was possible to simultaneously excite both dyes via two-photon laser excitation at 820\u00a0nm, although DiD(C18) has almost no absorption around 400\u00a0nm 18. SeeDB is one of the simplest and the least expensive methods and involves immersing samples in a graded series of fructose solutions17. ScaleSQ(0) is a detergent-free variation of ScaleS. It shows excellent preservation of tissue structure and even transmission electron microscopy is possible after tissue clearing18. OPTIClear has strong clearing capacity and provides good results even with over-fixed specimens, which are not suited to most aqueous-based clearing methods19. Those protocols were originally designed to clear brain tissue and the application to mouse kidney tissue has been reported only for SeeDB41. To the best of our knowledge, these three protocols have not been compared so far in the literature. We confirmed that all of the tested protocols can be combined with vessel painting and strongly improve the imaging depth in two-photon microscopy. Among them, OPTIClear provided the best transparency for both brains and kidneys. However, it should be noted that tissue clearing protocols differ not only in terms of achievable transparency but also in terms of cost, time required, procedure complexity, and compatibility with probes. For example, OPTIClear produces excellent transparency, but uses relatively expensive reagents compared to most other tissue clearing methods. The best protocol can differ depending on the research target or means of observation. Compatibility with DiIs was also demonstrated for a recently developed tissue clearing protocol, MACS42. This protocol also appears promising for clearing tissues after liposome-mediated vessel painting.Tissue clearing techniques that make various tissues and organisms transparent43. To image samples in various immersion media with different RIs, several different objectives are generally required. For example, objectives customised for different tissue clearing solutions are commercially available44. Another choice is to use an objective with a correction collar. In this case, the physical adjustment of the collar by hand or an automated system is required45.In the present study, we introduced a novel technique to correct the spherical aberration caused by a mismatch between the optimal RI for an objective and the actual RI of an immersion medium. 
The reduction in image quality generated by such aberrations is more severe for long-working-distance (WD) objectives, which are typically used for two-photon microscopy of cleared specimens, because the effect of the aberration is proportional to the thickness of a medium between the objective and its focus46. This makes the optical system simpler and less expensive. With our aberration-correction technique, we used a single water-immersion objective to successfully acquire high-quality images from mouse brain and kidney tissues immersed in high-RI clearing solutions.Our aberration-correction technique is based on wavefront modulation by an SLM. An SLM incorporated within a two-photon microscope can electrically correct the deviation of the focal point of an objective immersed in a medium with non-optimal RI value to reduce spherical aberration. It allows the adoption of a single immersion lens for a variety of media with a wide range of RI, from water (RI\u2009=\u20091.33) to various tissue clearing solutions (RI\u2009=\u2009~\u20091.50), and even epoxy resin (RI\u2009=\u20091.59). Furthermore, our technique works if the RI of a medium is known and therefore does not require feedback from a wavefront sensor for adaptive optical aberration correctionVessel painting has been mainly used for single staining. To make the technique more versatile, we attempted to combine it with other histological methods, which are applied to the tissue after vessel painting. As mentioned above, permeabilisation with a detergent cannot be applied to tissues labelled with a lipophilic carbocyanine dye used for vessel painting. However, we expected that at least the near-surface volume of the tissue slices could be labelled by other probes, such as an antibody, since fixed dead cells lose their selective permeability for macromolecules.In the present study, we selected the kidney as a model organ for two reasons: 1) it is a porous organ and hence is expected to be favourable for probe penetration, and 2) multi-labelling and volume imaging of blood vessels and associated structures in renal glomeruli are helpful for understanding the pathology of many kidney diseases. The glomerulus is a unit for ultrafiltration composed of the glomerular capillary tuft, GBM, and podocytes. As expected, entire glomeruli exposed to the surface of renal slices could be labelled with small probes and even antibodies without specific permeabilisation steps in only a 1\u00a0h incubation period. The omission of thin sectioning enabled the imaging of 3D glomerular structures that had not been mechanically disturbed.48. The resolution obtained in the present study should be attributed to the use of a small probe , the omission of thin-sectioning (which disrupts fine structures), and the use of high-NA objectives or, or in combination with, the aberration correction technique. Although electron or super-resolution microscopies have, of course, better resolution, our strategy does not require specialised skills or equipment and can be performed as a simple extension of conventional histological analyses. On the larger scale, the combination of the fluorescent labelling, tissue clearing, and light-sheet microscopy allows the 3D visualisation of the vasculature of an entire mouse organ50. 
However, this approach is not suitable for the detailed observation of individual glomeruli within an entire organ because of the trade-off between resolution and the size of image files and/or because of difficulties in the immunostaining of large specimens. Our approach is an easy and rapid option for 3D imaging of glomeruli, or any structures associated with blood vessels, at subcellular resolution and can fill the gap between super-resolution and large volume 3D imaging strategies.To our surprise, we found that the imaging of the intact glomeruli resolved podocyte foot processes in some detail. Foot processes are generally believed to be unresolvable via diffraction-limited optics, and therefore they have been the subject of various super-resolution microscopy studies14 to increase the incidence of glomerulonephritis with IgA deposition in the ddY strain51. HIGA mice show a rapid increase in serum IgA level at between 10 and 25\u00a0weeks of age. Proteinuria occurs in 10% of them at 10\u00a0weeks of age and the incidence and severity increase with age14. Morphologically, transmission electron microscopy has revealed that the local thickening of the GBM (GBM nodule) increases markedly in this strain15. In the present study, GBM nodules were observed as abnormal foam-like structures of foot processes , six HIGA (25\u201330\u00a0weeks old), and six BALB/c (25\u201330\u00a0weeks old) female mice were obtained from Japan SLC . They were kept under the specific-pathogen-free condition on a 12-h dark/light cycle with food and water ad libitum until use. All animal experiments were approved by the Institutional Animal Care and Use Committee of Hamamatsu University School of Medicine (Permission number: 2018012) and were performed in accordance with relevant guidelines and regulations.Lycopersicon esculentum (tomato) lectin ; rhodamine-phalloidin ; anti-\u03b1-tubulin monoclonal antibody (mAb), Alexa 488 conjugate ; anti-acetylated tubulin mAb, clone 6-11B-1 ; anti-mouse CD31 antibody, Alexa 488 conjugate, clone MEC13.3 ; 1\u00a0mg/mL 4\u2032,6-diamidino-2-phenylindole (DAPI) solution ; \u03b1-thioglycerol and iohxol ; bovine serum albumin . Other reagents were purchased from FUJIFILM Wako Pure Chemical Corporation .The following reagents were used: DiO(C14), Neuro-DiO(C18), DiR(C18) ; DiO(C18), DiI(C12), and goat anti-mouse IgG antibody, Alexa 568 conjugate ; DiD(C18) ; 1\u00a0mg/mL medetomidine hydrochloride , 5\u00a0mg/mL midazolam , and 5\u00a0mg/mL butorphanol tartrate ; Coatsome-EL-01-N ; Neutral Niosome ; DyLight 594 labelled 21 was purified by MEP HyperCel\u2122 Chromatography . Fluorescent labelling of the mAb with fluorescein isothiocyanate (FITC) was performed as previously described54.Suncus mAb for type IV collagen (STF31)10, was discontinued, we tested other liposomes and found that Neutral Niosome (Nanovex Biotechnologies) provides comparable results. First, Neutral Niosome was added to PBS (10\u00a0mL per mouse) at a concentration of 1\u00a0mg/mL and incubated at 60\u00a0\u00b0C for approximately 30\u00a0min as per the manufacturer\u2019s instructions. Then, a stock dye solution was added at the final concentration of 50\u00a0\u03bcM (100 \u03bcL of a dye stock solution per 10\u00a0mL of liposome solution) and the mixture was immediately vortexed and sonicated. An injection device was constructed with a three-way stopcock and two 10\u00a0mL syringes, and a 25G butterfly needle in the dark. Working solution was prepared just before vessel painting. 
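To make the dilution arithmetic of the working-solution step above explicit, the short Python sketch below recomputes the implied dye stock concentration and the amount of dye perfused per mouse. The volumes and the 50 uM target are taken from the text; the variable names and rounding are illustrative only.

```python
# Rough check of the dye dilution implied by the protocol above (illustrative only).
# Assumption: a single carbocyanine dye stock in ethanol is diluted into the
# liposome/PBS working solution; the numbers come from the text, not a new protocol.

stock_volume_ul = 100.0      # 100 uL of dye stock added per mouse
liposome_volume_ml = 10.0    # 10 mL of 1 mg/mL Neutral Niosome in PBS per mouse
final_conc_um = 50.0         # target dye concentration in the working solution (uM)

total_volume_ml = liposome_volume_ml + stock_volume_ul / 1000.0
stock_conc_mm = final_conc_um * total_volume_ml / (stock_volume_ul / 1000.0) / 1000.0
dye_nmol = final_conc_um * total_volume_ml  # uM * mL = nmol of dye perfused per mouse

print(f"Required ethanol stock concentration: ~{stock_conc_mm:.2f} mM")
print(f"Dye delivered per mouse: ~{dye_nmol:.0f} nmol")
```

The result (roughly 5 mM) is consistent with the 5 mM ethanol stocks mentioned in the solubility comparison earlier in the text.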
Since Coatosme-EL-10-N, a neutral liposome used in the original protocol55. For transcardial perfusion, the needle of the injection device was inserted into the left ventricle, and the right atrium was cut for drainage. Then, 10\u00a0mL of the dye working solution and 10\u00a0mL of 4% paraformaldehyde in 0.1\u00a0M\u00a0PB (pH 7.4) were sequentially injected at a flow rate of\u2009~\u20092\u20133\u00a0mL/min. After perfusion, the organs of interest were isolated and further fixed in the same fixative for 1\u20122\u00a0h at RT or overnight at 4\u00a0\u00b0C on a shaker. Brains and lungs were processed without further dissection. Kidneys and livers were manually cut with razor blades into 1\u20132\u00a0mm thick slices and the intestine was cut into short segments and opened with dissecting scissors before immersion fixation. For double vessel painting with DiO(C14) and DiD(C18), 10\u00a0mL working solutions of DiO(C14) and DiD(C18) were prepared separately. They were then mixed and 20\u00a0mL of the dye mixture solution was perfused.Mice were anaesthetised by intraperitoneal injection of 10 \u03bcL/g body weight of mixed anaesthetic 3. Briefly, mice were anaesthetised as described above and a mixture of 100 \u03bcL of fluorescently labelled tomato lectin and 100 \u03bcL of saline was injected into the left ventricle with a 30G needle. After the heart was allowed to beat for about 1\u00a0min, 10\u00a0mL of PBS and 10\u00a0mL of 4% PFA in 0.1\u00a0M\u00a0PB (pH 7.4) were manually perfused as liposome-mediated vessel painting. For immunostaining, 15\u00a0\u00b5g of anti-CD31 antibody was diluted to a total volume of 100 \u00b5L with saline and retro-orbitally injected55. After 10\u00a0min, perfusion fixation was performed.Tomato lectin staining was performed as previously reportedCarbocyanine dyes were dissolved in ethanol at concentrations of 2\u00a0mM and 10\u00a0mM. The absorbance of each dye was measured with a U-3500 spectrometer .17, OPTIClear19, and a detergent-free form of ScaleS [ScaleSQ(0)]18. For SeeDB, samples were incubated in a graded series of fructose solutions for at least 3\u00a0h each, and then in SeeDB solution overnight. For ScaleSQ(0), samples were incubated in ScaleSQ(0) solution at 37\u00a0\u00b0C and then in ScaleS4(0) solution overnight at RT. For OPTIClear, samples were incubated in OPTIClear solution at least 4\u00a0h at RT. In all steps, samples were gently agitated on a shaker.Fixed brain and kidney were washed three times with PBS, for 10\u00a0min each, and subjected to one of the following tissue clearing protocols: SeeDBSamples were placed in a glass-bottomed dish and imaged with an SP8 microscope equipped with HC PL APO CS2 20\u2009\u00d7\u2009/0.75 DRY and HC PL APO CS2 63\u2009\u00d7\u2009/1.40 oil objectives (Leica). The pinhole was closed to its minimum diameter during all observations. Laser power at the sample surface was set, in each experiment, to the maximum possible value that did not result in signal saturation and was kept constant throughout each imaging session.58. A water immersion objective (XLUMPLFLN20XW 20\u2009\u00d7\u2009/1.0 Olympus) was used for all the experiments performed in this study. We incorporated an SLM into the two-photon microscope to allow electrical correction of aberrations12. 
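As background for the wavefront-correction step described next, a commonly cited textbook form of the spherical aberration produced by focusing through a refractive-index mismatch is reproduced below. This generic expression is given only for orientation and is not necessarily the exact equation used in this study.

```latex
% Generic refractive-index-mismatch aberration (orientation only; not necessarily
% the exact expression used in this study).
% Phi(rho): pupil phase, d: nominal focusing depth below the interface,
% NA: numerical aperture, n_1/n_2: refractive indices of the immersion medium
% and the clearing solution, rho: normalised pupil radius (0 <= rho <= 1).
\[
  \Phi(\rho) = \frac{2\pi d}{\lambda}
  \left[ n_2 \sqrt{1-\left(\frac{\mathrm{NA}\,\rho}{n_2}\right)^{2}}
       - n_1 \sqrt{1-\left(\frac{\mathrm{NA}\,\rho}{n_1}\right)^{2}} \right]
\]
```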
We predicted a wavefront aberration according to the following equation and cancelled it by applying a reverse wavefront:\u03bb is the wavelength of the excitation beam, \u03c1 is the normalised pupil radius, \u03b7 is the factor for changing the depth of the focal spot, n1 and n2 are the RIs of water and tissue clearing solutions, respectively, and NA and WD are the numerical aperture and the working distance of the objective used.Samples were attached to the bottom of a 60-mm plastic dish with cyanoacrylate glue and immersed in an imaging medium. Images were acquired by using a custom-made setup, as previously describedDiO, DiI, DiD, and DiR were excited with excitation wavelengths of 915\u00a0nm, 880\u00a0nm, 850\u00a0nm, and 915\u00a0nm, respectively. To compare the DiO and DiD imaging depths, the excitation wavelength was set to 820\u00a0nm, and the laser power was adjusted at the surface of the sample so as to ensure that the signal was not saturated; the power was kept constant during each imaging session.After liposome-mediated vessel painting, the kidneys were manually cut using a razor into 1\u20132\u00a0mm thick slices, washed three times for 10\u00a0min each with PBS and incubated in blocking solution for 1\u00a0h. For confocal microscopy, the samples were labelled in the blocking solution containing the following fluorescent labels , for 1\u00a0h, in various combinations: rhodamine-phalloidin (1:100), the anti-\u03b1-tubulin mAb DM1A-Alexa488 (1:100), and DAPI (1\u00a0\u03bcg/mL). The slices were washed three times for 10\u00a0min each with PBS and were imaged as described above. For two-photon microscopy, the samples were incubated in a blocking solution containing anti-acetylated \u03b1-tubulin mAb (1:100) for 1\u00a0h and washed with PBS three times (10\u00a0min each). Then, they were labelled with anti-mouse IgG Alexa 568 conjugate (1:100) for 1\u00a0h, washed three times with PBS, and fixed using 4% PFA in 0.1\u00a0M\u00a0PB (pH 7.4) for 1\u00a0h. Samples were then washed two times with PBS, transferred to PBS containing DAPI (1\u00a0\u03bcg/mL), and incubated for 30\u00a0min. Finally, they were attached to the bottom of a 60-mm plastic dish and incubated with OPTIClear solution for 3\u00a0h at 37\u00a0\u00b0C and then for at least 30\u00a0min at RT. All steps were performed at RT (except for the incubation in OPTIClear solution) with gentle agitation.Spectral data were handled with Igor Pro software . Confocal and two-photon images were acquired via LAS X software and custom-made software, respectively. Optical section images were manipulated with ImageJ software. Chemical structures were drawn with the ACD/ChemSketch freeware . Figures were prepared with Adobe Photoshop 2019 and Adobe Illustrator 2019 .Supplementary FiguresSupplementary Movie 1Supplementary Movie 2Supplementary Movie 3Supplementary Movie 4"} +{"text": "Streptobacillus moniliformis following a bite, scratch, or contact with excrement. Only 26 cases of native valve endocarditis have been reported to date. We could find no other reports of severe Streptobacillus endocarditis requiring valve replacement in a young, pregnant patient.Rat bite fever is a systemic febrile illness caused by infection with the Gram-negative bacillus Propionibacterium spp. was isolated from the mitral valve tissue on Columbia agar incubated anaerobically. Anaerobic and aerobic cultures of the valve tissue on all other broths and agars remained negative at 14\u2009days. 
Hematoxylin and eosin stains showed a fibro-inflammatory vegetation. Aggregates of rod-shaped bacteria were identified on Warthin Starry/Steiner stain. Bartonella titers were positive for B. henselae IgG 1:256, IgM <\u20091:20. Brown-Hopps Gram stain, AFB, and GMS stains for bacterial and fungal microorganisms were negative. Broad range bacterial PCR and sequencing of a segment of 16\u2009s rRNA gene of the valve tissue matched to Streptobacillus sp. (genus level) and most closely related to Streptobacillus moniliformis.A pregnant patient sought care for right leg pain, fevers, left upper quadrant pain, generalized weakness, fatigue, and inability to bear weight on her right leg. She had a syncopal episode 9 months earlier, resulting in a mandibular fracture and internal fixation hardware. Her pregnancy was complicated by hyperemesis and weight loss. Her pets included a rescued wild bird, a cat, and four rats. Her parents rescued stray cats, and she recalled multiple cat bites and scratches since childhood. She denied injection drug use. Ultrasound indicated a right popliteal artery thrombus. Transesophageal echocardiogram revealed a 2\u2009cm\u2009\u00d7\u20090.7\u2009cm vegetation. Angiography demonstrated multiple splenic infarcts and bilateral renal infarcts. She underwent mitral valve repair. The mitral valve Gram stain demonstrated 2+ Gram-negative rods, rare Gram-positive rods, and moderate white blood cells. This case demonstrates diagnostic and therapeutic challenges associated with a relatively uncommon cause of endocarditis. The diagnosis of rat bite fever was delayed due to symptoms of a concomitant pregnancy. Other confounders included possible alternative sources or co-infections with another zoonosis from multiple pets, and an odontogenic source due to presence of exposed jaw hardware. Rat bite fever typically begins with a bite or other exposure, followed by abrupt onset of systemic illness, including intermittent relapsing fever, arthritis, and rash 3\u2009days to 3\u2009weeks later. A maculopapular, petechial, or purpuric rash develops in approximately 75% of those affected in the first symptomatic week , 2. OverStreptobacillus endocarditis in a pregnant patient. This case is noteworthy because it demonstrates diagnostic and therapeutic challenges associated with a relatively uncommon cause of endocarditis. For example, the typical symptoms of rat bite fever were masked by the symptoms of a concomitant pregnancy. Other diagnostic confounders included possible alternative sources or co-infections with another zoonosis from multiple exposures, and an odontogenic source due to presence of exposed jaw hardware. It is novel because we could find no other reports of severe Streptobacillus endocarditis requiring valve replacement in a young, pregnant patient.We sought to alert clinicians to the challenges and potential pitfalls in the diagnosis and management of recurrent A previously healthy 24\u2009year-old female, who was 13\u2009weeks pregnant, sought care for 2 weeks of severe right leg pain and fevers (39.4 degrees C recorded at home), which progressed to bilateral calf and left upper quadrant pain, generalized weakness, fatigue, and inability to bear weight on her right leg. Her past medical history was notable for a syncopal episode 9\u2009months earlier, resulting in a mandibular fracture and internal fixation hardware. Her pregnancy was complicated by hyperemesis and weight loss. Her pets included a rescued wild bird, a cat, and four rats. 
Her parents rescued stray cats, and she recalled multiple cat bites and scratches since childhood. She also allowed her rats to nibble on her fingers, most recently several weeks prior to admission. The patient also complained of exposed mandibular hardware. She denied injection drug use but used marijuana for nausea. She reported a history of rash when taking penicillin.She was pale and had a temperature of 38.4 C, with diminished pedal and tibial pulses on the right. She had a 2/6 high pitched blowing holosystolic murmur, radiating to the axilla. She also had left upper quadrant tenderness and a palpable spleen. There were no rashes or lymphadenopathy. A metal plate was visible below the lower right teeth, with generally good dentition. The right calf was tender to palpation. A healing bite wound on her index finger was clean, without swelling or tenderness. Laboratory evaluation revealed iron deficiency anemia, normal renal function, normal hepatic enzymes, and a normal leukocyte count and differential.An ultrasound indicated a right popliteal artery thrombus. Transesophageal echocardiogram revealed a 2\u2009cm\u2009\u00d7\u20090.7\u2009cm vegetation on the atrial side of the posterior mitral valve leaflet. Angiography demonstrated multiple splenic infarcts and bilateral renal infarcts.Streptobacillus moniliformis given the history of rat bites and Streptococcus viridans/HACEK given history of oral surgery with exposed metal plates. The initial antibiotic regimen was ceftriaxone 2\u2009g q12h and vancomycin 1.5\u2009g q12h. Blood cultures and the popliteal thrombus did not initially grow any organisms, so clindamycin was added for empiric anaerobic coverage, as well as for Streptobacillus moniliformis (given the history of rat bites from her pet rats) and Streptococcus viridans/HACEK . On hospital day 4, she underwent mitral valve repair using a 27\u2009mm band. The mitral valve Gram stain demonstrated 2+ Gram-negative rods, rare Gram-positive rods, and moderate white blood cells. Propionibacterium spp. was isolated from the mitral valve tissue on Columbia agar incubated anaerobically. Anaerobic and aerobic cultures of the valve tissue on all other broths and agars, including Chocolate, charcoal yeast extract, and serum-supplemented, remained negative at 14\u2009days. Hematoxylin and eosin stains showed a fibroinflammatory vegetation ).Right popliteal thrombectomy was performed. Blood and popliteal thrombus cultures remained negative. The patient was treated empirically for The patient underwent penicillin skin testing, performed by a trained allergy/immunology physician. The skin prick test was administered on the volar forearm with benzylpenicilloyl polylysine as the major determinant, penicillin G 10,000\u2009U/mL as the minor determinant, histamine 6\u2009mg/mL as the positive control, and sodium chloride 0.9% as the negative control. After a 15-min observation period and with negative skin prick results, intradermal testing was administered using the same materials except with a histamine concentration of 0.02\u2009mg/mL. A positive test result was defined as a wheal \u22653\u2009mm as compared with the negative control.Streptobacillus, this time with a six-week course of Penicillin-G and a two-week course of synergistic gentamicin. The repeat mitral valve Gram stain was negative for organisms, and valve fungal, aerobic, and anerobic cultures did not yield any growth.After testing excluded penicillin allergy, penicillin G 24\u2009mU daily (4\u2009mU every 4\u2009h) was started. 
She completed 6 weeks of that followed by 2 weeks of oral amoxicillin-clavulanate. During that period, she required brief readmissions for heart failure and dysrhythmias but remained afebrile without signs of infection. Two weeks later she was readmitted for heart failure and fever. One of her pet rats had given birth to a large litter and she reported new rat bite exposures. She was found to have a new 8\u2009mm anterior mitral valve vegetation with valve perforation. She underwent elective dilation and evacuation of the pregnancy to allow for definitive bioprosthetic mitral valve replacement. Blood cultures were persistently negative in the post-operative period, and she was treated empirically for Streptobacillus moniliformis in North America, or the spirochete Spirillum minus in Asia, following a bite, scratch, or contact with excrement [S. moniliformis-contaminated food [S. moniliformis colonizes the nasopharynx of 50\u2013100% of healthy wild, lab, and pet rats, and is also excreted in the urine [Rat bite fever is a systemic febrile illness caused by infection with the Gram-negative bacillus xcrement , 2. A thted food . Case reS. moniliformis is a pleomorphic , filamentous, Gram-negative, nonmotile, and non-acid-fast rod [fast rod . HoweverRat bite fever typically begins with a bite or other exposure, followed by abrupt onset of systemic illness, including intermittent relapsing fever, arthritis, and rash 3\u2009days to 3\u2009weeks later. A maculopapular, petechial, or purpuric rash develops in approximately 75% of those affected in the first symptomatic week , 2. OverStreptococcal spp. or other oral flora equally likely pathogens. Fourth, the diagnosis of the primary infectious agent in this case was further complicated by the positive Bartonella IgG titers. Bartonella IgG titers between 1:64 and 1:256 represent possible active or recent Bartonella infection; our patient\u2019s IgG titers were 1:256. IgM titers >\u20091:20 strongly suggest current infection; our patients IgM titers were negative. Furthermore, she had no characteristic cutaneous lesions or lymphadenopathy, and there was no Bartonella signal detected on the PCR. Taken together, the above essentially rule out Bartonella endocarditis. Another confounder was the identification of Propionibacterium spp. on the mitral valve specimen. Propionibacterium spp. are a very rare cause of infectious endocarditis, and almost always cause prosthetic valve endocarditis. Here, they were most likely a contaminant. Finally, she suffered septic emboli to the right popliteal artery, spleen, and kidneys - a rare complication of rat bite fever endocarditis [Our patient\u2019s presentation was notably atypical in multiple respects. First, the hyperemesis she experienced was likely wrongly attributed to pregnancy and contributed to delayed diagnosis. Second, she denied a history of rash or arthralgias. Third, eroded intra-oral hardware made fastidious S. moniliformis is difficult, requiring a high index of suspicion. It is fastidious, requiring microaerophilic conditions [Streptobacillus genus and not the species [Diagnosis of Streptobacillus sp. (genus level) and most closely related to Streptobacillus moniliformis (species level). Speciation of 16\u2009s rRNA gene sometimes can be difficult and erroneous. 
The patient was counseled about the risks associated with rats, especially pertaining to bites.In this case, blood and valve cultures were persistently negative, despite repeated anaerobic and aerobic subculturing on various agars and broths including Chocolate, Charcoal Yeast Extract, Columbia, and serum-supplemented media. The initial mitral valve specimen was collected surgically 4\u2009days after initiation of empiric antibiotic therapy, likely contributing to the difficulty in culturing the specimen. Broad range bacterial PCR and sequencing of a segment of the 16S rRNA gene matched to S. moniliformis. Recommended treatment of S. moniliformis endocarditis is dual therapy with high-dose penicillin G for 4 weeks in combination with streptomycin or gentamicin for 2\u2009weeks , 2.A literature review was performed by a professional medical librarian using the search strategy presented in the supplemental file. This revealed only two cases, but neither involved endocarditis , 7. One involved Streptobacillus moniliformis amnionitis, and the other described Aerococcus christensenii, Gemella spp., Sneathia spp., Parvimonas micra, and Streptobacillus moniliformis in a pregnant woman.Our report is limited by the usual features of a single case report, and by the fact that more and different samples were not available for duplicate and triplicate laboratory testing. Despite these limitations, it includes the key laboratory and management detail useful for providers who may encounter this in the future, and it appears to be a first reported case based on a thorough literature review described in the supplemental material.This case highlights the diagnostic and management challenges of an infrequent cause of culture-negative endocarditis that was further complicated by pregnancy, thromboembolic phenomena, and a patient\u2019s undaunted love of her pets."} +{"text": "This special issue is dedicated to Prof. Dr. rer. nat. Thomas Bley in honor of his 70th birthday. Professor Bley is a remarkable person and a commendable scientist. He has rendered outstanding services not only to bioprocess engineering in Germany, but also across national borders.Prof. Bley was born in Gro\u00dfenhain in 1951 and studied mathematics at the TU Dresden from 1971 to 1975, before he received his doctorate in mathematical biology from the Academy of Sciences of the GDR in 1981. He completed his habilitation in the field of biotechnology at the University of Leipzig in 1990 and was thereupon granted the teaching license in this scientific field in 1995. In 1996, Prof. Bley was appointed to the Chair of Biochemical Engineering at the TU Dresden, which he headed until his retirement in 2017. To this day, as senior professor, he remains closely associated with the institute and the chair and is held in high esteem by staff, doctoral candidates and students alike.During his teaching career, he led 250 graduate engineers and 45 doctoral students to successful graduation. The \u201cDiplomingenieur\u201d degree is very important to Professor Bley. Therefore, he successfully committed himself to maintaining this degree at the Faculty of Mechanical Engineering of the TU Dresden. Many of the engineers he educated work in leading positions today, in industry and science, nationally and internationally. Professor Bley is particularly proud that one of his first doctoral students and his preferred candidate, Prof.
Thomas Walther, was appointed as his successor to the Chair of Biochemical Engineering at TU Dresden in 2017.With more than 170 publications, which focus on the mathematical description of complex cell\u2010cell and cell\u2010reactor interactions, Professor Bley looks back on a very successful scientific career. In addition to his scientific work, he also provided important impetus nationwide through his membership in various expert committees. Since 2002, he has been an elected member of the Saxon Academy of Sciences and Humanities and the German Academy of Science and Engineering. In the years 2008\u20132013, he was an elected member of the DFG Review Board for Process Engineering and Technical Chemistry. In addition, he was very actively involved in the DECHEMA for many years, among other roles as chair of the Bioprocessing Working Group from 2009 to 2014, and from 2013 to 2016 as a member of the Board of the DECHEMA Section on Biotechnology. In 2018, Professor Bley was therefore awarded the DECHEMA Medal. The DECHEMA awards this medal for outstanding commitment to realizing its objectives and for outstanding achievements in the field of chemical engineering and biotechnology. With this award, the DECHEMA recognized his commitment to the bioprocessing community: \u201cThomas Bley has actively brought together industry and academia, young and experienced scientists and \u2010 in the course of Germany's reunification \u2010 experts from Eastern and Western Germany. In addition, he provided scientific impetus in numerous ways\u201d.Dear Thomas, at this point, I would like to take the opportunity to thank you on behalf of all the staff of the Chair of Bioprocess Engineering for the work that you have done! The work under your leadership was a valuable enrichment for me and the other colleagues in every respect, both professionally and personally. Due to the wide range of research topics at the chair and the fruitful working environment, there was always the opportunity to dedicate oneself to new scientific areas. The research groups, which you have initiated, including plant cell and algae technology, smart lab systems, and enzyme technology, are continuing to work sustainably and provide the ground for successful scientific careers in the future, for which your former colleagues and I are most grateful.You accompanied me on my scientific, but also personal, career path as a university professor, doctoral advisor and supervisor for many years. I would like to thank you for your trust in my scientific work and the many opportunities, as well as for the fact that you always offered diplomatic advice and continuously supported my personal scientific career.I congratulate you on your 70th birthday and wish you all the best, many wonderful hours in your garden and with your grandchildren and above all: health."}
For data collection, Semi-structured proforma and Brief RCOPE was used to see the extent to which individuals engage in positive and negative forms of religious coping.A total of 647 individuals (360 from Nigeria and 287 from India) participated in the survey. A total of 188 (65.5%) participants in India reported no change in their religious activities since they heard about COVID-19, while, 160 (44.4%) in Nigeria reported a decrease in religious activities. Positive religious coping in the Nigerian population was significantly higher than the Indian population. Similarly, negative religious coping was significantly higher (for most of the items in the brief RCOPE) in the Indian population than the Nigerian population.Significant percentages of people after the COVID-19 pandemic took religious coping steps to overcome their problems. During this pandemic, positive religious coping among the Indian and Nigerian communities is more prevalent than negative religious coping. There is a substantial cross-national difference between Indians and Nigerians in the religious coping modes. Religion is an integral part of human civilization. It has a substantial impact on society and human behavior. Across the globe, people follow various religious beliefs. Religion acts as a double-edged sword, as it unites people as well as divides people. Religion influences coping positively and negatively . The copOver the past several months, the COVID-19 pandemic has adversely affected the lives of people globally. During this pandemic, people experience anxiety, depression, and panic related to COVID-19, and there is a higher perceived mental healthcare need . During India is a country where people of different religious believes reside. Following religious rituals, cultural traditions, festivals, and religious ceremonies give a unique identity. Similarly, Nigeria is the most populated African country with cultural diversity. Both India and Nigeria struggle with the population explosion and scarcity of resources. These two countries fall in lower-middle-income countries as per the world bank .However, religion is often overlooked in researches involving cross-cultural psychology . Many peGoogle forms, in which people receiving the message were requested to complete the survey and then forward the link to their close contacts in various WhatsApp groups, Facebook, Email, and Twitter platforms. The survey started on 22 April 2020 and was closed on 28 May 2020. The participants were from two countries; India and Nigeria. The inclusion criteria were age between 18 and 60\u2009years, having completed at least 10\u2009years of formal education, and have internet access. The participation was completely voluntary. The participation was confirmed once the participants gave their consent for the same.It was an online survey. The sample was collected using a snowball sampling technique as the data were collected through It included the name (voluntary), country, age, gender, education, occupation, marital status, religion, domicile, family type, either living alone or with family, and state/province of the participants\u2019 current residence.The investigators developed a questionnaire about one\u2019s belief in the religion (believer or non-believer), about their performance of religious rituals (frequency), charity habits, and any change in these habits during the COVID-19 pandemic.It is a 14-item scale to see the extent to which individuals engage in positive and negative forms of religious coping. 
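As an illustration of how such a 14-item instrument can be scored, the sketch below assumes the standard Brief RCOPE layout (seven positive-coping items followed by seven negative-coping items, each rated on a 1-4 scale); the example responses are hypothetical and this is not the survey code used in the study.

```python
# Illustrative scoring of Brief RCOPE responses (hypothetical data).
# Assumption: items 1-7 form the positive religious coping (PRC) subscale and
# items 8-14 the negative religious coping (NRC) subscale, each rated 1-4.

def score_brief_rcope(responses):
    """Return (positive_score, negative_score) for one respondent's 14 answers."""
    if len(responses) != 14 or not all(1 <= r <= 4 for r in responses):
        raise ValueError("Expected 14 item responses, each between 1 and 4")
    positive = sum(responses[:7])   # possible range 7-28
    negative = sum(responses[7:])   # possible range 7-28
    return positive, negative

# Example respondent (made-up answers):
prc, nrc = score_brief_rcope([4, 3, 4, 3, 4, 2, 3, 1, 1, 2, 1, 1, 2, 1])
print(f"Positive religious coping: {prc}, negative religious coping: {nrc}")
```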
The Positive Religious Coping subscale assesses efforts to maintain a positive connection with God, collaborate with God, find positive meaning in the stressor, and let go of negative emotions. The Negative Religious Coping subscale assesses perceptions of a disrupted or conflictual relationship with God and one\u2019s faith community, as well as a loss of faith in God\u2019s power and belief that the devil caused the stressor . The resB).The study proposal was reviewed and approved by the Institutional Ethics Committee of the study institute of India and Nigeria were consented and complete in all respects. The respondents\u2019 age ranges from 18 to 60\u2009years, whereas the Indian participants tend to be a little older with a higher mean age than Nigeria (32.30 vs. 28.30). Most of the participants, 399 (61.7%) from both countries, were male, which was higher among the Nigerian participants 225 (62.5%) compared to 174 (60.6%) in India .Although employability significantly varies slightly between the Indian 183 (63.8%) and Nigerian 183 (50.8%) sample, most of the participants were employed.Religious varies between both countries; the Hindu religion 218 (33.7%) was the most commonly reported practiced religion among Indian participants, whereas Christianity 298 (82.8%) was most frequent among the Nigerian study population. Both countries share a similar proportion in the practice of Islam religion.More than half of the participants, 428 (66.2%), domiciled in the urban area and had a nuclear family system 501 (77.4%). Nigeria sample 317 (88.1%) exhibited a slightly higher proportion of nuclear family system than 64.1% reported in 184 India, whereas the percentage of urban dwellers was higher among Indian participants than Nigeria (70.0% vs. 63.1%). Most of the respondents, 598 (92.4%), believed in religion. However, there was a significant difference in religious belief between the two countries \u2013 Indian-251 (87.5%) versus Nigeria-347 (96.4%). On current living status, 449 (69.4%) of the participants lived with their families, with similar proportions observed in both countries.Assessing the performance of religious rites, 213 (74.2%) in India and 251 (69.7%) in Nigeria offer religious rituals; of these, 96 (33.4%) and 196 (54.4%) do it always, respectively. Similarly, 188 (65.5%) participants in India reported no change in their religious activities since they heard about COVID-19, while, 160 (44.4) in Nigeria reported decreased.n\u2009=\u2009119, 41.5%) in India reported increased charity pattern since the heard of COVID-19 outbreak compared with 112 (31.4%) observed among Nigeria participants. Conversely, a notably decreased charity pattern during the outbreak was higher among participants in Nigeria than India . In looking for a more robust connection with God, 211 (73.5%) participants in India and 331 (91.1%) in Nigeria sought a stronger with God while a similar proportion of these participants from each country 229 (79.8%) and 347 (96.4%) sought God\u2019s love and care, respectively. There was a notable difference between both countries\u2019 responses when asked if help was sought from God for anger management. 
In India, 190 (66.2%) did seek God to let go of anger compared to 291 (80.8%) among participants in Nigeria.Significantly, participants from both countries offer charity, 222 (77.4%) in India and 304 (84.4%) in Nigeria, whereas two-fifth , , and in Nigeria, tried to put plans into action together with God, tried to see how God might strengthen them and asked for forgiveness during the outbreak respectively. However, these proportions were lower , , and among Indian participants. In both countries\u2019 response to focusing on religion to stop worrying about problems, 272 (75.56%) in Nigeria focused on religion to stop worrying about a problem; In contrast, the proportion was lower among India sample 135 (47.04%).The majority of respondents, and 77 (26.8%) wondered whether God had abandoned them and felt punished by God for lack of devotion compared to 68 (18.9%) and 89 (24.7%) in Nigeria respectively. Besides, 77 (23.7%) participants in India and 47 (13.1%) in Nigeria wondered what they had done to be punished by God. In questioning God\u2019s Love and power, a higher proportion was significantly observed in India; 77 (26.8%) and 64 (22.3%) reported to have questioned God\u2019s love and power compared to 45 (12.5%) and 17 (4.7%) in Nigeria, respectively. However, a higher proportion of the participants in Nigeria, 98 (27.2%) believed the devil made the outbreak happen while 50 (17.4%) in India .The impact of COVID-19 on society is enormous. COVID-19 pandemic resulted in significant mortality and morbidity worldwide. The devastating effect of the COVID-19 pandemic affected the general well-being, including mental health. During this challenging situation, people have difficulty coping with anxiety, depression, fear, stress, and losses . ConfronPrevious studies showed that natural disasters have a long-lasting effect on religiosity spread across generations . WhetherSmall sample size and cross-sectional study designs are significant limitations of the study, limiting the generalizability. As snow-ball sampling was done, there is a possibility of referral bias. Another limitation being, use of questionnaires in the English language in an online platform. A face-to-face interview technique using a regional language questionnaire may overcome the limitations by addressing diverse groups of people.During the COVID-19 pandemic, a significant number of people adopt religious coping measures to combat their difficulties. Positive religious coping is more common than negative religious coping among the Indian and Nigerian populations during this pandemic. There is significant cross-national variation in the religious coping styles between Indians and Nigerians."} +{"text": "Despite the risk for poor outcomes and gaps in care in the transfer from pediatric to adult care, most pediatric rheumatology centers lack formal transition pathways. As a first step in designing a pathway, we evaluated preparation for transition in a single-center cohort of adolescents and young adults (AYA) with rheumatologic conditions using the ADolescent Assessment of Preparation for Transition (ADAPT) survey.AYA most frequently endorsed receiving counseling on taking charge of their health and remembering to take medications. Less than half reported receiving specific counseling about transferring to an adult provider.p\u2009=\u20090.0002), prescription medication counseling , and transfer planning . AYA with a diagnosis of MCTD, Sj\u00f6gren\u2019s or SLE had higher self-management scores than those with other diagnoses . 
Non-white youth indicated receiving more thorough medication counseling than white youth .AYA with lower education attainment compared with those who had attended some college or higher had lower scores in self-management . AYA with longer duration of seeing their physician had higher transition preparation scores (p\u2009=\u20090.021).When adjusting for age, educational attainment remained an independent predictor of transfer planning (Few AYA endorsed receiving comprehensive transition counseling, including discussion of transfer planning. Those who were younger and with lower levels of education had lower preparation scores. A long-term relationship with providers was associated with higher scores. Further research, including longitudinal assessment of transition preparation, is needed to evaluate effective processes to assist vulnerable populations. The transfer from pediatric to adult care is a high-risk period for poor outcomes , 2. NatiWhile there is increasing recognition of the need to develop and evaluate transition pathways in rheumatology, few studies have directly evaluated AYAs\u2019 preparation for transition in pediatric rheumatology , 10. MosAYA ages 16 and older with a confirmed rheumatologic diagnosis by ICD9 or ICD10 code, at least 3 prior visits at our rheumatology practice, and email addresses on file were identified by an electronic medical records search. Eligible AYA were emailed a version of the ADolescent Assessment of Preparation for Transition (ADAPT) survey, a validated tool developed to assess patient perception of preparation for transition in adolescents and young adults with chronic diseases . At the 337 eligible AYA were identified and sent a survey invitation via email. 78 patients responded (response rate of 23%), of whom 77 had a visit within the past year and were eligible to complete the assessment tool. Mean age was 18.9 (range 16\u201323), 83% were female, and 86% were white. Full demographic and disease characteristics of respondents are reported in Table\u00a0Respondents most frequently endorsed receiving counseling on taking charge of their health and remembering to take medications, but fewer than half reported receiving specific counseling about transfer to an adult provider Table\u00a0.Table 2p\u2009<\u20090.001), prescription medication counseling , and transfer planning . AYA with longer duration of seeing their rheumatologist had higher transition preparation scores (p\u2009=\u20090.021). When controlling for age, educational attainment remained an independent predictor of higher transfer planning scores (p\u2009=\u20090.037), but not of other measures.AYA ages 16\u201318 had significantly lower scores in all domains compared to those 19\u2009years and older. Those with lower education attainment compared with those who had attended some college or higher had lower scores in self-management . Non-white respondents indicated they had received more thorough medication counseling than white respondents .AYA with a diagnosis of mixed connective tissue disease, Sj\u00f6gren\u2019s syndrome or systemic lupus erythematosus had higher self-management scores than those with other diagnoses reported attending 1 (23%), 2(35%), 3 (17%) or 4 (21%) rheumatology visit with their rheumatologist in the year prior to survey completion; only 4% had 5 or more visits. 
Those with mixed connective tissue disease, Sj\u00f6gren\u2019s syndrome or systemic lupus erythematosus (n\u2009=\u200915) reported 2 (27%), 3 (53%) or 4 (13%) visits over the same period.Survey respondents with diagnosis of JIA (Despite growing awareness of the importance of the pediatric to adult care transition for AYA with chronic illnesses, few youth in our cohort endorsed receiving comprehensive transition counseling, including discussion of transfer planning. This is similar to national data, with the National Survey of Children\u2019s Health reporting that just 18% of youth ages 12\u201317 received comprehensive services for care transition, defined as meeting with their pediatrician alone, addressing skill-building for transition, and receiving counseling about transfer to adult care if needed .We observed demographic and disease-related differences in transition preparation in our cohort. Non-white patients and those with lupus and related conditions had significantly higher domain scores, possibly reflecting greater provider awareness of these risk factors for poor transfer outcomes, or more frequent contact with their rheumatologist.Consistent with prior work showing increased transition readiness with older age \u201320, we fWe also found that AYA with less formal education had lower transfer planning scores even when adjusting for age. This disparity may reflect differences in self-management skills among AYA with less formal education . In our Our study had several limitations, including a low response rate, though age, race, and diagnosis of respondents were similar to non-respondents. As this was a single-center evaluation, our results may not be generalizable to other centers with different patient populations, referral patterns, and baseline transition processes.Following closure of the survey, our clinic began distributing a newly created written transition policy to all patients 14 and older at follow-up visits. The importance of a written transition policy has been highlighted in the Six Core Elements of Transition developed by collaborative transition workgroup of the American Academy of Pediatrics, American Academy of Family Physicians, and American College of Physicians, and promoted by the American College of Rheumatology\u2019s Transition Toolkit , 25. A wFurther research including longitudinal assessment of transition preparation is needed to evaluate effective processes to improve transition outcomes. Assessment of specific transition counseling gaps using tools such as the ADAPT, and evaluation of preparation in those with different demographic characteristics and disease features may help to effectively customize transition initiatives for vulnerable patient populations."} +{"text": "SNAIL1, ZEB1, ZEB2, TWIST1) may influence the development of HBV-related HCC.\u00a0We included 421 cases of HBsAg-positive patients with HCC, 1371 cases of HBsAg-positive subjects without HCC [patients with chronic hepatitis B (CHB) or liver\u00a0cirrhosis (LC)] and 618 cases of healthy controls in the case-control study. Genotype, allele, and haplotype associations in the major EMT regulatory genes were tested. Environment-gene and gene-gene interactions were analysed using the non-parametric model-free multifactor dimensionality reduction (MDR) method. The SNAIL1rs4647958T>C was associated with a significantly increased risk of both HCC and CHB+LC . Carriers of the TWIST1rs2285681G>C (genotypes CT+CC) had an increased risk of HCC . 
The ZEB2rs3806475T>C was associated with significantly increased risk of both HCC (Precessive =0.001) and CHB+LC (Precessive<0.001). The CG haplotype of the rs4647958/rs1543442 haplotype block was associated with significant differences between healthy subjects and HCC patients (P=0.0347). Meanwhile, the CT haplotype of the rs2285681/rs2285682 haplotype block was associated with significant differences between CHB+LC and HCC patients (P=0.0123). In MDR analysis, the combination of TWIST1rs2285681, ZEB2rs3806475, SNAIL1rs4647958 exhibited the most significant association with CHB+LC and Health control in the three-locus model. Our results suggest significant single-gene associations and environment-gene/gene-gene interactions of EMT-related genes with HBV-related HCC.Epithelial-mesenchymal transition (EMT) plays an important role in the development of hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC). We hypothesized that germline variants in the major EMT regulatory genes ( SNAIL1 exon variant rs4647958T>C, the ZEB2 promoter exon variant rs3806475T>C and the TWIST1 promoter exon variant rs2285681G>C are associated with increased risk of HBV-related HCC.The functional The CG haplotype of the rs4647958/rs1543442 haplotype block was associated with significant differences between healthy control subjects and HCC patients. Additionally, the CT haplotype of the rs2285681/rs2285682 haplotype block was associated with significant differences between CHB+LC and HCC patients.TWIST1 rs2285681 and SNAIL1 rs4647958 showed a significant environment-gene interaction for the development of HCC.Hepatocellular carcinoma (HCC), a common malignant tumour of the digestive system, is the second leading cause of cancer-related death in China. HCC is characterized by high malignant potential, concealed pathogenesis, rapid progress, poor prognosis and a high mortality rate. It is typically diagnosed during the middle and late disease stages, when surgery is no longer a viable option . TherefoIn recent years, the significance of epithelial-mesenchymal transition (EMT) in tumours has been extensively studied. There are many complex factors that may influence the process of tumour metastasis; however, the specific underlying mechanisms are not yet clear. A great many studies have revealed that EMT plays an important role in tumour invasion and metastasis. To date, three well-established transcriptional regulatory groups have been identified as important factors in regulating the expression of EMT molecular markers . StudiesSNAIL and TWIST are the major regulators of EMT, which subsequently induces HCC in HBV-related HCC in the Han population and their relevance as potential biomarkers for HBV and HCC. This approach may help develop new therapy or individualized treatments for HBV-related HCC and chronic HBV infection.A common analysis method for genotype data is to perform a single gene locus or haplotype analysis on a single gene, that is, to detect the association between each locus or gene and disease separately. However, when we want to explain the genetic changes in complex diseases, the usefulness of this analysis is limited . BecauseCase-control studies were conducted to investigate HBV-related HCC and chronic HBV infection in northern China. 
To evaluate HBV-associated mutations and their correlation with HCC risk, 421 HBsAg-positive patients with HCC, 1371 HBsAg-positive patients without HCC [691 cases of chronic hepatitis B (CHB) and 680 cases of liver\u00a0cirrhosis (LC)] and 618 controls without HBV infection were enrolled. All subjects are independent of each other and are ethnically Han Chinese. All participants were recruited between January 2010 and March 2014 from the First, Second and Fourth Hospitals of Hebei Medical University and the Fifth Hospital of Shijiazhuang City. Each subject provided demographic characteristics as well as a one-time 2 mL blood sample. All subjects signed a written informed consent forms to study initiation. This study was approved by the institutional review board of Hebei Medical University .Healthy individuals were defined as (i) HBsAg, antibodies against HBc (anti-HBc) and other HBV biomarkers were free; (ii)\u00a0blood routine and biochemical indexes were normal; (iii) without a history of hepatitis B vaccination; (iv) without endocrine, cardiovascular, renal or other liver diseases. CHB patients were defined as (i) serum HBsAg was positive; (ii) HBeAg was positive; (iii) anti-HBe was negative; (iv)serum HBV-DNA >2000 IU/mL lasting for >6 months; (v) the value of alanine aminotransferase (ALT) was persistent or repeated rising; (vi) liver histology showed hepatitis. LC patients were defined by clinical manifestations of portal hypertension and imaging results of ultrasonography, computed tomography, and magnetic resonance imaging , 15. HBVThe personal information of the research subjects was obtained through questionnaires, which included the subjects\u2019 gender, age, smoking status, and drinking status. The definition of smoking and drinking here is: an individual who smokes every day and has smoked for more than 1 year is defined as a smoker, and an individual who drinks once or more a week for more than 6 months is defined as a drinker. We collected about 2 mL of anticoagulated venous blood by ethylenediamine tetra-acetic acid (EDTA) from each subject. Each subject signed an informed consent form. The study protocol adhered to the ethical guidelines set forth by the 1975 Declaration of Helsinki and was approved by the Hebei Medical University ethics committee.http://www.ncbi.nlm.nih.gov/), we selected 6 EMT gene loci located in the promoter, regulator coding region and 3\u2019-UTR. All putative functional single-nucleotide polymorphisms (SNPs) of the genes encoding the aforementioned EMT regulators with a minor allele frequency greater than 5% in the Chinese population were selected. The location information in gene region for the selected SNPs was shown in \u00ae Assay Design 4.0 Software . SNPs were genotyped using TaqMan-based PCR. Basic information for the selected SNPs was shown in According to the dbSNP database (https://sourceforge.net/projects/mdr/) were used to perform statistical analyses. Categorical variables were described using frequencies, while continuous data with abnormal distribution were described using the median and interquartile range. The comparisons of continuous data sets were done using Kruskal-Wallis H test and evaluation of differences in categorical variables between groups was done using Pearson chi-square test. The bonfferny method was used for pairwise comparisons between groups when there was a significant difference in the overall distribution of each factor in the three groups. 
Calculation of odds ratios (OR) and 95% confidence intervals (95% CI) was done using unconditional logistic regression. Analysis of correlations between genetic variants and HCC stages was done by Spearman\u2019s rank correlation. Linkage disequilibrium (LD) and haplotype block analyses were used to investigate the LD of EMT SNPs using Haploview 4.2 software. Multifactor dimensionality reduction (MDR) method as a nonparametric alternative was used to analyse the environment-gene and gene-gene interactions. The MDR analyses were performed by MDR 3.0.2 software. This extensive search for genetic interactions was done for HCC. Up to four loci interactions were tested using 10-fold crossvalidation in a search considering all possible SNP combinations. The SNP combination with maximum cross-validation consistency (CVC) was considered to be the best model , Haploview 4.2 software (Copyright (c) 2003-2006 Broad Institute of MIT and Harvard, United States) and MDR 3.0.2 software (P<0.05). Smoking and drinking were significantly lower in healthy patients versus in HBsAg-positive patients with and without HCC. The proportion of males was higher in the HBsAg-positive patients versus the healthy subjects, while patients older than 45 years old were more frequent in the HBsAg-positive patients with HCC. We adjusted for these factors in the multivariate logistic regression models.Baseline characteristics of the 1371 HbsAg-positive patients without HCC (CHB+LC), 421 HBsAg-positive patients with HCC and 618 healthy control subjects were shown in SNAIL1 exon variant rs4647958T>C was significantly associated with an increased risk of both HCC (Pdominant =0.020) and CHB+LC (Pdominant =0.003). The ZEB2 promoter variant rs3806475T>C was significantly associated with an increased risk of both HCC (Precessive =0.001) and CHB+LC (Precessive<0.001). Further, the TWIST1 promoter variant rs2285681G>C was significantly associated with an increased risk of HCC (Pdominant =0.016). However, no significant association was observed between any of the other loci and the risk of HbsAg-positive HBV with or without HCC. Therefore, we further analysed the SNAIL1 rs4647958T>C, ZEB2 rs3806475T>C and TWIST1 rs2285681G>C SNPs. As shown in vs. TT: OR=1.559; 95% confidence interval [CI], 1.073-2.264; P = 0.020) and CHB+LC under the dominant model. Carriers of the TWIST1 rs2285681G>C genotypes (CT+CC) had an increased risk of HCC under the dominant model.The genotype distributions of the six EMT regulators and their associations with HCC and CHB+LC are presented in OR, 2.053; 95% CI,\u00a01.372-3.072) versus those who did smoke under the dominant model versus those who did drink under the recessive model compared with those who did drink under the recessive model . Additional correlations were identified in the following patient groups: age less than 45 years , female and non-drinking .EMT has been widely studied in the metastatic process of epithelial malignancies . We therP=0.0347). Meanwhile, the CT haplotype of the rs2285681/rs2285682 haplotype block is associated with significant differences between CHB+LC and HCC patients (P=0.0123). However, no significant correlations were identified between other observed SNPs.Haplotype block LD mapping demonstrated that the rs2285681 and rs2285682 SNPs are in tight LD in a 0-kb sequence, while the rs4647958 and rs1543442 SNPs are in tight LD in a 4-kb sequence . 
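Before the MDR results reported next, the core of the MDR idea can be summarised in code: every multi-locus genotype combination is labelled high- or low-risk according to whether its case/control ratio exceeds the overall ratio, and the resulting one-dimensional classifier is scored by balanced accuracy (the MDR software additionally uses 10-fold cross-validation and cross-validation consistency). The sketch below uses made-up data and is not a reimplementation of MDR 3.0.2.

```python
# Minimal MDR-style reduction for one candidate pair of SNPs (toy data).
from collections import Counter

# Each record: (genotype_snp1, genotype_snp2, case) with case=1 for HCC, 0 for control.
toy_data = [
    ("CT", "TT", 1), ("CC", "TC", 1), ("CT", "TC", 1), ("TT", "TT", 0),
    ("TT", "TC", 0), ("CT", "TT", 0), ("CC", "CC", 1), ("TT", "TT", 0),
    ("CT", "TC", 1), ("TT", "CC", 0), ("CC", "TC", 1), ("TT", "TC", 0),
]

cases, controls = Counter(), Counter()
for g1, g2, case in toy_data:
    (cases if case else controls)[(g1, g2)] += 1

overall_ratio = sum(cases.values()) / sum(controls.values())

# Label a genotype combination high-risk if its case:control ratio exceeds the overall ratio.
high_risk = {
    combo
    for combo in set(cases) | set(controls)
    if cases[combo] / max(controls[combo], 1e-9) > overall_ratio
}

# Evaluate the reduced one-dimensional classifier by balanced accuracy.
tp = sum(cases[c] for c in high_risk)
fn = sum(cases.values()) - tp
tn = sum(v for c, v in controls.items() if c not in high_risk)
fp = sum(controls.values()) - tn
balanced_accuracy = 0.5 * (tp / (tp + fn) + tn / (tn + fp))

print(f"High-risk combinations: {sorted(high_risk)}")
print(f"Balanced accuracy on this toy sample: {balanced_accuracy:.2f}")
```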
For HCC and health subjects as comparative groups, gender in one-locus models was the best, while the balanced accuracy (BA) for testing the dataset was 60.55% and the CVC was 10/10. For HCC and CHB+LC as comparative groups, the combination drinking, smoking in the two-locus model was the best, while the BA was 57.48% and the CVC was 9/10. For CHB+LC and health subjects as comparative groups, the combination We hypothesized that EMT genes play an important role in HCC and chronic HBV infection, and that environment-gene and gene-gene interactions are important. We found significant genetic associations for single EMT genes with HCC and chronic HBV infection, as well as environment-gene and gene-gene interactions. The MDR results indicated that interactions of environment-gene and gene-gene contribute significantly to HCC and chronic HBV infection, even when individual EMT genes do not.SNAIL1 rs4647958T>C, ZEB2 rs3806475T>C and TWIST1 rs2285681G>C SNPs are associated with increased susceptibility to both HCC and chronic HBV infection. In addition, interactions among potentially related polymorphic sites were associated with the development of HCC through the MDR method. MDR is a suitable method to analyse environment-gene and gene-gene interactions by reducing multi-locus genotypes into high-risk and low-risk groups in case-control studies was the best for BA (56.99%) and the CVC was 9/10.For HCC and health subjects as comparative groups, the one-locus model was found to be optimal for the prediction of HCC in terms of BA (60.55%). However, the BA values between the two- and three-locus combination models in HCC didn\u2019t have any meaningful difference. For HCC and CHB+LC as comparative groups, the two-locus model was found to be the best in terms of BA (57.48%). Similarly, the BA values between the one- and three-locus combination models didn\u2019t have any meaningful difference. For CHB+LC and health subjects as comparative groups, the four-locus combination model are associated with the development of a more aggressive form of HCC in non-smokers, ZEB2 genotypes (rs3806475) are associated with increased risk of HCC development in non-drinkers, and TWIST1 genotypes (rs3806475) are associated with increased risk of CHB and LC. Meanwhile, the SNAIL1 SNPs (rs4647958) are correlated with HCC stages in smokers, though not significantly. SNAIL1 is an important factor involved in inducing and promoting EMT. SNAIL1 is also involved in the pathogenesis of hepatitis B virus mutations in HCC patients. Our findings are remarkably consistent with previously published studies. Chen et\u00a0al. is functional and contributes to increased risk of HCC and chronic HBV infection.The stratified analysis showed that n et\u00a0al. found tho et\u00a0al. used immm et\u00a0al. found thg et\u00a0al. demonstrogenesis . These pTWIST1 protein can regulate the expression of many specific genes and participates in many different biological processes required for normal growth and development sequence in the promoter of the E-cadherin gene, causing epithelial cells to lose their epithelial-like characteristics and transform into mesenchymal cells, thus leading to EMT can be found in the article/The studies involving human participants were reviewed and approved by Ethics Committee of Hebei Medical University. Written informed consent to participate in this study was provided by the participants\u2019 legal guardian/next of kin. 
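The MDR procedure reported above labels each multi-locus genotype cell as high- or low-risk by comparing its case:control ratio with the overall ratio, then scores the resulting binary predictor by balanced accuracy under 10-fold cross-validation; the model selected in most folds defines the cross-validation consistency (CVC). The sketch below is a heavily simplified illustration of that principle on simulated two-locus data, not a reimplementation of the MDR 3.0.2 software.

```python
# Simplified MDR-style classification of two-locus genotype combinations into high-/low-risk
# cells, evaluated by balanced accuracy with 10-fold cross-validation. Simulated data.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
n = 600
snp1 = rng.integers(0, 3, n)          # genotypes coded 0/1/2
snp2 = rng.integers(0, 3, n)
y = rng.binomial(1, 0.4, n)           # 1 = case, 0 = control

def mdr_predict(train_idx, test_idx):
    """Label a (snp1, snp2) cell high-risk if its case:control ratio in the training fold
    exceeds the overall ratio, then predict case status on the test fold."""
    overall_ratio = y[train_idx].sum() / max((1 - y[train_idx]).sum(), 1)
    high_risk = set()
    for g1 in range(3):
        for g2 in range(3):
            cell = train_idx[(snp1[train_idx] == g1) & (snp2[train_idx] == g2)]
            cases, controls = y[cell].sum(), (1 - y[cell]).sum()
            if controls == 0 or cases / controls > overall_ratio:
                high_risk.add((g1, g2))
    return np.array([(g1, g2) in high_risk
                     for g1, g2 in zip(snp1[test_idx], snp2[test_idx])], dtype=int)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
bas = [balanced_accuracy_score(y[test_idx], mdr_predict(train_idx, test_idx))
       for train_idx, test_idx in cv.split(np.zeros(n), y)]
print(f"mean testing balanced accuracy over 10 folds: {np.mean(bas):.3f}")
```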
Written informed consent was obtained from the individual(s), and minor(s)\u2019 legal guardian/next of kin, for the publication of any potentially identifiable images or data included in this article.Conceptualization: W-XL, LY, XG, and D-WL. Data curation: L-NY, NM, X-LZ, and L-MT. Data analysis: XG, W-XL, LY, and D-WL. Funding acquisition: XG and D-WL. Investigation: LY, H-MY, L-NY, NM, X-LZ, L-MT, and XG. Methodology: XG, D-WL, and L-MT. Project administration: D-WL, XG, and LY. Resources: XG and D-WL. Supervision: XG, D-WL, W-XL, and LY. Writing the original draft: XG and D-WL. Reviewing and editing: W-XL, LY, H-MY, L-NY, NM, X-LZ, L-MT, XG, and D-WL. All authors contributed to the article and approved the submitted version.This study received financial support from Department of Education of Hebei Province , Science and Technology Bureau of Hebei Province (grant number 17272407), National Natural Science Foundation of China (grant number 81601876), and National Natural Science Foundation of Hebei Province (grant number H2019206528).The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Objectives: The assessment of health-related quality of life (HRQoL) is becoming increasingly important in companion animals. This study describes a systematic review and development of a proposed conceptual framework to assess HRQoL in cats with osteoarthritis (OA).Methods: The conceptual framework was developed according to published guidelines. A comprehensive search of the CAB Direct, Scopus, PubMed, and Web of Science databases was carried out for publications in English from inception to November 12, 2019. Search words used were \u201ccat\u201d, \u201cfeline\u201d, \u201cchronic pain\u201d, \u201cpain\u201d, and \u201cquality of life\u201d. Publications were selected if they were full-text and peer-reviewed, based on primary data, and identified or measured behavioral symptoms of chronic musculoskeletal pain in cats. A systematic review was conducted according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A data extraction form was developed from categories identified in the literature review and piloted on a small number of studies to ascertain the appropriateness for relevant data extraction. Categories were then finalized, and key domains were identified. The domains were then synthesized to develop a conceptual framework.Results: A total of 454 studies were identified, of which 14 met the eligibility criteria and were included in the meta-synthesis. All 14 were assessed to be of good quality. Seven domains related to HRQoL in cats with OA were thematically identified from the data: mobility, physical appearance, energy and vitality, mood, pain expression, sociability, and physical and mental wellbeing. The three main HRQoL domains were pain expression, mobility, and physical and mental wellbeing, which impacted all the others. Pain and mobility impacted all six other domains, with increased pain and decreased mobility negatively impacting physical appearance, energy and vitality, mood, sociability, and physical and mental wellbeing.Conclusions and Relevance: This is the first study to develop an evidence-based conceptual framework for the assessment of HRQoL in cats with OA. The proposed conceptual framework suggests that effective management of chronic pain in cats may improve their overall HRQoL. 
The feline medicalized population is an aging population, primarily due to increasing life expectancy in companion animals as a benefit of improved animal healthcare , 2. One OA is associated with chronic pain, which, in turn, can negatively impact health-related quality of life (HRQoL) . HRQoL iThe assessment of HRQoL is becoming increasingly important in companion animals . HRQoL cIn general, the assessment of chronic pain in cats with OA is not straightforward as lameness is uncommon . SymptomTherefore, there is need for research that identifies the factors that contribute to HRQoL in cats with OA. This will enable evaluations on how OA impacts HRQoL and, ultimately, the effective assessment of treatment interventions for cats with this condition. The aim of this study was to propose a conceptual framework for a tool that can assess HRQoL in cats with OA.The conceptual framework was developed according to published guidelines , 12. DevMapping of data sources was accomplished first by conducting a systematic review to identify the relevant literature. The review was conducted in accordance with the principles set out for reporting systematic reviews and meta-analyses, Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines .Studies were eligible for inclusion in the meta-analysis if the following criteria were met: domestic cat was the main population; full-text, peer-reviewed journal article; articles based on primary data; measured/identified behavioral symptoms of chronic musculoskeletal pain; and articles were written in English. Articles were excluded if domestic cat was not the main population; article was not a full-text peer-reviewed journal article ; articles based on secondary data, e.g., reviews; cats were anesthetized; medical/physiological investigations of chronic MSK pain; article was not concerned with chronic MSK pain ; focused on treatment/drug efficacy on chronic MSK pain; or was not written in English.via the following electronic databases: CAB Direct, Scopus, PubMed, and Web of Science. The reference lists of all included studies, any relevant systematic reviews, and key background papers were scrutinized to identify any additional relevant studies. The keywords used were \u201ccat\u201d, \u201cfeline\u201d, \u201cchronic pain\u201d, \u201cpain\u201d, and \u201cquality of life\u201d. These keywords were used in various combinations with Boolean operators \u201cOR\u201d and \u201cAND\u201d used to combine search terms.A comprehensive search strategy was developed to ensure that all relevant sources of data were identified. Searches were undertaken from inception to November 12, 2019 (date of the search) All searches were exported into EndNote Web bibliography software. Duplicates of articles were removed electronically and manually. Study selection was carried out in two stages. The first stage included title and abstract screening. To ensure that the relevant studies were identified and selected, two researchers independently screened titles and abstracts for eligibility based on the inclusion/exclusion criteria (DB and TG). Full-text articles were obtained and independently screened against the eligibility criteria (DB and TG). Manual searching of the reference lists was undertaken to identify any additional studies. Any disagreements in study selection between the researchers were resolved through discussion and consultation with two members of the project team (GY and FF). 
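Because the same keyword combinations have to be applied consistently in CAB Direct, Scopus, PubMed, and Web of Science, a small helper can assemble the Boolean search strings from the term groups listed above. The snippet below is purely illustrative; field tags and syntax would still need to be adapted to each database's interface.

```python
# Illustrative helper for assembling the Boolean search strings described above.
population_terms = ['"cat"', '"feline"']
pain_terms = ['"chronic pain"', '"pain"']
outcome_terms = ['"quality of life"']

def or_block(terms):
    """Join synonyms with OR and wrap them in parentheses."""
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join([or_block(population_terms),
                      or_block(pain_terms),
                      or_block(outcome_terms)])
print(query)   # ("cat" OR "feline") AND ("chronic pain" OR "pain") AND ("quality of life")
```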
Stages in the study selection are illustrated in A data extraction form was developed from categories identified during the review of the literature , 14, 15.The methodological quality of the identified studies was critically appraised using the criteria listed in the ARRIVE guideline for reporting animal research and the In this phase, a thematic analysis framework, as described by Braun and Clarke , was useEthical approval was obtained from the University Faculty Health, Psychology and Social Care Research Ethics and Governance Committee (UK): Reference Number: 10368.A total of 776 studies were identified. After removing duplicates, and applying the inclusion criteria and abstract screening, 14 studies met the inclusion criteria and wereOf the 14 studies included, six studies were from the USA , 21\u201325, The scores of the methodological quality assessment are presented . Eight sFrom the data extracted, data were categorized, and key domains were confirmed . Seven dKey domains were synthesized to develop the conceptual framework . The thrInitial validation of the conceptual framework was undertaken. The conceptual framework was shared with three key stakeholders: a veterinarian practicing in companion animals (UK based), an owner of an aged cat (15 years) (UK based), and a representative from a pharma company specializing in animal pharma (USA based), to obtain their views and feedback. They confirmed that all domains were relevant and their relationship was appropriate. No changes were requested. All participants confirmed the conceptual framework accurately and wholly reflected the domains that impact HRQoL in cats with OA. A subsequent qualitative concept elicitation study with key informants including veterinarians and cat owners will help us confirm that the HRQoL domains and areas of overlap in our model reflect real-world experiences. Thus, our framework may be considered preliminary until we can further validate it with qualitative evidence.To the authors' knowledge, this is the first study to develop an evidence-based conceptual framework for the assessment of HRQoL in cats with OA. To achieve this, a systematic review was undertaken to identify the relevant literature to develop the conceptual framework. Fourteen articles were identified, all of which were assessed as good quality. From the data extracted, seven domains were identified in relation to HRQoL in cats with OA. These domains were mobility, physical appearance, energy and vitality, mood, pain expression, sociability, and physical and mental wellbeing. Of these, three domains, pain expression, mobility, and physical and mental wellbeing, were identified as being more important to HRQoL in cats with OA. Feline physical and mental wellbeing was impacted by five of the six domains. This preliminary conceptual framework was found to be a valid representation of HRQoL in cats with OA. A similar conceptual framework development approach has recently been described in human and animal health , 31.There is evidence that the relationship between pain and QoL is complex . This stThere is a lack of research that has investigated HRQoL with OA in cats. From the systematic review, only 14 studies were identified. Of these, only one study looked aThe remaining studies focused on pain and DJD or OA , 25, 29,There were some limitations to this study. Only articles published in English were included; therefore, there is the possibility that some further data were available to inform the conceptual framework. 
However, preliminary review of the conceptual framework suggests that it was a comprehensive and accurate reflection of the domains that impact HRQoL in cats with DJD. A subsequent qualitative concept elicitation study with key informants including veterinarians and cat owners will help us confirm that the HRQoL domains and areas of overlap in our model reflect real-world experiences. Then, the development of an evidence-based tool to assess HRQoL in cats with OA can be undertaken.This study has developed a conceptual framework for a tool that can assess HRQoL in cats with OA. Seven domains were identified in relation to HRQoL in cats with OA: mobility, physical appearance, energy and vitality, mood, pain expression, sociability, and physical and mental wellbeing. The three main domains were pain expression, mobility, and physical and mental wellbeing. This conceptual framework suggests that effective management of chronic pain in cats with OA may improve their HRQoL. However, other domains can also negatively impact HRQoL. The findings of this study can be used to inform the development of an evidence-based tool to assess HRQoL in cats with OA.The original contributions presented in the study are included in the article/GY: study design, data analysis, and manuscript development. DB and TG: data collection, data analysis, and final report preparation. FF: study design, data collection, data analysis, and manuscript development. AW: study design, quality assurance, and manuscript preparation. KM: quality assurance and manuscript preparation. IO: study design and manuscript development.The authors declare that this study was funded by Zoetis.AW, KM, and IO work for Zoetis that funded the developed the conceptual framework. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "An owner's ability to detect changes in the behavior of a dog afflicted with osteoarthritis (OA) may be a barrier to presentation, clinical diagnosis and initiation of treatment. Management of OA also relies upon an owner's ability to accurately monitor improvement following a trial period of pain relief. The changes in behavior that are associated with the onset and relief of pain from OA can be assessed to determine the dog's health-related quality of life (HRQOL). HRQOL assessments are widely used in human medicine and if developed correctly can be used in the monitoring of disease and in clinical trials. This study followed established guidelines to construct a conceptual framework of indicators of HRQOL in dogs with OA. This generated items that can be used to develop a HRQOL assessment tool specific to dogs with OA. A systematic review was conducted using Web of Science, PubMed and Scopus with search terms related to indicators of HRQOL in dogs with osteoarthritis. Eligibility and quality assessment criteria were applied. Data were extracted from eligible studies using a comprehensive data charting table. Resulting domains and items were assessed at a half-day workshop attended by experts in canine osteoarthritis and quality of life. 
Domains and their interactions were finalized and a visual representation of the conceptual framework was produced. A total of 1,264 unique articles were generated in the database searches and assessed for inclusion. Of these, 21 progressed to data extraction. After combining synonyms, 47 unique items were categorized across six domains. Review of the six domains by the expert panel resulted in their reduction to four: physical appearance, capability, behavior, and mood. All four categories were deemed to be influenced by pain from osteoarthritis. Capability, mood, and behavior were all hypothesized to impact on each other while physical appearance was impacted by, but did not impact upon, the other domains. The framework has potential application to inform the development of valid and reliable instruments to operationalize measurement of HRQOL in canine OA for use in general veterinary practice to guide OA management decisions and in clinical studies to evaluate treatment outcomes. Osteoarthritis (OA), also known as degenerative joint disease, refers to the irreversible degeneration of cartilage and other tissues within joints. Prevalence estimates in the late 1990s have been as high as 20\u201330% of dogs over the age of 1 year . Pain asOwners may not recognize signs of OA that are presented by their dogs and this can be a major barrier to initiation of treatment . Case maHumans are generally able to assess their own quality of life. This may be performed using a patient-reported outcome measure (PROM), an instrument that allows the patient to report on aspects of their own health, such as pain and QOL , 18. NonHRQOL assessments are widely used in human medicine. The Food and Drug Administration (FDA) patient-reported outcome (PRO) guidance outlines methods for the development and validation of human PROs and HRQOL measures that can be used to support claims in medical product labelling . This guThe current study aimed to construct a conceptual framework of indicators of HRQOL in dogs with OA, focusing on the subjective experience of the dog. The CF was based upon a systematic literature review that was reviewed by an expert panel. The framework has potential application to inform the development of valid and reliable instruments to operationalize measurement of HRQOL in canine OA for use in general veterinary practice to guide OA management decisions and in clinical studies to evaluate treatment outcomes.A systematic literature review was performed in August 2020 following the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines . Three sQuality assessment was performed using a modified version of the STROBE guidelines . This coIndicators of osteoarthritis and/or its impact on quality of life were extracted to generate \u201citems\u201d from each study. Only indicators listed within the body of the manuscript were included. Data were initially extracted by one of four authors , with the process repeated independently by a single author (CR) for all studies. A comprehensive data charting table based on a previous review for HRQOL in cats with osteoarthritis and refiThe resulting domains and items were assessed at a workshop attended by members of the research team, a specialist in small animal internal medicine with a PhD in decision making in dogs with osteoarthritis and a veterinary surgeon who founded an initiative to provide advice and education on canine arthritis to owners and veterinary professionals. 
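After extraction, synonymous items reported by different studies are merged into unique items and their frequency across studies is counted before the items are grouped into domains (in this review, 134 raw items reduced to 47 unique items). A small sketch of that bookkeeping step is shown below; the item wordings and the synonym map are invented examples, not the review's actual coding.

```python
# Sketch of merging synonymous extracted items and counting how many studies report each.
from collections import Counter

# Items extracted per study (study id -> list of raw item strings); hypothetical examples.
extracted = {
    "study_01": ["difficulty climbing stairs", "stiff gait", "reluctance to jump"],
    "study_02": ["trouble with stairs", "lameness", "reduced play"],
    "study_03": ["stiff gait", "reluctance to jump", "less playful"],
}

# Map raw wordings to a canonical item label (the 'combining synonyms' step).
synonyms = {
    "trouble with stairs": "difficulty climbing stairs",
    "less playful": "reduced play",
}

counts = Counter()
for study, items in extracted.items():
    canonical = {synonyms.get(i, i) for i in items}   # a set, so each item counts once per study
    counts.update(canonical)

for item, n in counts.most_common():
    print(f"{item}: reported in {n} studies")
print(f"{len(counts)} unique items after combining synonyms")
```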
Prior to the meeting, panel attendees were provided with a PowerPoint file covering the methodology and results including a table of key domains/items and a brThe workshop took place in November 2020 online and lasted 2 1/2 h. Domains were discussed and final titles decided, with each item assessed for designation to the correct domain. Additionally, any potential missing items were discussed. Hypothesized directional interactions between the domains were identified and a visual representation of the conceptual framework was produced.A total of 1,264 unique articles were generated in the database searches and assessed for inclusion using the title and abstract . Of thesn = 11), translation (n = 2), or use (n = 3) of pain or HRQOL assessment instrument[s]. There were five instruments: the Liverpool Osteoarthritis in Dogs (LOAD) score, (The eligible studies dated from 2003 to August 2020 inclusive. Study characteristics are shown in ) score, , 46, Can) score, , 44, 46,) score, , 42, 46,) score, , 36, 37 ) score, , 45, 48.Data extraction produced 134 unique items categoriThe expert workshop resulted in four remaining categories: physical appearance, capability, behavior, and mood . The oriThe four domains were incorporated into a visual conceptual framework model . All fouThe current study aimed to construct a conceptual framework of indicators of HRQOL in dogs with OA, through a systematic literature review and expert panel, focusing on the subjective experience of the dog. This approach is consistent with FDA guidance for the Over half of the studies described the development of one of five existing instruments or their translation into other languages. These instruments are varied in their relation to HRQOL in dogs with osteoarthritis. The LOAD and HCPIMore relevant to HRQOL was the Glasgow University health-related dog behavior questionnaire (GUVQuest) which was specifically developed to assess the impact of pain, including from OA, on HRQOL , 45, 48.Three studies used one of these instruments, with one performing a welfare assessment of a population of dogs . A furthOf the remaining six studies, two reported clinical-based methods to OA assessment: mechanical joint threshold and therMore than half of the items were unique to a single manuscript. This was especially the case in the original domains of \u201cenergy\u201d and \u201ctemperament\u201d and perhaps reflects the subjectivity of these domains. There was scope for different semantic interpretations for several items because their exact meaning within the context of their respective source articles was not always clearly characterized. Combination of synonyms reduced the number of items from 134 to 47. The mobility domain had fewer unique items and contained most of the most commonly reported items. There are several potential reasons for this. It could indeed be that these are the most common issues in OA in dogs, either as the most important to owners, or the most obvious signs. This is reflected in the clinical signs reported to be useful in a presumptive OA diagnosis . HoweverThe final conceptual framework comprised four domains representing HRQOL of dogs with osteoarthritis. The final domains were mobility, behavior, mood, and physical appearance which were all deemed to be affected by pain. A negative impact of pain on QOL has been reported in humans \u201352. 
It iThe resulting CF is consistent with a non-peer reviewed model produced by Canine Arthritis Management (CAM) representing the impact of chronic pain on capability, behavior, muscular changes and posture . AlthougThere were some limitations to this study, perhaps the main one being that the multiple references to existing pain and HRQOL instruments in the screened papers, which limited the usefulness of quantification of item frequency in this review. This may have resulted in an overestimation of the importance of mobility and exercise-based items. However, the discussion from the expert panel allowed the review of these items and mobility and exercise (termed \u201ccapability\u201d) were deemed to be important. There may also have been bias at the review level by searching publications only in the English language. Again, the use of an expert panel allowed for the addition of any items that may have been missed in the review and no items were added.The conceptual framework developed by this study has highlighted the complexity of HRQOL in dogs with osteoarthritis, and the impact of pain on all other HRQOL domains. It can be used as a first step in the development of a disease-specific instrument to measure HRQOL in dogs with osteoarthritis, in order to encourage and better monitor treatment. A future qualitative concept elicitation study with key informants including veterinarians and dog owners would provide additional evidence to validate whether the HRQOL domains and their interrelations in our model reflect real-world experiences.RRID:SCR_005901Europe PubMed Central, The original contributions presented in the study are included in the article/AC, DB, and IO contributed to conception and design of the study. BA performed the literature search. BA, GC, SM, and LG performed sifting, data extraction, and quality assessment. Data extraction was audited by CR. AC, BA, CR, and DB planned and attended the expert workshop. ZB and HC provided expert advice at the workshop. CR wrote the first draft of the manuscript. BA wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.The authors declare that this study received funding from Zoetis under the auspices of a wider collaboration: the Veterinary Health Innovation Engine (vHive). In addition to the funding described above (Conflicts of Interest), the funder had the following involvement in the study: compensation to expert panel.DB and IO are employees of company Zoetis. GC, BA, and CR's positions are funded by Zoetis, LG has a PhD scholarship which is partially funded by Zoetis. AC is an academic Principal Investigator for other projects funded or co-funded by Zoetis. The attendance of ZB and HC at the expert panel was compensated by Zoetis. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "The multigene family of boPAG belongs to the group of aspartic proteases. 
The accumulation and circulation in maternal blood and milk has made boPAG very useful and important for pregnancy diagnosis in cattle. The goal of the present study was to develop and validate a new Sandwich-ELISA which allows the detection of boPAG in maternal serum and whole milk. Therefore, 984 serum and 928 milk samples were collected monthly from 231 Holstein Friesian cows (Bos Taurus) from one week after insemination (p.i.) until six weeks postpartum. The ELISA is able to identify a cow as being pregnant at day 30 p.i. in serum and at day 40 p.i in milk with threshold values of 1.0 ng/ml in serum and 0.0165 ng/ml in milk. The postpartum half-life of boPAG was estimated to be 6.4 days in serum and 7.1 days in milk. The boPAG profile established during pregnancy in serum and milk showed a typical pattern. The amount of boPAG found in milk was 1.5 % of the amount of boPAG present in serum. In conclusion, a Sandwich-ELISA has been developed to quantify boPAG in serum and in whole milk simultaneously with the same test procedure. This is time saving for farmers and more efficient for laboratories.Bovine pregnancy-associated glycoproteins ( PAG) has turned out to be an alternative method for pregnancy diagnosis [An accurate and timely pregnancy diagnosis is of considerable economic relevance in livestock management, especially in the cattle industry. Traditionally, pregnancy testing is done by manual or ultrasonographic examination per rectum or with the detection of progesterone as a non-pregnancy specific marker in serum or milk. In the last three decades, the identification and immunological detection of \"pregnancy-associated glycoproteins\" (iagnosis \u20134.boPAG) is incomplete [These proteins are expressed by different cell types of the placenta. They are products of an unusual gene-family, which phylogenetically belongs to aspartic proteinases and is present in the Cetartiodactyla order . PAG arecomplete , but thecomplete , 9\u201312. Dcomplete , 13\u201315. complete . Some stcomplete , 18. ForToday, there are different ELISAs available for detection of PAG in bovine milk and serum but most of the milk ELISA use skimmed milk instead of whole milk. In some studies, PAG has been measured in unskimmed milk, but using a commercial test kit, which cannot quantify PAG concentrations in milk , 12, 19.The present study addresses the establishment and validation of a new ELISA that quantifies boPAG concentrations in blood or whole milk samples in one ELISA system within a few hours. This is time saving for farmers and more efficient for laboratories, since only one test is necessary which enables a parallel measurement of blood and milk samples.The study is in accordance with the German legal and ethical requirements of appropriate animal procedures. Animals were not purposely euthanized for this study. Tissue samples were taken during the conventional slaughter process. The consultation of the institutional Animal Welfare Body is documented under no. E5-18.n = 16) were opened approximately 20\u201330 min after killing. Thereafter, the cotyledons were dissected from the caruncula and extensively washed with 0.9 % NaCl. Subsequently, the samples were immediately stored on ice and transported to the laboratory where they were stored at -20\u00b0C until further processing. 
The gestation stage was estimated by measuring the crown-rump-length of the fetuses [For protein purification, cotyledon samples from different pregnancy stages were collected from an abattoir located in Germany, afterbirth cotyledons were obtained from a local dairy farm directly after calving. Uteri of pregnant cows at 4\u00b0C. The buffer tissue ratio was 5:1 (v/w). Protease inhibitors were added during the homogenization process. The mixture was stirred for 20 min at 4\u00b0C. Then the homogenate was centrifuged at 3,000 x g and 4\u00b0C for 30 min. The pellet was discarded. The supernatant was transferred in a beaker and stirred. Ammonium sulfate was slowly added to achieve 40% saturation. Thereafter the supernatant solution was gently stirred at 4\u00b0C for 1 h. Then it was centrifuged at 3,000 x g and 4\u00b0C for 30 min. Again, the pellet was discarded and the supernatant was adjusted to 80% ammonium sulfate saturation. After stirring the sample at 4\u00b0C for 1 h, it was centrifuged at 27,000 x g and 4\u00b0C for 1 h. The pellet was retained and dissolved in Tris/Cl buffer . Following this step, the sample was stored at -20\u00b0C until further analysis.The protein extraction was performed according to Zoli et al. and KlisFPLC) with the following steps was used: anion-exchange (Acetate basis), cation exchange (Acetate basis), gel filtration, cation exchange (Tris basis), hydrophobic interaction chromatography and anion exchange (Bis-Tris basis). Between the different FPLC steps the fractions were checked for boPAG content by using an available ELISA previously established by Friedrich and Holtz [2HPO4, 0.0075 M KH2PO4, 0.0135 M KCl; pH 7.3) plus 0.5 M NaCl buffer. A maximum of 10 ml per sample was loaded on the column. Fractions of 5 ml were collected. The flow rate was 2 ml/min and the absorbance was recorded at 280 nm. In the next step, all boPAG-containing and pooled fractions were subjected to a buffer exchange to cation buffer using the desalting column. Subsequently, a second cation exchange was carried out using the Source 30S packed column equilibrated with the same buffer. The bound proteins were eluted with the exponential gradient of NaCl. The protein content was monitored by measuring the UV absorbance at 280 nm. After cation exchange, ELISA-checked and pooled fractions were loaded onto a column for hydrophobic interaction chromatography after the addition of the same volume of 4 M (NH4)2SO4. Previously, the columns had been equilibrated with a buffer containing 50 mM Na2HPO4 and 2 M (NH4)2SO4 (pH 7). Proteins were eluted using a linear gradient to water. Fractions of 0.5 ml were collected and assayed. Those with high antigenic activity were pooled, and buffer was exchanged to an anion exchange buffer using a fresh desalting column. Following this, a second anion exchange was performed using a Source 30Q packed column (Tricorn 10/100 equilibrated with anion exchange buffer). After elution of the unbound proteins, the exponential NaCl gradient was applied at a flow rate of 3 ml/min. 1 ml fractions were automatically collected and analyzed by ELISA [For protein purification, a fast protein liquid chromatography . In total seven rabbits were immunized with boPAG in PBS (early pregnancy (2 rabbits), mid pregnancy (1 rabbit), late pregnancy (3 rabbits) and afterbirth (1 rabbit)). Each rabbit received 5\u20137 times multiple intra dermal injections with approximately 250 \u03bcg purified boPAG-fraction with Montanide ISA 206 as adjuvant. 
Blood collection were carried out two and three weeks after each antigen injection starting after the third boost and final collection after the last boost. The antisera within rabbits were pooled.Afterwards, each antibody was tested with itself and the other antibodies for coating and as biotinylated antibodies. In contrary to expectations, two different antibodies against late pregnancy PAG preparations (IgG 1438 and IgG 1440) were found to be most suitable for the development of the Sandwich-ELISA. This pair of antibodies showed the best differentiation between pregnant and non-pregnant animals in combination with high specific PAG binding and low background in the assay system.3; pH 9.6). Antibodies were purified by using rmp Protein A Sepharose Fast Flow . After overnight incubation at 4\u00b0C, the wells were blocked with washing buffer and then washed five times with 350 \u03bcl washing buffer. Plates were stabilized by using 300 \u03bcl of 20% sucrose solution. After decantation, the plates were dried at room temperature and stored with silica gel at 4\u00b0C until use. A standard stock solution of 10 ng/ml was prepared from a cotyledonary extract (crude extract) from mid pregnancy in standard buffer (PBS-T (PBS with 0.05% Tween 20), 0.1 M NaH2HPO4, 10% PAG-free bovine serum ) and stored at -20\u00b0C until use. The boPAG-content of the extract was determined with the same ELISA mentioned above [2HPO4, 10 % PAG free-bovine serum). Afterwards, two-fold serial dilutions were prepared freshly before every use. This procedure resulted in seven standards with the following concentrations: 500 pg/ml, 250 pg/ml, 125 pg/ml, 62.5 pg/ml, 31.3 pg/ml, 15.6 pg/ml and 7.8 pg/ml. Dilution buffer was used as 0-Standard (0 pg/ml) and negative control. Positive control samples were prepared from two serum or milk samples with high and midrange boPAG-concentrations. For this purpose, the serum sample with a high concentration (33.5 ng/ml) was diluted 1:100 in dilution buffer and the serum sample with a medium concentration (3.5 ng/ml) was diluted 1:20. The milk sample with a high (442.6 pg/ml) and medium concentration (48.4 pg/ml) was used undiluted. The same control samples were used on each plate throughout the experiment.The PAG-Sandwich-ELISA utilizes 96 well microtiter plates . These plates were coated with 100 \u03bcl of anti-PAG polyclonal rabbit antibody at a concentration of 1 \u03bcg/ml in coating buffer (0.05 M NaHCOp.i.)), 1:100 in mid pregnancy (>30 d p.i.) and 1:1,000 in later pregnancy (>150 d p.i.). Milk samples were used undiluted in the beginning of gestation (<160 d p.i.) and diluted 1:10 until the end of gestation (>160 d p.i.). The dilutions at the aforementioned gestation stages were necessary to allow accurate concentration measurement of the samples in the range of the standard curve. All standards, samples and controls were assayed in duplicate.Bovine serum samples were diluted in dilution buffer 1:10 in early pregnancy , 10% PAG free-bovine serum) were added into each well. Afterwards, 50 \u03bcl of prepared standard, control or pre-diluted sample were added, and the mix was incubated for 2 h in the dark and at room temperature (20\u201325\u00b0C) on a shaker (500 rpm). After incubation, the plate was washed three times with 350 \u03bcl of diluted washing buffer before adding biotin-conjugated anti-boPAG polyclonal rabbit antiserum (35 ng/ml) to the wells. The antibody was conjugated to Biotinamidohexanoic acid N-hydroxysuccinimide ester (biotin). 
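The working standards are produced by two-fold serial dilution from the 10 ng/ml stock down to the seven concentrations listed above, and samples are pre-diluted according to gestation stage so that readings fall within that range. The short sketch below simply generates the dilution series and back-calculates a sample concentration from its pre-dilution factor; the example sample value is invented.

```python
# Sketch: generate the two-fold standard series used on each plate and back-calculate a
# pre-diluted sample. Concentrations are those quoted above; the example reading is invented.
stock_pg_ml = 10_000.0            # 10 ng/ml standard stock
top_standard_pg_ml = 500.0        # highest working standard
n_standards = 7

# Two-fold serial dilution from the top standard: 500, 250, 125, 62.5, 31.3, 15.6, 7.8 pg/ml.
standards = [top_standard_pg_ml / 2**i for i in range(n_standards)]
print("standards (pg/ml):", [round(c, 1) for c in standards])
print("stock -> top standard dilution factor:", stock_pg_ml / top_standard_pg_ml)  # 20-fold

def back_calculate(measured_pg_ml, dilution_factor):
    """Concentration in the original sample after correcting for pre-dilution."""
    return measured_pg_ml * dilution_factor

# Example: a late-pregnancy serum sample pre-diluted 1:1,000 that reads 35 pg/ml in the well.
print(f"sample concentration: {back_calculate(35.0, 1000) / 1000:.1f} ng/ml")
```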
The biotin antibody conjugate was diluted in biotinylated antibody buffer . To each well, 100 \u03bcl of diluted biotin antibody conjugate was added, followed by 30 min incubation in the dark at room temperature on a shaker (500 rpm). After three washing steps with 350 \u03bcl of diluted washing buffer, 100 \u03bcl of streptavidin conjugated to horseradish peroxidase (EC 1.11.1.7) (HRP) was added into each well. The streptavidin-HRP conjugate was diluted 1:500 in HRP buffer . The plate was incubated for 30 min in the dark at room temperature on a shaker (500 rpm), followed by 5 additional washes with 350 \u03bcl of diluted washing buffer. Then 100 \u03bcl of 3,3\u2032,5,5\u2032-Tetramethylbenzidin (TMB) substrate were added, followed by an incubation for 20 min in the dark at room temperature on a shaker (500 rpm). The enzyme reaction was stopped by the addition of 100 \u03bcl of 1 M HCl. The color changed from blue to yellow and the color intensity was measured spectrophotometrically at 450 nm with a 650 nm reference filter using an EMax Plus Microplate reader with software SoftMax Pro 6.5.1 . Automatic data reduction was done using a 4-parameter logistic (4-PL) curve fit.For the first step, 50 \u03bcl of matrix solution and four milk samples , were analyzed 10 times in duplicate within one plate in three independent assays. On the other hand, the interassay variability was determined measuring two milk and serum samples in duplicate on 20 plates. These samples are the same as those used as positive controls. The detection limit was determined measuring 20 0-Standard samples in duplicate plus three standard deviations. A total of 52 assays were performed for the analysis of all samples in the validation study (26 serum assays and 26 milk assays).Bos Taurus) from one week after insemination (p.i.) until six weeks postpartum (p.p.). The last calving of the sampled cows was a minimum of 87 days ago. The animals were housed on different farms in Lower Saxony and Hesse (Germany). The blood sample and the corresponding milk sample for each animal were collected on the same day. Confirmation of pregnancy in these animals was carried out by regular analysis of all serum samples using the well-established ELISA by Friedrich and Holtz [For the validation of the assay and the establishment of PAG profiles, 984 blood and 928 milk samples were collected monthly from 231 Holstein Friesian cows were stripped from a healthy quarter before milking and stored in milk preservation tubes with ProClin as preservative at -20\u00b0C until assayed. All serum and milk samples were vortexed following thawing and before dilution or analysis steps.All blood samples in this study were collected from tail blood vessels. Approximately 12 ml of whole blood was captured in sample tubes for serum collection with separating agent . They were immediately cooled and shipped to our laboratory. Then the blood was centrifuged at 2,800 x P < 0.05.All experimental results were analyzed with R 3.6.1 . A nonlinear regression and a linear regression (for samples from gestation day 30 onwards) were used to estimate the correlation between boPAG-concentration and days after insemination. To determine the earliest gestation day at which the test can significantly differentiate between pregnant and nonpregnant animals, a one-sided ANOVA was used. Post hoc evaluation was performed with a two-sided Dunnett T-Test . The resPPV) and the negative predictive value (NPV) were calculated for various threshold values. 
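Data reduction uses a 4-parameter logistic (4-PL) fit of optical density against standard concentration, and unknowns are read off the inverted curve; the plate-reader software performs this automatically. For readers who want to reproduce the step, a minimal sketch with scipy is given below; the OD values are invented and only illustrate the fitting and back-calculation.

```python
# Minimal 4-parameter logistic (4-PL) standard-curve fit and back-calculation sketch.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4-PL: a = response at zero dose, d = response at infinite dose,
    c = inflection point (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([500, 250, 125, 62.5, 31.3, 15.6, 7.8])      # pg/ml standards
od   = np.array([2.10, 1.65, 1.15, 0.72, 0.43, 0.26, 0.17])  # blank-corrected A450-A650 (example)

params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 1.0, 60.0, 2.5], maxfev=10000)
a, b, c, d = params

def inverse_four_pl(y):
    """Back-calculate concentration from a measured OD (valid between the two asymptotes)."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print("fitted 4-PL parameters:", np.round(params, 3))
print(f"sample OD 0.55 -> {inverse_four_pl(0.55):.1f} pg/ml")
```

Samples whose optical density falls outside the span of the standard curve cannot be back-calculated reliably, which is why the protocol above prescribes stage-dependent pre-dilution of serum and milk samples.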
Therefore, the R-package \u201cpROC\u201d [The clinical sensitivity, clinical specificity, the positive predictive value (e \u201cpROC\u201d was usedROC) analysis was done [AUC) was calculated to measure the ability of the test to correctly classify pregnant and nonpregnant cows. A perfect test has an AUC of 1.0.A receiver operating characteristic (was done to deterwas done . FurtherP<0.001), which is a clear indication that boPAG (and therefore pregnancy) recognition of both assay types is comparable. Recovery of serial dilution of serum and milk samples was 109.5% and 112.1%, respectively.The standard curve of the Sandwich-boPAG-ELISA showed a linear pattern. The characteristics of the ELISA are shown in 155 serum samples were collected from 62 nonpregnant animals and 666 serum samples were collected from 154 pregnant animals monthly throughout pregnancy until six weeks postpartum. These were the samples that met requirements for further analysis. Unless otherwise stated, results in the following section are presented as 10-day-means \u00b1 SEM. Individual boPAG-concentrations in the course of pregnancy are shown in P = 0.002, Dunnett-Test). This is the result of a considerably high variation in the concentrations of boPAG in serum, as indicated by the standard errors from the average values.The overall mean boPAG concentration of non-pregnant cows was 0.12 ng/ml \u00b1 0.03 ng/ml and the overall mean boPAG concentration of pregnant cows was 34.1 ng/ml \u00b1 2.1 ng/ml. 10-day means of boPAG serum concentration throughout pregnancy in comparison to the nonpregnant control group are shown in The results of the ROC analysis at various threshold values are shown in n = 9) in the period from three weeks after calving to six weeks after calving. Except for one cow, all animals were sampled only once. In total, there were four samples from the third week p.p. , one sample from the fourth week p.p. (day 25), one sample from the fifth week p.p. (day 35) and three samples from the sixth week p.p. . The mean boPAG concentration in the post-partum period is 135.7 \u00b1 38.4 ng/ml. The average concentration of PAG in serum decreases from 246.1 \u00b1 36.9 ng/ml in week three after parturition to 31.6 \u00b1 7.2 ng/ml by post-partum week six. A simple linear regression model was fitted to the data after ln transformation .Individual concentrations of boPAG in milk during pregnancy are shown in The results of the ROC analysis for milk samples at various threshold values are shown in In the post-partum period (three weeks p.p. to six weeks p.p.), the mean milk boPAG-concentration was 1.39 \u00b1 0.41 ng/ml. The samples were collected from the same animals as described in the serum part. The mean milk boPAG concentration declined from 2.27 \u00b1 0.5 ng/ml three weeks p.p. to 0.3 \u00b1 0.06 ng/ml six weeks p.p. A simple linear regression model with ln-transformed boPAG as dependent variable was used to estimP<0.001) in pregnant and r = 0.11 (P = 0.17) in nonpregnant cows to realize this intent. The detection of pregnancy specific proteins in serum or in milk in bovine species is a well-established diagnostic method , 22, 28.Therefore, the goal of the present work was the development of an ELISA that is able to quantify boPAG concentrations in serum and milk simultaneously. 
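The ROC analysis, AUC, and the sensitivity, specificity, PPV and NPV at candidate thresholds (with Youden's index used to select the optimal cut-off, as in the supporting figures) were computed in R with the pROC package. An equivalent sketch in Python with scikit-learn, on simulated concentrations, is shown below purely for illustration.

```python
# Sketch of the ROC / threshold analysis (done in the paper with R and 'pROC'); simulated data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Simulated day-30 boPAG serum concentrations (ng/ml) for pregnant and non-pregnant cows.
pregnant = rng.lognormal(mean=1.5, sigma=0.6, size=150)
open_cows = rng.lognormal(mean=-2.5, sigma=0.8, size=60)
conc = np.concatenate([pregnant, open_cows])
status = np.concatenate([np.ones(150), np.zeros(60)])   # 1 = pregnant

auc = roc_auc_score(status, conc)
fpr, tpr, thresholds = roc_curve(status, conc)

# Youden's index J = sensitivity + specificity - 1, maximised to pick the cut-off.
best = np.argmax(tpr - fpr)
cutoff = thresholds[best]

def metrics_at(threshold):
    pred = conc >= threshold
    tp = np.sum(pred & (status == 1)); fp = np.sum(pred & (status == 0))
    fn = np.sum(~pred & (status == 1)); tn = np.sum(~pred & (status == 0))
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    ppv, npv = tp / (tp + fp), tn / (tn + fn)
    return sens, spec, ppv, npv

sens, spec, ppv, npv = metrics_at(cutoff)
print(f"AUC = {auc:.3f}; Youden-optimal cut-off = {cutoff:.3f} ng/ml")
print(f"sensitivity = {sens:.3f}, specificity = {spec:.3f}, PPV = {ppv:.3f}, NPV = {npv:.3f}")
```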
To the best of our knowledge, only a few assays are described in literature, which are able to quantify PAG in bovine milk , 31, 32.For our newly developed Sandwich-ELISA we decided to use polyclonal instead of monoclonal antibodies, as they have several advantages for our particular application. In cattle, roughly 20 different PAG members and related paralogs are known with largely varying temporal and spatial expression and glycosylation patterns during gestation , 7, 8. TThe validation of our newly developed boPAG-Sandwich-ELISA was performed on the basis of serum and milk samples from 216 cows throughout gestation. The earliest possible detection of boPAG in serum with our Sandwich-ELISA turned out to be day 17 p.i.. At this time point the concentration of boPAG in serum is very low 35 pg/ml). Such low concentrations of boPAG in early pregnancy should be looked at with caution. Zoli et al. supposed pg/ml. SIn our study, we found a linear decline in ln-serum boPAG concentration in the postpartum period. Following a first-order elimination pattern, the postpartum serum half-life of boPAG was estimated to be 6.4 days. These findings are in line with those reported by other working groups with described half-lives ranging from 4.3 days to 8.86 days , 22, 38.Studies about PAG-concentration in milk of cattle throughout pregnancy are rare in literature. There are only a few assays described, which quantify boPAG in milk , 32, 39.With aid of our newly developed Sandwich-ELISA boPAG-detection in whole milk is possible as early as day 26 p.i. At this time point, the concentration of boPAG in milk is 41.2 pg/ml. This is an earlier time point as described by Friedrich and Holtz , who detth week of pregnancy, through 0.20 ng/ml on average on day 119 of pregnancy, 1.28 ng/ml on day 168 p.i., to 4.84 ng/ml on day 201 p.i.. Furthermore, in their study they found a similar surge of boPAG concentration in milk around day 150 of pregnancy. Overall, we can conclude from these results, that our test found nearly the same pattern of boPAG in milk but lower concentrations. This may be due to the fact, that we used whole milk instead of skimmed milk. Since PAG are water soluble and associated with the aqueous portion of the milk, fat in whole milk may act as a source of interference [The boPAG profile obtained in the Sandwich-ELISA shows lower concentrations in milk compared with other assays used for boPAG quantification in milk , 32, 39.rference . NeverthIn our study, we found a linear decline in ln-milk PAG concentration in the postpartum period as already described for serum samples. Following a first-order elimination pattern, the postpartum half-life of PAG in milk was estimated to be 7.1 days, which is quite similar to the half-life found in serum and it seems that there is no difference in the elimination rate of boPAG in serum and milk. However, the only difference is the lower concentration of boPAG in milk compared to serum which makes them undetectable for other assays around day 30 postpartum , 42. WitP<0.001). The amount of PAG found in milk was 1.5 % of the amount of PAG present in serum, but the profiles were nearly parallel with exception of an aberration in late pregnancy. In literature information about the correlation between milk and blood PAG concentration is manifold ranging from 0.64 , 0.70 (estation ) and 0.7estation ) to 0.81estation ). 
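The post-partum half-lives reported here (6.4 days in serum, 7.1 days in milk) follow directly from the first-order elimination model: a linear regression of ln-transformed boPAG concentration on days post-partum yields the elimination rate constant k, and the half-life is ln(2)/|k|. A short sketch with illustrative values chosen to roughly mimic the reported serum decline:

```python
# Half-life under first-order elimination: regress ln(concentration) on days post-partum
# and convert the slope to a half-life. Values below are illustrative, not the study data.
import numpy as np

days = np.array([21, 22, 23, 24, 25, 35, 37, 39, 41])        # days post-partum (example)
conc = np.array([250, 230, 215, 200, 180, 60, 48, 40, 33])    # boPAG ng/ml (example)

slope, intercept = np.polyfit(days, np.log(conc), 1)          # ln(conc) = intercept + slope*day
half_life = np.log(2) / abs(slope)
print(f"elimination rate constant k = {abs(slope):.3f} /day, half-life = {half_life:.1f} days")
```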
ReasonThere are also different information in literature about the amount of PAG in milk compared to blood ranging from 0.6 % - 16.7% in quantitative assays , 32, 39 In conclusion, a new Sandwich-ELISA was developed for the detection of boPAG in serum and milk of pregnant cattle within one system. This is time saving for farmers and more efficient for laboratories. From fourth week after insemination onwards, the Sandwich-ELISA was able to identify a cow as being pregnant with high sensitivity and specificity . The detected boPAG-profile showed a typical pattern as described by other studies. With the possibility to measure boPAG concentration in whole milk, stressful effects during sampling (e.g. of venepuncture) are avoided and there is no need for special equipment or experience. Furthermore, to the best of our knowledge, only a few assays are described, which are able to quantify PAG in bovine milk and there is only one commercially available ELISA for detection of PAG in bovine milk which is designed as qualitative ELISA. The use of a quantitative assay has some advantages over a qualitative assay in research and clinical purposes. The quantification of an analyte gives more detailed information about the concentration range in different physiological states and over time (e.g. during pregnancy). Furthermore, it allows the comparison of concentrations between different individuals. This is not possible with qualitative or semi quantitative results.For the reasons mentioned above, the described quantitative Sandwich-ELISA could be a very useful tool for pregnancy diagnosis in cattle.S1 Fig2 = 0.91, P<0.001).The correlation was estimated with a linear regression Click here for additional data file.S2 FigThe Youden\u2019s index was used to find the best cutoff value that optimizes sensitivity and specificity (indicated by the red dot).(TIF)Click here for additional data file.S3 FigThe Youden\u2019s index was used to find the best cutoff value that optimizes sensitivity and specificity (indicated by the red dot).(TIF)Click here for additional data file.S1 Table(PDF)Click here for additional data file.S2 Table(PDF)Click here for additional data file.S3 Table(PDF)Click here for additional data file.S4 Table(PDF)Click here for additional data file.S5 Table(PDF)Click here for additional data file.S6 Table(PDF)Click here for additional data file.S7 Table(PDF)Click here for additional data file.S8 Table(PDF)Click here for additional data file.S9 Table(PDF)Click here for additional data file.S10 Table(PDF)Click here for additional data file."} +{"text": "Although long-term separation has made discrepancies between parents\u2019 educational aspirations and children\u2019s own educational expectations among families with left-behind children (LBC), limited researches on the influence of these discrepancies on children\u2019s mental health are carried out at present. Based on China Family Panel Studies (CFPS) conducted in 2018, we selected 875 LBC aged 9~15 as the sample, explored the influence of the direction and degree of these discrepancies on LBC\u2019s depressive symptoms by hierarchical regression, and examined the mediating role of children\u2019s academic self-efficacy and mediation effect pathway with Baron and Kenny method and Bootstrap mediation analysis methods. Results showed that LBC\u2019s mental health was worse when parents\u2019 educational aspirations were higher than their children\u2019s educational expectations, compared to that without discrepancies. 
The degree of such discrepancies was negatively associated with LBC\u2019s mental health. In the relationship between the direction of discrepancies and LBC\u2019s depressive symptoms, academic self-efficacy played a mediating role partially. In addition, the study indicated that mothers played a significant role in the development of LBC\u2019s mental health. These findings also provided critical evidence for the intervention practice of LBC\u2019s mental health. China\u2019s rapid urbanization has caused large numbers of rural residents migrating to urban areas for work, resulting in the emergence of a potentially vulnerable sub-population, left-behind children (LBC) ,2. LBC rResearches on the mental health of left-behind children indicated that factors such as children\u2019s age , gender As for educational aspiration and educational expectation, existing researches distinguish the difference between the two concepts: educational aspiration refers to one\u2019s hope or goal on future educational achievements , while eSince educational expectations and education aspirations are expectations or hopes of future educational achievements, researchers mainly take children\u2019s academic performance as the outcome variable when exploring discrepancies on educational expectations or aspirations . For exaHowever, as we paid more attention to children\u2019s psychological problems in demography, sociology and other disciplines and tried to determine the stressors affecting children\u2019s mental health , researcAlthough prior researches discussed above have measured academic performance as outcome variables when exploring educational expectations/aspirations discrepancies, insufficient work has focused on the influence of mental health, or mental health was only considered as an \u201cadditional\u201d outcome variable. Given that one\u2019s psychological well-being in childhood is related to that in adulthood , we inveIn addition to exploring the direct effect of discrepancies between parents\u2019 educational aspirations and children\u2019s educational expectations on LBC\u2019s psychological well-being, the research aimed to examine the possible mediating variable and its effects. Previous studies have shown that there was a mediating effect of children\u2019s self-efficacy between discrepancies and children\u2019s academic performance . Self-efHypothesis\u00a01\u00a0(H1).The direction of discrepancies between parents\u2019 educational aspirations and children\u2019s educational expectations would be positively associated with LBC\u2019s depressive symptoms. In other words, the situation that parents\u2019 educational aspirations are higher or lower than their children\u2019s educational expectations would affect LBC\u2019s depressive symptoms significantly.Hypothesis\u00a02\u00a0(H2).The degree of discrepancies between parents\u2019 educational aspirations and children\u2019s educational expectations would be positively correlated to the depressive symptoms of LBC, namely, the greater the discrepancy between parents\u2019 educational aspirations and children\u2019s educational expectations is, the more depressive LBC are.Hypothesis\u00a03\u00a0(H3).Academic self-efficacy would play a mediating role in the relationship between discrepancies (direction or degree) and LBC\u2019s depressive symptoms.The research was based on the data from Chinese Family Panel Studies (CFPS) conducted in 2018. 
CFPS is a nationwide longitudinal social survey conducted by Institute of Social Science Survey in Peking University, using computer-aided survey technology in interviews. In CFPS, participants were conducted by face-to-face interviews aided by computer-assisted personal interviewing (CAPI) technology, computer-assisted telephone interviewing (CATI) technology and computer-assisted Web interviewing (CAWI) technology. It is designed to collect the information of all family members, including family structure, migration status, educational situation, physical and mental health, children\u2019s development, etc. The information covers 25 provinces/autonomous regions, representing 94.5% of the population in mainland China. The sex-age pyramid structure of data is almost in line with that of the census, with nationwide representativeness . CFPS collected data from 14,296 households in 2018, divided into databases of children, adults and families. In this study, the sample came from children database aged 9 to 15. 1025 samples of LBC were selected according to \u201cthe status of current household registration \u201d and \u201cduration of living with parent(s) in the past 12 months (less than 6 months)\u201d. Considering the severe impact of parents\u2019 divorce or death on LBC\u2019s psychological well-being, we excluded these two factors to examine the influence of discrepancies between parents\u2019 educational aspirations and children\u2019s educational expectations on LBC\u2019s mental health. After excluding the samples of parents\u2019 divorce, death (130 samples) and invalid ones (20 samples) as well as matching with the database of adults and the database of family according to family ID, the final sample came from 875 groups of LBC aged 9 to 15 and their parents in rural China. The reliability factor Cronbach\u2019s \u03b1 was 0.8401.Depressive symptoms. CES-D was used to analyze the mental health level of LBC in rural areas. Developed by Radloff in 1977, CES-D consists of 20 items , which are assessed according to the frequency of occurrence in the past week ,38. CFPSAs for the independent variables, \u201cparents\u2019 educational aspiration\u201d corresponded to \u201cchildren\u2019s educational level to be hoped\u201d in the questionnaire, and \u201cchildren\u2019s educational expectation\u201d corresponded to \u201cthe least educational level to be achieved\u201d answered by children. In terms of the differences between parents\u2019 educational aspirations and children\u2019s educational expectations, we simultaneously considered the two aspects as below: the first one was the direction of the differences between parents\u2019 educational aspirations and children\u2019s educational expectations; the second one was the degree of the differences between parents\u2019 educational aspirations and children\u2019s educational expectations.IV1: \u201cThe direction of discrepancies between parents\u2019 educational aspirations and children\u2019s educational expectations\u201d. It was divided into three categories: (1) parents\u2019 educational aspirations were higher than children\u2019s educational expectations; (2) parents\u2019 educational aspirations were lower than children\u2019s educational expectations; (3) parents\u2019 educational aspirations were the same as children\u2019s educational expectations.IV2: \u201cThe degree of discrepancies between parents\u2019 educational aspirations and children\u2019s educational expectations\u201d . 
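For readers unfamiliar with how a CES-D total is obtained from the 20 item responses, a minimal scoring sketch is given below. The reported score range (mean 30.24, cut-off 28) is consistent with each item being coded 1-4 by frequency in the past week and summed after reverse-coding the positively worded items; that coding, the positions of the reverse-coded items, and the example responses are assumptions made only for illustration.

```python
# Sketch of scoring one 20-item CES-D response (coding assumptions noted in the text above).
import numpy as np

N_ITEMS = 20
POSITIVE_ITEMS = [3, 7, 11, 15]      # 0-based positions of positively worded items (assumed)
CUTOFF = 28                          # threshold used in the paper to flag a depressive tendency

def cesd_score(responses):
    """responses: 20 integers in 1..4 (frequency of each symptom in the past week)."""
    r = np.asarray(responses, dtype=float)
    assert r.shape == (N_ITEMS,) and r.min() >= 1 and r.max() <= 4
    r[POSITIVE_ITEMS] = 5 - r[POSITIVE_ITEMS]     # reverse-code positive-affect items
    return r.sum()

example = [2, 1, 3, 4, 2, 2, 1, 3, 2, 1, 2, 4, 1, 2, 3, 4, 1, 2, 2, 1]
score = cesd_score(example)
print(f"CES-D total = {score:.0f}; exceeds cut-off of {CUTOFF}: {score > CUTOFF}")
```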
We re-coded education levels by years of education: illiterate/semi-literate = 0, elementary school = 6, junior high school = 9, senior high school = 12, college = 15, undergraduate =16, master = 19, doctor = 22. By comparing \u201cparents\u2019 educational aspirations\u201d with \u201cchildren\u2019s educational expectations\u201d, the absolute value of discrepancies between parents\u2019 educational aspirations and children\u2019s educational expectations was calculated to represent \u201cthe degree of discrepancies\u201d.In addition to the direct effect of the direction and the degree of the differences between parents\u2019 educational aspirations and children\u2019s educational expectations on LBC\u2019s depressive symptoms, the mediating effect of the differences between parents\u2019 educational aspirations and children\u2019s educational expectations on LBC\u2019s mental health was also studied. Although Pintrich & Elisabeth (1990) has conducted academic self-efficacy scale , limitedIndividual-level factors of LBC, including age, gender , constant communication with parents .Family-level factors of LBC, including migration types of parents , fathers\u2019 education level, mothers\u2019 education level, annual income of family. After controlling the effect of \u201cindividual-level factors\u201d and \u201cfamily-level factors\u201d on LBC, the study first conducted hierarchical regression analysis to examine whether \u201cthe direction of discrepancies between parents\u2019 educational aspirations and children\u2019s educational expectations\u201d and \u201cthe degree of discrepancies between parents\u2019 educational aspirations and children\u2019s educational expectations\u201d had significant effects on LBC\u2019s mental health level.Subsequently, the study constructed three regression models to test the mediating effect according to the procedure outlined by Baron and Kenny . First, As shown in On average, education years of parents\u2019 aspirations and children\u2019s expectations were 15.48 years (college or above) (SD = 2.35) and 14.31 years (high school or above) (SD = 3.09), respectively. Specifically, 76.2% of parents wished their children to obtain a college degree or above, while only 58.3% of children expected that themselves. In terms of discrepancies between parents\u2019 aspirations and children\u2019s expectations, the average absolute value of discrepancies was 2.27 years (SD = 2.76). More than half of parents\u2019 educational aspirations were the same as their children\u2019s educational expectations (50.46%). 35.85% of parents reported higher educational aspirations than their children\u2019s educational expectations, while 13.69% of parents reported lower aspirations than that of their children. As for LBC\u2019s depressive symptoms, the average CES-D score was 30.24 (SD = 6.14). Over half of LBC\u2019s CES-D score (63.20%) exceeded the standard score of 28, suggesting that they were in the tendency of depression.Since more than 60% of LBC were considered to be psychologically depressed, the study further analyzed group differences of mental health among LBC with different genders and migration types of parents see . 
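A minimal sketch of how the recoding and the two discrepancy variables described above could be derived is shown below; the column names (parent_aspiration, child_expectation, cesd_total) are hypothetical placeholders rather than CFPS variable names.

```python
import pandas as pd

# Mapping from reported education level to years of education, following the
# recoding described in the text.
EDU_YEARS = {
    "illiterate/semi-literate": 0,
    "elementary school": 6,
    "junior high school": 9,
    "senior high school": 12,
    "college": 15,
    "undergraduate": 16,
    "master": 19,
    "doctor": 22,
}

def add_discrepancy_vars(df: pd.DataFrame) -> pd.DataFrame:
    """Add IV1 (direction) and IV2 (degree) of the aspiration-expectation gap."""
    out = df.copy()
    out["parent_years"] = out["parent_aspiration"].map(EDU_YEARS)
    out["child_years"] = out["child_expectation"].map(EDU_YEARS)

    diff = out["parent_years"] - out["child_years"]

    # IV1: direction of the discrepancy (three categories).
    out["direction"] = pd.cut(
        diff,
        bins=[-float("inf"), -0.5, 0.5, float("inf")],
        labels=["parent_lower", "same", "parent_higher"],
    )
    # IV2: degree of the discrepancy (absolute difference in years).
    out["degree"] = diff.abs()

    # Depression-tendency flag at the CES-D cutoff of 28 mentioned in the text.
    out["depressed"] = (out["cesd_total"] > 28).astype(int)
    return out
```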
The analysis of depressive symptoms across LBC with different migration types of parents showed that LBC with both parents' migration had the highest CES-D score, followed by LBC with only mothers' migration, while LBC with only fathers' migration had the lowest CES-D score. Among them, the difference between the CES-D score of LBC with fathers' migration and that of LBC with both parents' migration was significant (p < 0.01), suggesting that children left behind by both parents had the most severe depression compared with children left behind only by fathers.

The regression results showed that both the direction and the degree of the differences between parents' educational aspirations and children's educational expectations exerted significant effects on LBC's mental health after controlling for individual and family factors. Compared with children whose expectations were the same as their parents' aspirations, the depressive symptom level of children whose parents held higher aspirations increased by 1.392. However, the effect of the direction in which "parents' aspirations were lower than children's expectations" on LBC's mental health was not statistically significant, indicating that Hypothesis 1 was partly supported. IV2 (the degree of discrepancies) was positively associated with LBC's depressive symptoms (p < 0.05), suggesting that for every additional year of discrepancy between parents' educational aspirations and children's educational expectations, the depression tendency of LBC increased by 28.7% (p < 0.05), which verified Hypothesis 2. In addition, compared with children left behind by fathers, the psychological well-being of children left behind by both parents was significantly worse (p < 0.01). Moreover, mothers' educational level was statistically significant in explaining the dependent variable (p < 0.001), suggesting that the higher the mother's educational level, the better the mental health of LBC. Age, sex, communication with parents, father's education level and family income level did not show the significant effects that had been hypothesized.

The mediating effect was tested with the Baron and Kenny (BK) method. Academic self-efficacy had a partial mediating function in the relationship between IV1 (the direction of discrepancies) and LBC's depressive symptoms, whereas no mediating effect was found between IV2 (the degree of discrepancies) and LBC's depressive symptoms. The influencing paths of the two dimensions of the discrepancies on LBC's depressive symptoms are illustrated in the accompanying figure.

The research explored the relationship between discrepancies and mental health of left-behind children in rural China, as well as the mediating effect of children's academic self-efficacy in this relationship. Firstly, the proportion of parents who held higher aspirations than their children's expectations was higher among left-behind children in rural areas than among children in general: compared with the overall CFPS 2018 statistics, this proportion was markedly higher (35.85% vs. 19.62%).
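To make the analytic steps behind the regression and mediation results reported above concrete, the sketch below outlines a hierarchical specification with the controls named in the text and the Baron and Kenny three-model test. It is illustrative only: the variable names are hypothetical and the paper's exact model form is not reproduced here.

```python
import statsmodels.formula.api as smf

# Individual- and family-level controls named in the text (hypothetical names).
CONTROLS = ("age + C(gender) + C(communication) + C(migration_type) + "
            "father_edu + mother_edu + family_income")

# Reference category: expectations equal to parents' aspirations.
IVS = 'C(direction, Treatment(reference="same")) + degree'

def baron_kenny(df):
    """Three-step Baron & Kenny test of academic self-efficacy as mediator."""
    # Step 1: IVs -> outcome (total effect), controls included.
    m1 = smf.ols(f"cesd_total ~ {IVS} + {CONTROLS}", data=df).fit()
    # Step 2: IVs -> mediator.
    m2 = smf.ols(f"self_efficacy ~ {IVS} + {CONTROLS}", data=df).fit()
    # Step 3: IVs + mediator -> outcome; a shrunken but still significant IV
    # coefficient suggests partial mediation, a non-significant one full mediation.
    m3 = smf.ols(f"cesd_total ~ {IVS} + self_efficacy + {CONTROLS}", data=df).fit()
    return m1, m2, m3
```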
Some researches indicated a positive correlation between the socioeconomic status of families and parents\u2019 educational aspirations , suggestSecondly, the direction and degree of the discrepancies between parents\u2019 educational aspirations and children\u2019s educational expectations significantly affected LBC\u2019s mental health. On the one hand, after controlling individual and family variables, LBC whose parents held higher aspirations were less healthy psychologically as compared with LBC whose expectations were the same as parents\u2019 aspirations. The results of some previous researches noted that the differences between parents\u2019 educational aspirations and children\u2019s educational expectations were beneficial to children\u2019s school performance in the early stage, especially when parents provided adequate supports and involvement as well as conveyed positive and confident values to their children . HoweverOn the other hand, the larger the discrepancies between parents\u2019 educational aspirations and children\u2019s educational expectations were, the less psychologically healthy LBC were. According to Higgins\u2019 self-discrepancy theory, every individual has three selves: the actual self, the ideal self and the ought-to self . Each seThirdly, academic self-efficacy played a partial mediating role between IV1 and LBC\u2019s depressive symptoms. In other words, when parents\u2019 educational aspirations were higher than their children\u2019s educational expectations, LBC held the most negative academic self-efficacy, and then LBC had the lowest mental health level; when parents\u2019 educational aspirations were the same as their children\u2019s educational expectations, LBC held most positive academic self-efficacy, and then LBC had the highest mental health level. The existing work of Spera et al. has demoLast but not least, this study also indicated the key role of mothers in the development of LBC\u2019s psychological well-being. Both the results of group differences analysis and hierarchical regression analysis demonstrated that LBC with only fathers\u2019 migration, namely looked after by their mothers, were the most psychologically healthy. In hierarchical regression analysis, the control variable mothers\u2019 education level significantly affected the mental health of LBC, which was consistent with findings of previous researches that mothers played an essential role in the growth of children ,49. SpecLimited by the questionnaires of CFPS 2018, the study had some limitations. Firstly, control variables were set from individual- and family-level according to the characteristics of questionnaires to examine effects of discrepancies between parents\u2019 educational aspirations and children\u2019s educational expectations on LBC\u2019s mental health level. However, the mental health of left-behind children in rural areas is also affected by many other factors, such as parents\u2019 divorce or death , school Despite these limitations, the study extended the specific dimensions of the differences between parents\u2019 educational aspirations and children\u2019s educational expectations (the direction and degree), and expanded the existing studies from the following two aspects: First, from the perspectives of educational aspirations and educational expectations, it emphasized the influence of differences in educational aspirations and educational expectations between different subjects on LBC\u2019s mental health. 
Second, previous researches have examined the relationship between children\u2019s educational aspirations/expectations and academic self-efficacy or the relationship between academic self-efficacy and children\u2019s depressive symptoms. Our findings demonstrated the pathway among educational aspirations/expectations, academic self-efficacy and mental health, that is, the direction of differences between parents\u2019 educational aspirations and children\u2019s educational expectations exerted an influence on LBC\u2019s self-efficacy, and then affected LBC\u2019s psychological well-being. Furthermore, this study provided critical evidence for the intervention practice of LBC\u2019s mental health. First, open communication between parents and LBC is very necessary. Parents should communicate with their children as much as possible to know children\u2019s actual academic performance and provide them with more information about employment, to narrow the aspirations-expectations gap and relieve LBC\u2019s psychological pressure. Second, educators should cultivate LBC\u2019s communication ability and problem-solving ability to improve their self-efficacy and self-confidence, so as to improve LBC\u2019s internal motivation. Finally, more measures should be taken to support LBC\u2019s psychological well-being. Professional psychological counseling, family training and learning guidance are suggested for LBC and their surrogate caregivers in rural China."} +{"text": "Atrial fibrillation (AF) is a risk factor for cognitive dysfunction. Although catheter ablation (CA) is one of the main treatments for AF, whether it can improve cognitive function in patients with AF remains unclear. We conducted a systematic review and meta-analysis to evaluate the cognitive outcome post-CA procedure.I2. A random-effects model was used to incorporate the potential effects of heterogeneity. The Newcastle-Ottawa Scale (NOS) was used to assess the methodological quality of each included study, and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) method was adopted to evaluate the quality of evidence.Two investigators independently searched the PubMed, EMBASE, Web of Science, CNKI, WanFang, and VIP databases from inception to September 2021 for all the potentially eligible studies. The outcomes of interest included dementia or cognitive disorder through scoring or recognized classification criteria. Heterogeneity was determined by using Cochrane's Q test and calculating the p = 0.003 I2 = 40%]. Significant differences were observed in the incidence of new-onset dementia ; the changes in the Montreal Cognitive Assessment (MoCA) score and Mini-Mental State Examination (MMSE) score and changes in cognitive function scores between the radiofrequency group and cryoballoon group . The NOS indicated that included studies were moderate to high quality, while the quality of evidence assessed by GRADE was low in 2 and very low in 2.Thirteen studies including 40,868 patients were included, among which 12,086 patients received AF ablation. Meta-analysis indicated that patients with AF ablation had a lower risk of dementia incidence in comparison to patients with AF without ablation [hazard ratio (HR): 0.60, 95% CI: 0.43 to 0.84, We analyzed the related cognitive outcomes after AF ablation. In the overall population, AF ablation had a positive trend for improving cognitive function at >3 months post-procedure. 
However, AF ablation might not be related to the improvement of cognitive function at < 3 months.https://www.crd.york.ac.uk/PROSPERO/, identifier: CRD42021285198. Atrial fibrillation (AF) is the most common of all sustained arrhythmia with a worldwide prevalence of around 46.3 million individuals in 2016, the majority of whom are older adults .There is increasing evidence pointing to dementia and cognitive disorder as additional adverse outcomes associated with AF. A recent meta-analysis showed that patients with AF had a 36% increased risk of developing dementia \u20139. OtherCatheter ablation (CA) represents the first-line therapy for treating symptomatic and drug-refractory AF . In addiOur systematic review and meta-analysis were reported according to the criteria outlined in the Meta-Analysis of Observational Studies in Epidemiology (MOOSE) and the PRISMA 2020 . This syTwo investigators (Peng-fei Chen and Deng Pan) independently and systematically searched the PubMed, EMBASE, Web of Science, CNKI, WanFang, and VIP databases from inception to 28 September 2021. The search MESH term and keywords used included \u201catrial fibrillation,\u201d \u201ccatheter ablation,\u201d \u201cradiofrequency ablation,\u201d \u201ccryoablation,\u201d \u201cdementia,\u201d \u201cdementia, vascular,\u201d \u201cAlzheimer's disease,\u201d \u201ccognitive dysfunction,\u201d \u201ccognition disorder,\u201d and \u201cmental status test.\u201d Detailed search strategies are shown in the Two investigators independently screened titles, abstracts, and full-text material to select studies that met the following eligibility criteria: (1) all participants with AF are > 18 years old, human, and without a dementia history. (2) Studies that included a group of patients with AF treated with AF ablation (including radiofrequency (RF) and cryoballoon (CY) ablation). (3) Outcomes of interest should include dementia or cognitive disorder through scoring or recognized classification criteria. (4) Observational studies or clinical trials with at least 3 months of the follow-up period were considered for inclusion. The abstracts, editorial, animal experiment, or review were excluded.Prespecified data variables were extracted independently by two investigators. General characteristics included the author, year, country, study design, sample size of participants, follow-up duration in months, history of stroke, and maximum adjusted covariates. Baseline characteristics included demographic data (age and gender), combined diseases , combined drugs (anticoagulant and antiplatelet), and CHA2DS2-VASc score. Baseline characteristics of pooled study populations were reported as median values and their interquartile ranges (IQRs).The methodological quality of the included studies was assessed according to the Newcastle-Ottawa Scale (NOS) with scoThe Grading of Recommendations Assessment, Development and Evaluation (GRADE) method was adopThe incidence of new-onset dementia including dementia Alzheimer's type, vascular dementia, senile dementia, frontotemporal dementia, dementia with Lewy bodies, and individual cognitive impairment reported in this study. The scales for evaluating cognitive function include Montreal Cognitive Assessment (MoCA) score, Mini-Mental State Examination (MMSE), and Telephone Interview for Cognitive Status-modified (TICS-m). 
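The pooling approach named in the abstract and detailed in the statistical-analysis passage that follows (random-effects pooling of hazard ratios with Cochran's Q and I²) can be sketched as a DerSimonian-Laird estimator. This is an illustrative implementation with made-up inputs; it is not the Review Manager 5.4 workflow the authors used.

```python
import numpy as np

def pool_random_effects(hr, ci_low, ci_high):
    """DerSimonian-Laird random-effects pooling of hazard ratios.

    hr, ci_low, ci_high: per-study HRs and 95% CI bounds (placeholder inputs,
    not the data of the included studies). Pooling is done on the log scale.
    """
    y = np.log(np.asarray(hr, dtype=float))
    # Back-calculate standard errors from the 95% CIs (z = 1.96).
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2

    # Fixed-effect estimate, Cochran's Q, and I^2.
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    dof = len(y) - 1
    i2 = max(0.0, (q - dof) / q) * 100 if q > 0 else 0.0

    # Between-study variance (tau^2) and random-effects weights.
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - dof) / c)
    w_re = 1.0 / (se**2 + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))

    pooled_hr = np.exp(y_re)
    ci = (np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re))
    return pooled_hr, ci, q, i2

# Illustrative call with three made-up studies (not the included trials):
# pool_random_effects([0.55, 0.70, 0.62], [0.40, 0.50, 0.45], [0.76, 0.98, 0.85])
```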
The reliable change index was used to analyze the neuropsychological testing scores and to identify postoperative neurocognitive dysfunction (POCD).p < 0.1 was considered with statistical heterogeneity), and I2 Statistics . We adopted a random-effect model for the meta-analysis because it incorporates the potential effects of heterogeneity and therefore allows for the retrieval of more generalizable results. Sensitivity analyses by removing one individual study at a time to confirm the robustness of the results. All statistical analyses were carried out using the Review Manager 5.4 software.Hazard ratios (HRs) with 95% confidence intervals (CIs) for the incidence of dementia were extracted from published data. If adjustments were made for HRs, the most adequately adjusted HRs were extracted. For dichotomous variables, risk ratios (RRs) with 95% CIs were calculated. Continuous variables were calculated and expressed as weighted mean differences (WMDs) or standard mean differences (SMDs). Heterogeneity was assessed by using the Cochrane Q statistics, , indicating moderate to high quality. Among these outcome indicators, the quality of evidence was low in 2 and very low in 2. Certainty assessment ratings and the summary of findings are presented in p = 0.003 I2 = 40%; p all < 0.05). We also conducted a meta-analysis of 4 studies .Three studies \u201326 evalu studies \u201327 by dip = 0.002 I2 = 0%; p > 0.05). Notably, the number of patients included in the Jin et al. study was much higher than in other studies.The changes from the baseline of the MoCA score were reported in 4 studies \u201330. A sip < 0.00001 I2 = 0%; p all < 0.05).The changes from the baseline of MMSE score were reported in 3 studies , 35, 36.p < 0.00001) at 12-month follow-up.One study that incTwo studies , 34 evalp = 0.09 I2 = 50%; p=0.008 I2= 6%; p >0.05).We grouped studies \u201330 that p>0.05) in cognitive function scores between the RF group and CY group and the National Natural Science Foundation of China (Grant No. 81904025).The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} +{"text": "A fusion protein of interleukin-4 and interleukin-10 (IL4-10 FP) wasdeveloped as a disease-modifying osteoarthritis drug (DMOAD), andchondroprotection, anti-inflammation, and analgesia have been suggested. Tobetter understand the mechanisms behind its potential as DMOAD, thissystematic narrative review aims to assess the potential of IL-4, IL-10 andthe combination of IL-4 and IL-10 for the treatment of osteoarthritis. Itdescribes the chondroprotective, anti-inflammatory, and analgesic effects ofIL-4, IL-10, and IL4-10 FP.PubMed and Embase were searched for publications that were published from1990 until May 21, 2021 (moment of search). Key search terms were:Osteoarthritis, Interleukin-4, and Interleukin-10. This yielded 2,479 hits,of which 43 were included in this review.in vitro and in vivo, as did IL4-10FP. Both cytokines showed anti-inflammatory effects, but alsoproinflammatory effects. 
Only in vitro IL4-10 FP showedpurely anti-inflammatory effects, indicating that proinflammatory effects ofone cytokine can be counteracted by the other when given as a combination.Only a few studies investigated the analgesic effects of IL-4, IL-10 orIL4-10 FP. In vitro, IL-4 and IL4-10 FP were able todecrease pain mediators. In vivo, IL-4, IL-10, and IL4-10FP were able to reduce pain.IL-4 and IL-10 showed mainly protective effects on osteoarthritic cartilageIn conclusion, this review describes overlapping, but also different modes ofaction for the DMOAD effects of IL-4 and IL-10, giving an explanation forthe synergistic effects found when applied as combination, as is the casefor IL4-10 FP. Cartilage, bone, and synovial tissue show prominent structural changes in OA,and pain is the most important symptom and the reason for patients with OA to seekmedical assistance. An ideal OA treatment not only reduces symptoms but alsoprevents further structural damage by combining chondroprotective,anti-inflammatory, and analgesic effects all in 1 disease-modifying OA drug (DMOAD).None of the current potential DMOADs have yet been approved for the treatment of OAby regulatory authorities worldwide. As criteria for approval of a DMOAD, a drugneeds to provide both structural and clinical improvement.3 The persisting effort over thepast years into development of new DMOADs has generated promising leads andcandidate therapies. One of these promising approaches is the usage of anabolicstimuli.Osteoarthritis (OA) is a progressive joint disease characterized by changes inmultiple joint tissues, leading to pain, stiffness, and loss of function.5 The IL-10 receptor is composedof 2 subunits, IL-10R1 and IL-10R2, and is also expressed on immune and nonimmune cells. In the osteoarthriticjoint, increased expression of both IL-4 and IL-10 receptors has been demonstrated.For example chondrocytes express the IL-10R7 and both types ofIL-4R.7An overview of the expression of IL-4, IL-10, and their receptors (IL-4R and IL-10R)in the healthy and osteoarthritic joint is provided in Regulatory cytokines such as interleukin (IL)-4 and IL-10, well known for theiranti-inflammatory activity, are also anabolic stimulating cytokines that areproduced by a variety of immune cells. IL-4 acts via 2 types of heterodimeric IL-4receptors (IL-4R), expressed on numerous cell types, both immune and nonimmunecells. Type 1 consists of the IL-4R\u03b1 subunit and the common \u03b3 chain (\u03b3c), whereastype 2 consists of IL-4R\u03b1 and IL-13R\u03b11 chains.In vivo,IL-4 influences proteoglycan metabolism by inhibition of matrix metalloproteinases(MMPs) and prevents apoptosis of chondrocytes and fibroblast-like synoviocytes. IL-10 stimulates the synthesis of collagen type II and aggrecan, 2 importantproteins in the extracellular matrix (ECM) of cartilage; affects proteoglycanmetabolism; reduces MMPs; and, like IL-4, prevents apoptosis of chondrocytes.Importantly, IL-4 and IL-10 have chondroprotective effects. in vitro andin vivo, IL-10 was shown to increase Fc receptors on myeloidcells in the circulation of RA patients treated with IL-10. Nonetheless, in experimental in vitro and invivo models, IL-10 (and IL-4) strongly prevented inflammation-inducedcartilage degeneration, with combining both cytokines having additive effects. IL-10also directly stimulated proteoglycan synthesis in cartilage explants. 
Moreover, in psoriatic arthritis, significant immune modulation was foundafter subcutaneously administered IL-10; however, no beneficial effects were foundon clinical manifestations.IL-4 and IL-10 have been tested for their anti-inflammatory and chondroprotectiveeffects for the treatment of rheumatoid arthritis (RA); however, results wereinconsistent. IL-10 has some proinflammatory properties, including stimulation ofB-cell activity and upregulation of Fc receptors on antigen-presenting cells that incertain conditions could counteract its strong anti-inflammatory properties. Thisalong with the short half-life of IL-10 has been suggested as potential explanationsfor the somewhat disappointing results. Both and enhanced bioavailability.Recently, the interest in IL-4 and IL-10 as therapeutics has been fueled by thedevelopment of a fusion protein of IL-4 and IL-10 (IL4-10 FP) to promote efficacy ofthe individual cytokines by facilitating synergy, promoting unique signaling protected against blood-induced cartilage damage and inhibited production ofproinflammatory cytokines in vitro,18 attenuated cartilage damagebut not synovial inflammation in a mice model of hemophilic arthropathy, and reduced disease severity in established experimental arthritis in mice.Since its development, the effects of IL4-10 FP have been evaluated in multiplestudies and joint diseases. IL4-10 FP reversed persistent inflammatory pain inmultiple mouse models, indicating OA cartilage may become more responsive for the effects of IL4-10FP. Studies evaluating the efficacy of IL4-10 FP in OA indeed show promisingresults. IL4-10 FP has chondroprotective and anti-inflammatory effects in OAcartilage explants and chondroprotective and analgesic effects in a canine OA model, as well asin a rat OA model.20Compared with healthy cartilage, OA cartilage expresses increased levels of IL-4R and IL-10R,This systematic narrative review aims to assess the potential of IL-4, IL-10, and thecombination of IL-4 and IL-10 for the treatment of OA. It describes thechondroprotective, anti-inflammatory, and analgesic effects of IL-4, IL-10, andIL4-10 FP to better understand the mechanisms behind its potential as a DMOAD.supplementary file 1). Duplicates were removed and the remainingarticles were screened based on title and abstract by 2 reviewers (E.H. andE.M.H.). Disagreements between reviewers were resolved by consensus including athird reviewer (S.C.M.). Eligibility criteria were English language andavailability of full text. All study types, describing direct effects of IL-4,IL-10 or IL4-10 FP on OA joint tissues or patient-reported outcome measures,were included. Reviews were excluded, but their reference lists were checked foradditional articles.PubMed and Embase were searched for publications that were published from 1990until May 21, 2021 (moment of search). Key search terms were: Osteoarthritis,Interleukin-4, and Interleukin-10 toextract information on (1) cytokine of interest, (2) experimental setup, and (3)chondroprotective, anti-inflammatory, and/or analgesic effects described.Remaining articles were grouped and described per DMOAD characteristic.n = 788), 1,691 articles remained. Selection on title/abstractled to an additional 1,387 exclusions for several reasons: not written in English(n = 51), no full text available (n = 94),wrong publication type , not about OA or OA only used as control group(n = 654), and IL-4, IL-10, or IL4-10 FP not used asintervention (only as outcome measurement) (n = 378). 
Referencelists of excluded reviews fetched two more articles eligible for inclusion, whichled to a total of 306 articles for screening of full text. Of the 306 articles, 263did not describe an effect of IL-4 or IL-10 on OA joint tissues or patient-reportedoutcome measures and were excluded. The remaining 43 articles were included in thisreview .The initial search yielded 2,479 results. After removal of duplicates of human monocyte\u2013derivedmacrophages stimulated with IL-4 (M(IL-4)) did not affect GAG release andexpression of ADAMTS-4 and ADAMTS-5, COL2A1, MMP-1, or MMP-13.In unstimulated human OA cartilage explants, IL-4 had no effect onglycosaminoglycan (GAG) release or on expression of enzymes that can induce GAGrelease .23 increased the messenger RNA (mRNA) expression of collagentype II,2426 and lowered the releaseof total collagen. This was confirmed by immunohistochemistry. In addition, IL-4 increased the expression of proteoglycans by enhancing synthesis and reducing release. More specifically, IL-4 normalized the IL-1\u03b2 and tumor necrosisfactor-alpha (TNF\u03b1)-induced reduction of aggrecan mRNA expression,25 reducedmRNA expression of cathepsin B after cyclic tensile stress, and reduced ADAMTS-4 mRNA expression, but without effect on protein expression. IL-4 also reduced the synthesis of MMPs at mRNA and proteinlevels,2932 and inducedCBP/P300-interacting transactivator 2 (CITED2), which downregulates MMP-13. Knockdown of IL-4 in IL-1\u03b2-stimulated chondrocytes decreased cellviability and increased apoptosis, indicating an essential role of IL-4 in regulating survival ofchondrocytes. IL-4 reduced IL-1\u03b2-induced proliferation of dedifferentiatedchondrocytes, but not primary chondrocytes.27In chondrocytes stimulated with inflammatory cytokines, IL-4 treatment hadpositive effects on collagen levels. IL-4 reduced collagenaseproduction, This protective effect of IL-4 was confirmed in rat chondrocytes exposedto cyclic tensile stress, where IL-4 decreased the MMP-13 synthesis aftermechanical loading. Furthermore, neutralizing IL-4 antibodies blocked the positive effects ofmechanical stimulation (increased aggrecan and decreased MMP-13) in normalchondrocytes but not in OA chondrocytes, suggesting an important role of IL-4 inthe anabolic response to mechanical stimulation in healthy cartilage, but not indiseased cartilage.In human OA cartilage explants that were exposed to mechanical compression, IL-4protected the explants against histological degeneration and it increased thenumber of transcription factor SOX9-expressing chondrocytes compared withuncompressed cartilage explants. IL-4 did not affect SOX9 mRNA expression.in vivo rat model for OA, induced by anterior cruciateligament tear and medical meniscectomy, IL-4-transfected spheroids ofmesenchymal stem cells (MSCs) reduced chondrocyte apoptosis, signs ofhistological cartilage degeneration, and MMP-13 expression in cartilage tissue. In an IL-4 knockout mice model exposed to treadmill running, CITED2 mRNAand protein levels in cartilage tissue were lower compared with wild-type mice,while MMP-13 levels were slightly higher, confirming that CITED2 is a pivotal downstream molecule in IL-4-mediatedMMP-13 reduction. Finally, intra-articular injection with IL-4 inhibitedcartilage destruction in 2 surgically induced OA models.35In an in vivo animal models.In conclusion, IL-4 showed no chondroprotective effects in unstimulated OAcartilage explants. 
However, IL-4 has chondroprotective effects oncytokine-stimulated chondrocytes, mechanically stimulated human cartilageexplants, and multiple but an adenoviral vector overexpressing IL-10 slightly increased MMP-13 expression. In addition, in unstimulated human OA cartilage explants, M(IL-10) MCMhad no effect on COL2A1 expression. In line with this, overexpression of IL-10 in bone marrow mesenchymalstem cells (BM-MSCs) with Adeno-Associated Virus (AAV) IL-10 did not affect GAGrelease or content.In unstimulated chondrocytes, IL-10 did not affect MMP-13 (which cleaves collagentype II and was measured by collagenase 3) levels, Nevertheless, in isolated chondrocytes stimulated with inflammatorycytokines, IL-10 overexpression upregulated collagen type II and aggrecan,37 anddownregulated collagen type X mRNA and MMP-13 expression. Recombinant (r)IL-10 also impaired MMP-13 expression, but had no effecton TNF\u03b1-mediated aggrecan expression.In contrast, BM-MSCs reduced MMP-13 expression in IL-1\u03b2/TNF\u03b1-stimulated cartilageexplants, but BM-MSCs transduced with AAV null had the same effect, suggestingthat the positive effect is not related to IL-10 expression.39 In addition, IL-10 restored COL2A1 expression andincreased GAG content and mRNA expression of aggrecan and transcription factor SOX9.IL-10 reduced injury-induced chondrocyte apoptosis, COL10A1 and COL1A1expression, GAG release, MMP-13 synthesis, and ADAMTS-4 mRNAexpression.In vivo, in a murine collagenase-induced OA model, human MSCsoverexpressing viral IL-10, a product of Epstein-Barr virus that exhibits 84%amino acid sequence homology with human IL-10, reduced the percentage of clusterof differentiation (CD)4+ and CD8+ T-cells in popliteal lymph nodes; however,histologically, no effects on cartilage were found. In a rabbit model, the combination of IL-10 and IL-1Ra gene therapymarkedly reduced cartilage pathology and decreased proteoglycan loss, and hadgreater chondroprotective effects than either of these cytokines alone.These data indicate that like IL-4, IL-10 had no effect in unstimulatedchondrocytes and explants, with one study reporting an increase in MMP-13expression after treatment with an adenoviral vector overexpressing IL-10.However, IL-10 had chondroprotective effects in cytokine-stimulated chondrocytesand cartilage explants, and apoptosis of injury-induced chondrocytes. Moreover, IL4-10 FP increased proteoglycan synthesis and reducedproteoglycan release in OA cartilage explants,19 and normalized thereduced proteoglycan content in an in vivo canine OAmodel.20 IL4-10 FP reduced MMP-3 expression, but slightlyincreased MMP-1 expression in OA chondrocytes, whereas in synovial fibroblastsboth were reduced. The release of tissue inhibitor of metalloproteinases was not affected by IL4-10 FP.Studies in which the fusion protein of both cytokines (IL4-10 FP) was usedrevealed that IL4-10 FP normalized the OARSI cartilage structural damage scorethat was increased in the canine or rat OA Groove model.n = 35) reported on anti-inflammatoryeffects of both cytokines in OA models on multiple levels.Even more articles ( IL-4 increased IL-6 and chemokine (C-C motif) ligand (CCL2) production in OA synovial fibroblasts, indicating some proinflammatory effects in OAtissue. However, IL-4 also antagonized interferon gamma (IFN\u03b3)-inducedexpression of intercellular adhesion molecule 1 and promoted the expression of the anti-inflammatory cytokine IL-1Ra. 
Besides, IL-4 inhibited expression of cyclooxygenase 2 (COX-2), the mainenzyme for prostaglandin (PGE) E2 production after TNF\u03b1 stimulation, and inhibited IL-1\u03b2-induced proliferation and PGE2 production of synoviocytes. IL-4 reduced IL-1\u03b2 and TNF\u03b1 production inlipopolysaccharide (LPS)-stimulated or leukotriene B4 (LTB4)-stimulated OA synovium, but had no effect on these cytokines in unstimulated tissue.IL-4 may exert anti-inflammatory action because it is able to compromise thebinding of TNF\u03b1 to its cell-surface receptors in synovial fibroblasts. IL-4mildly upregulated cell-surface TNFR, but in addition increases the level ofsoluble TNFR-75, competing with cell-surface TNFR for binding of TNF\u03b1.2 expression,4749 suggesting a PLA2 andeicosanoid-independent mechanism of action. Indeed, addition of the nitric oxide synthesis (NOS) inhibitor \u03b9-NIOabolished the effects of IL-4, suggesting an NO-dependent mechanism of IL-4 inthe downregulation of IL-1\u03b2-induced PGE2, but no direct effect of IL-4 on NO production was found in primary humanOA chondrocytes.Next to cells from the synovial tissue, chondrocytes are able to produceinflammatory mediators as well. In unstimulated human OA chondrocytes, IL-4 hadno effect on levels of arachidonic acid, phospholipase A2 (PLA2), COX-2,microsomal PGE synthase (mPGES)-1, or PGE2450 although no effect wasfound on the IL-17-induced NO production in human OA chondrocytes. IL-4 inhibited CCL5, CCL3, and CCL4 mRNA and protein expression, but didnot affect C-X-C motif chemokine (CXCL) 1 and CXCL8 expression. Knockdown of IL-4 led to an increase in TNF\u03b1, IFN\u03b3, and IL-6 inIL-1\u03b2-stimulated chondrocytes.IL-4 inhibits NO production in chondrocytes stimulated with proinflammatorycytokines, Similarly, rIL-4 suppressed the mechanical stress\u2013induced iNOS mRNA andNO expression in rat chondrocytes.Pretreatment of rat chondrocytes with IL-4 reduced mechanical stress\u2013inducedexpression of IL-1\u03b2.2 release30 andreduced the production of IL-1\u03b2, TNF\u03b1, IL-6, and IL-8.30 Furthermore, IL-4upregulated IL-1Ra30 and insulin-like growth factor 1 30 and reduced theexpression of binding protein and receptors for IGF-1. Besides, neutralizing IL-4 prevented downregulation of NO.Transfection of IL-1\u03b2/TNF\u03b1-stimulated canine chondrocytes with IL-4 gene therapyusing a COX-2 or cytomegalovirus (CMV) promotor reduced COX-2 and mPGES-1expression, and with that downregulated PGE Intra-articular administered IL-4 MSC spheroids reduced the expression ofthe inflammatory mediators Traf6 and Tlr4.Intra-articular injection with rIL-4 in a rat OA model decreased the populationof NT-positive chondrocytes (a measurement for NO mediated tissue damage).invivo models, IL4 had anti-inflammatory effects.Altogether, in the majority of studies, IL-4 had anti-inflammatory effects oncartilage and synovium, yet some proinflammatory effects in synovial fibroblasts were also reported. In unstimulatedchondrocytes, IL-4 did not have any detectable effects, while in stimulatedchondrocytes and and another study reported less synovial changes in a murinecollagenase-induced OA model after injection with humans MSCs overexpressing vIL-10.Two studies evaluated the effects of IL-10 on histological synovitis. One studyreported no effect of IL-10 gene therapy on histological synovitis in a rabbitOA model, In synovial fibroblasts, IL-10 increased the expression of theanti-inflammatory human leukocyte antigen G and inhibited the expression of COX-2. 
In contrast, IL-10 had an opposing stimulatory effect on the expressionof proinflammatory cytokines IL-1\u03b2 and TNF\u03b1 in LPS-stimulated or LTB4-stimulated OA synovium. In 3-dimensional synovial micromasses generated from primary synovialcells from OA patients who were stimulated with LPS, TNF\u03b1, or IL-1\u03b2, rIL-10induced suppressor of cytokine signaling (SOCS) 3 and reduced LPS-induced IL-1\u03b2and TNF\u03b1 expression. In synovial fluid, IL-10 suppressed proliferation and IFN\u03b3 expression ofautologous T-cells.Like IL-4, IL-10 is also able to compromise the binding of TNF\u03b1 to itscell-surface receptors in synovial fibroblasts. IL-10 increases the level ofsTNFR-75 and reduces the expression of cell-surface TNFR. Similarly, in cartilage explants stimulated with IFN\u03b3 and TNF\u03b1, tosimulate inflammation, M(IL-10) MCM increased NO production. The BM-MSCs overexpressing IL-10 (using adenovirus or lentivirus) reducedIL-1\u03b2 and IL-6 expression in IL-1\u03b2/TNF\u03b1-stimulated cartilage explants. Again, BM-MSCs transfected with AAV null had the same effects, suggestingthat the anti-inflammatory effect is also not mediated by IL-10. Indeed, overexpression of IL-10 using an adenoviral vector did notinfluence IL-1\u03b2 or IL-6 production, but did, however, increase TNF\u03b1 production. Similar to IL-4, IL-10 had also no effect on levels of arachidonic acid,PLA2, COX-2, mPGES-1, or PGE2 expression in unstimulated human OAchondrocytes.4749In unstimulated OA cartilage samples, M(IL-10) MCM increased IL-1\u03b2 and SOCS1.37 IL-10 had no effect on IL-1\u03b2 or IL-17-induced NO production in chondrocytes.In contrast to unstimulated chondrocytes, overexpression of IL-10 reduced IL-1\u03b2and IL-6 expression in IL-1\u03b2-stimulated chondrocytes.In bovine cartilage explants, IL-10 did not affect basal NO expression, but itdid reverse the increase in NO and NOS2 mRNA expression after mechanical injury.IL-10 gene therapy using a CXCL10 promoter reduced IL-1\u03b2 and IL-6 expression inthe previously mentioned synovial micromasses.Overall, IL-10 has some proinflammatory effects in synovial fibroblasts andcartilage samples, but anti-inflammatory effects when chondrocytes werestimulated.in vitro, and canine IL4-10 FP reduced TNF\u03b1 production in LPS-stimulated caninewhole blood cultures. In in vivo OA models, these effects were less clear,mainly due to the usage of noninflammatory OA models.20The fusion protein of IL-4 and IL-10 reduced the release of IL-6 and IL-8 by OAsynovial tissue and cartilage explants which was increased after compression. In a rat OA model, intra-articularimplantation of IL-4 MSC spheroids reduced the expression of Scn3a and TRPV1, 2pain-related ion channels, in the spinal cord.In vitro, IL4-10 FP inhibited the release of VEGF and nervegrowth factor, 2 pain-related mediators, from OA synovial tissue. In OAcartilage, only VEGF was significantly inhibited.Only 3 studies reported on the effects of IL-4 and/or IL-10 on pain mediators.IL-4 could not restore vascular endothelial growth factor (VEGF) expression inhuman cartilage explants,in vivo rat model for OA induced by anterior cruciateligament tear and medical meniscectomy, IL-4 MSC spheroids decreased mechanical allodynia. IL-10 knockout (IL-10\u2212/\u2212) mice develop stronger signs of painsuch as thermal hyperalgesia and mechanical allodynia after chemically inducedOA (monoiodoacetate model [MIA]) compared with control mice. 
Moreover, dorsalroot ganglia were destructed in MIA-injected IL-10\u2212/\u2212 mice, whilenormal morphology was maintained in MIA-injected control mice. These resultssuggest that IL-10 deficiency exacerbated pain progression. In companion dogs with naturally occurring OA intra-articular injectionwith IL-10, encoding plasmid DNA decreased pain as measured by a visual analogue scale.In an 19 In the rat Groove model,IL4-10 FP had a transient analgesic.In animal models, joint loading is often used as a surrogate for pain, where morejoint loading indicates less joint pain. In the canine Groove model,intra-articular injections with IL4-10 FP led to increased joint loading , which lasted for approximately 1 day.in vivo but likely notthrough VEGF, as VEGF expression in human cartilage explants was unaffected.IL-10 and IL4-10 FP reduced OA-associated pain behaviors invivo. IL4-10 FP reduced the expression of 2 pain mediators (VEGFand NGF) in vitro.In summary, IL-4 has analgesic effects In addition, some studies reported proinflammatory effects, despite IL-4 andIL-10 being anti-inflammatory cytokines. However, the IL4-10 FP showed purelyanti-inflammatory effects, suggesting that by combining both cytokines into onetreatment, the anti-inflammatory effects of one cytokine can counteract the possibleproinflammatory effects of the other, and vice versa. It was shown previously thatby using the IL4-10 FP, indeed the adverse effects of IL-10 are prevented by IL-4. Also, in blood-induced cartilage damage, the combination of IL-4 and IL-10has advantages over IL-10 monotherapy,57 confirming the beneficialeffects of combining both cytokines.This systematic narrative review describes the chondroprotective, anti-inflammatory,and analgesic effects, the three pillars of a successful DMOAD, of theanti-inflammatory cytokines IL-4 and IL-10, and the IL4-10 FP. In general, bothcytokines show promising effects on all 3 outcomes, as did the IL4-10 FP. Regardingchondroprotection, multiple studies describe a lack of effect of the separatecytokines, and 1 study reported negative effects of rIL-10 (increased MMP-13).58 Indeed, the IL4-10 FP was more effective in treating persistentinflammatory hyperalgesia in an in vivo mice model, compared withIL-4 or IL-10, as well as to the combination of both separate cytokines. Moreover, IL4-10 FP inhibited inflammatory mediator-induced neuronalsensitization more effectively than the combination of both separate cytokines. Mechanistically, IL4-10 FP clustered IL-4Ra and IL-10Ra, whereas thecombination of cytokines did not. This unique receptor clustering caused activationof an signaling cascade that strongly differed from that induced by the combinationof cytokines, possibly explaining the superior effects of the IL4-10 FP. Moreover, by combining both cytokines into one molecule, the molecular weightincreases, leading to improved bioavailability. Sialylation of the IL4-10 FPincreased the molecular weight even further and resulted in higher half-lifecompared with the half-life of both cytokines alone.Next to the advantage of counteracting each other\u2019s possible adverse effects, theIL4-10 FP also provides the possibility of synergy between IL-4 and IL-10. Bothcytokines act via a different intracellular pathway, leading to additiveanti-inflammatory effects.3Despite many attempts, leading to promising results in phase II and III trials, there is still no approved DMOAD available. 
TissueGene-C showed chondroprotective and analgesic effects in a rat OA model,accompanied by increased IL-10 expression, suggestive for anti-inflammatory effects. In humans, intra-articular injection with TissueGene-C led to less structural progression. In addition, in a phase III trial, symptomatic benefits were found. A noninferiority trial comparing diacerein (inhibitor of IL-1\u03b2) and celecoxib(a selective COX-2 inhibitor) showed that diacerein was noninferior to celecoxibregarding symptomatic benefit and showed a good safety profile. A phase III trial evaluating the effects of tocilizumab in hand OA showedsimilar pain relief in the tocilizumab group compared with the placebo group.The Food and Drug Administration (FDA) and European Medicines Agency (EMA) require aDMOAD to slow down joint space narrowing on x-rays and relieve clinicalsymptoms.in vivo mice models, but other pillars of a DMOAD therapy still need to be investigated. A fusionprotein of tumor growth factor-\u03b2 and latency-associated peptide (LAP-MMP-mTGF-\u03b23)showed promising results in a rat OA model. This LAP has also been used for TIMP-3. These fusion proteins are developed to reduce side effects of the activecomponent, rather than combining 2 active components . This also accounts for the HB-NC4 fusion protein, whichhad been developed to overcome the main limitation of NC4 therapy, namely, targeting ability. In contrast, the IL4-10 FP is the only fusion protein designed to combine theeffects of 2 active signaling components.Multiple fusion proteins are developed in the search for a DMOAD as well. TheOSCAR-Fc protein (fusion between osteoclast-associated receptor and the Fc part ofhuman immunoglobulin 1) reduced cartilage damage in 2 and for Vitamin D, a phase IV trial is being conducted in OA. For now, the IL4-10 FP is one of the few compounds which has shown promisingresults on all 3 aspects of ideal DMOAD therapy in in vitro andin vivo animal models, although not in 1 model. Still, a lot ofwork is needed to develop the IL4-10 FP as a DMOAD, suitable for clinical practice.At present, in OA, the IL4-10 FP is envisioned to be used as intra-articulartreatment, to reduce the possibility of systemic side effects,20 and with thatis not usable for OA in smaller joints or in patients with polyarthritis. A majordrawback of direct intra-articular application as applied so far is the rapidclearance out of the joint cavity due to their relatively low molecular weight,leading to only relative short transient effects and the necessity of weeklyinjections. More sustained effects on one injection, or other delivery routes needto become key in future studies to ensure clinical application.The search for a successful DMOAD is continuing, and the Wnt/\u03b2-catenin signalingpathway inhibitor Lorecivivint, the bisphosphonate zoledronic acid, and multipleanti-inflammatory agents successful in the treatment of RA are tested in phase IIItrials in OA patients,20 The IL4-10 FP consists of 2 anti-inflammatory cytokines, and inthat line of thought, it seems reasonable to assume that patients with an importantinflammatory component as underlying mechanism for OA will most likely benefit fromIL4-10 FP treatment.At the same time, the concept of OA being a heterogeneous disease existing ofmultiple phenotypes is getting more and more attention, and future studies areneeded to decide which patients should form the ideal target population for IL4-10FP treatment . 
The canine and rat Groove models of OA mimic early post-traumatic OA, and inflammation was too mild to evaluate anti-inflammatory effects.

Supplemental material, sj-docx-1-car-10.1177_19476035221098167 for The Role of Interleukin-4 and Interleukin-10 in Osteoarthritic Joint Disease: A Systematic Narrative Review by E.M. van Helvoort, E. van der Heijden, J.A.G. van Roon, N. Eijkelkamp and S.C. Mastbergen in CARTILAGE

Supplemental material, sj-docx-2-car-10.1177_19476035221098167 for The Role of Interleukin-4 and Interleukin-10 in Osteoarthritic Joint Disease: A Systematic Narrative Review by E.M. van Helvoort, E. van der Heijden, J.A.G. van Roon, N. Eijkelkamp and S.C. Mastbergen in CARTILAGE"} +{"text": "In their letter, Woodall et al. proposed possible problems with our 3-stage classification. We conducted the following analysis to assess the rationality of the 3-stage classification that they were critical of in the letter. The mean (± standard deviation) CUSUM value in the 3 phases was 329.77 ± 182.76, 641.49 ± 18.81 and 348.91 ± 189.40, respectively.

Considering their proposition that the average duration of surgery presents a linear trend of decreasing surgical duration, we introduced a simple linear regression model to explore the change in learning time while considering the variables in the corresponding table. This model yielded a coefficient of determination of R2 = 0.420 after adjustment, indicating that only 42.0% of the change in surgical duration could be explained by the model. By contrast, the coefficient of determination (R2 = 0.894) of the learning time curve fitted with the CUSUM analysis was greater and is thus an improvement over the simple linear regression model. In addition, the linear regression model requires independence of the observed objects.

Regarding whether stability in the surgical duration was achieved, we can see from the CUSUM value that in the final stage, the duration of surgery began to decrease after the doctors had become fully familiar with the procedure. There was no difference between the duration of the 86th procedure and the mean surgical duration, indicating that after 86 procedures the duration of surgery was stable.

Finally, they mentioned that adding any constant (like grouping by 30) to the surgical duration would not affect the CUSUM value. An increase in the constant would only affect the mean value and not the variability. We agree with them in this respect. However, this property of the CUSUM analysis does not affect the results and conclusions of this study."} +{"text": "Human alkaline ceramidase 3 (ACER3) is one of three alkaline ceramidases (ACERs) that catalyze the conversion of ceramide to sphingosine. ACERs are members of the CREST superfamily of integral-membrane hydrolases. All CREST members conserve a set of three Histidine, one Aspartate, and one Serine residue. Although the structure of ACER3 was recently reported, catalytic roles for these residues have not been biochemically tested. Here, we use ACER3 as a prototype enzyme to gain insight into this unique class of enzymes. Recombinant ACER3 was expressed in yeast mutant cells that lack endogenous ceramidase activity, and microsomes were used for biochemical characterization.
Six-point mutants of the conserved CREST motif were developed that form a Zn-binding active site based on a recent crystal structure of human ACER3. Five point mutants completely lost their activity, with the exception of S77A, which showed a 600-fold decrease compared with the wild-type enzyme. The activity of S77C mutant was pH sensitive, with neutral pH partially recovering ACER3 activity. This suggested a role for S77 in stabilizing the oxyanion of the transition state. Together, these data indicate that ACER3 is a Zn Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes**********\u00a0 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes**********\u00a0 3. Have the authors made all data underlying the findings in their manuscript fully available?PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified. The Reviewer #1:\u00a0NoReviewer #2:\u00a0Yes**********\u00a0 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes**********\u00a0 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1:\u00a0This is a very nice article in which the mechanism of hydrolysis of ceramides by ACER3 is elucidated. Importantly, a similar mechanism does like operate for other members of the CREST superfamily of integral-membrane hydrolases. The authors use microsomes from yeast mutant cells without endogenous ceramidase activity to show the effect of ACER3 mutants of the enzyme activity. Interestingly, the finding that trichostatin A inhibits ACER3 strongly suggest that medicinal chemistry efforts on trichostatin A structure may render derivatives highly selective for ACER3 over other Zn+2-dependent enzymes.The article is essentially publishable as it is, but I have an important comment: In Fig S3, NBD-C12-PHC does not seem to be pure, which is important in the context of the article. Why are they so many peaks in the HPLC chromatogram? With such an impure substrate, kinetic analyses afford wrong numbers.Minor comments.1. 
In the Abstract, I would say \u201cConsistent with this mechanism, ACER3 was specifically inhibited by the HDAC inhibitor trichostatin A, a strong zinc chelator.2. In the methods section, replace extraction buffer by extraction solvent, since a mixture of chloroform/methanol is not a buffer.3. In the discussion, line 3: \u201cThe activated water molecule undergoes a nucleophilic attack on the ceramide amide bond, resulting in an oxyanion bound to a tetrahedral carbon\u201d.4. Throughout the article, replace hydroxymate by hydroxamate5. Figure 4. Panel A. It would have been nicer to run a dose response curve and calculate the IC50 for trichostatin A, but, as this would not change the conclusions of the article, it is not strictly necessary to run this experiment. Please do statistical analysis of the TCA bars.6. Figure 4. Panel B. Two more TCA concentrations (i.e. 30 and >60 muM) would have been nice to see how Km and Vmax change with the inhibitor concentration and further support the type of inhibition. Again, this is not strictly necessary, but it would make a better paper.7. Fig S1C: More points between 10 and 100 pM C12-FA are necessary to have a solid calibration line.Reviewer #2:\u00a0In this study, the author investigated the catalytic mechanism of human alkaline ceramidase 3 (ACER3) by focusing three histidine, one aspartate, and one serine residue, which are suggested to form a metal-dependent active site for lipid hydrolysis and conserved in the CREST superfamily. As a result, Ala mutation of the His and Asp residues completely abolished the activity of ACER3, and mutation of Ser to Ala or Cys showed a significant decrease in activity. The authors suggested that the Ser residue is involved in stabilization of amide bond of ceramide to facilitate the catalytic reaction of ACER3. Furthermore, it was shown that trichostatin A (TSA), an inhibitor of HDAC class I / II, inhibits the activity of ACER3.Major points1) In this study, the authors newly established HPLC assay for detection of ceramidase activity. The author described \u201cThis newly established method was extremely sensitive and could quantitate NBD-FA levels at far lower levels than conventional TLC-based method. (page 5 line 35-37)\u201d. However, considering that the reader will use this method in the future, a diagram should be presented that specifically shows how sensitive the detection of ceramidase activity by the HPLC method is compared to TLC.2) The authors evaluated the activity of ceramidase only by hydrolysis reaction of ceramide. Alkaline ceramidase also catalyzes the reverse hydrolysis reaction, thus, it would be better if the authors examine the effect of point mutations on reverse hydrolysis reaction activity.3) The author described \u201cThis is indicative of an uncompetitive mechanism of inhibition .\u201d; however, Figure 4B requires a Lineweaver\u2013Burk plot or similar plot with multiple concentrations of inhibitor. In addition, IC50 value of TSA is also required.**********\u00a0 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.If you choose \u201cno\u201d, your identity will remain anonymous but your review may still be made public.Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1:\u00a0NoReviewer #2:\u00a0Nohttps://pacev2.apexcovantage.com/. 
PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at\u00a0figures@plos.org. Please note that Supporting Information files do not need this step. While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool,\u00a0 28 Jun 2022Reviewer#1:1. In the Abstract, I would say \u201cConsistent with this mechanism, ACER3 was specifically inhibited by the HDAC inhibitor trichostatin A, a strong zinc chelator.- It has been revised as suggested.2. In the methods section, replace extraction buffer by extraction solvent, since a mixture of chloroform/methanol is not a buffer.- It has been replaced as suggested.3. In the discussion, line 3: \u201cThe activated water molecule undergoes a nucleophilic attack on the ceramide amide bond, resulting in an oxyanion bound to a tetrahedral carbon\u201d.- We have revised the sentence.4. Throughout the article, replace hydroxymate by hydroxamate- They have been replaced.5. Figure 4. Panel A. It would have been nicer to run a dose response curve and calculate the IC50 for trichostatin A, but, as this would not change the conclusions of the article, it is not strictly necessary to run this experiment. Please do statistical analysis of the TSA bars. Two more TSA concentrations (i.e. 30 and >60 muM) would have been nice to see how Km and Vmax change with the inhibitor concentration and further support the type of inhibition. Again, this is not strictly necessary, but it would make a better paper.- We have performed ACER3 activity assays in the presence of different concentrations of TSA and determined the effects of TSA on the Km and Vmax of ACER3 and the IC50 value of TSA.6. Fig S1C: More points between 10 and 100 pM C12-FA are necessary to have a solid calibration line.- Two more points (20 pM and 50 pM) have been added and the plot was recalculated.7. In Fig S3, NBD-C12-PHC does not seem to be pure, which is important in the context of the article. Why are they so many peaks in the HPLC chromatogram? With such an impure substrate, kinetic analyses afford wrong numbers.- NBD-C12-PHC was pure. The peaks may represent the other products resulting from the action of other enzymes as microsomes not purified ACER3 protein were used for enzymatic reactions. Reviewer#2:1. In this study, the authors newly established HPLC assay for detection of ceramidase activity. The author described \u201cThis newly established method was extremely sensitive and could quantitate NBD-FA levels at far lower levels than conventional TLC-based method. (page 5 line 35-37)\u201d. However, considering that the reader will use this method in the future, a diagram should be presented that specifically shows how sensitive the detection of ceramidase activity by the HPLC method is compared to TLC.- We agree that we need to be more careful when describing the newly established method. From the supplementary data , we confirmed that the HPLC assay can measure fM of C12-FA (1.6 LU). This can\u2019t be achieved by most TLC-based methods including ours. The previous review paper had nicely introduced the pros and cons of the TLC-based method. Based on the paper, the limitation of fatty acid detection by TLC is around 3 ng-100 ng. 
However, our setting can detect up to 0.037ng of NBD-C12-FA. We added another supplementary figure to compare the TLC and HPLC-based methods in terms of sensitivity and quantification. 2. The authors evaluated the activity of ceramidase only by hydrolysis reaction of ceramide. Alkaline ceramidase also catalyzes the reverse hydrolysis reaction, thus, it would be better if the authors examine the effect of point mutations on reverse hydrolysis reaction activity.- Our unpublished data indicate that distinct from the yeast alkaline ceramidases, mammalian alkaline ceramidases, including ACER3, do not catalyze the reverse reaction of ceramidase. As such, we can not evaluate the effect of point mutations on the reverse reaction by ACER3.3. The author described \u201cThis is indicative of an uncompetitive mechanism of inhibition .\u201d; however, Figure 4B requires a Lineweaver\u2013Burk plot or similar plot with multiple concentrations of inhibitor. In addition, IC50 value of TSA is also required.- We have performed ACER3 activity assays in the presence of different concentrations of TSA, which allowed us to include the Lineweaver-Burk plot and IC50 value of TSA and determined its inhibition type (mixed inhibition) via the Lineweaver-Burk plot. Additional edits:1. We revised the format of the manuscript according to the instruction you provided.2. Funding information has been removed from the manuscript.3. Our funding statement is as follows;National Institutes of Health Grants:R01GM130878 (to C.M and L.M.O)P01CA097132 (to Y.A.H and C.M)R01GM062887 (to L.M.O)https://www.nih.gov/grants-fundingURL of National Institutes of Health Grants: C.M.: decision to publish, preparation of the manuscript and supervisionY.A.H: supervision and project administrationL.M.O: supervision and project administration4. We confirmed ORCID for all authors.5. We included captions for all supplementary figures.6. We have added additional references and checked them as suggested.AttachmentResponse to reviewers-Revision-JK.docxSubmitted filename: Click here for additional data file. 4 Jul 20222+-dependent amidasesAlkaline ceramidase catalyzes the hydrolysis of ceramides via a catalytic mechanism shared by ZnPONE-D-22-03848R1Dear Dr. Mao,We\u2019re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.Within one week, you\u2019ll receive an e-mail detailing the required amendments. When these have been addressed, you\u2019ll receive a formal acceptance letter and your manuscript will be scheduled for publication.http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at onepress@plos.org.If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they\u2019ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. 
For more information, please contact Kind regards,Israel SilmanAcademic EditorPLOS ONEAdditional Editor Comments :Reviewers' comments: 23 Aug 2022PONE-D-22-03848R1 2+-dependent amidases Alkaline ceramidase catalyzes the hydrolysis of ceramides via a catalytic mechanism shared by ZnDear Dr. Mao:I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. onepress@plos.org.If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact plosone@plos.org. If we can help with anything else, please email us at Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staffon behalf ofProf. Israel Silman Academic EditorPLOS ONE"} +{"text": "Due to a ship\u2019s extreme motion, there is a risk of injuries and accidents as people may become unbalanced and be injured or fall from the ship. Thus, individuals must adjust their movements when walking in an unstable environment to avoid falling or losing balance. A person\u2019s ability to control their center of mass (COM) during lateral motion is critical to maintaining balance when walking. Dynamic balancing is also crucial to maintain stability while walking. The margin of stability (MOS) is used to define this dynamic balancing. This study aimed to develop a model for predicting balance control and stability in walking on ships by estimating the peak COM excursion and MOS variability using accelerometers. We recruited 30 healthy individuals for this study. During the experiment, participants walked for two minutes at self-selected speeds, and we used a computer-assisted rehabilitation environment (CAREN) system to simulate the roll motion. The proposed prediction models in this study successfully predicted the peak COM excursion and MOS variability. This study may be used to protect and save seafarers or passengers by assessing the risk of balance loss. Recent advances in wearable sensors have enabled gait analysis outside the laboratory. Continuous gait monitoring during free-living activities presents a promising approach to the gait study, investigating the risk of falling in real-world settings. Individual walking characteristics differ from one individual to another, and walking strategies can change depending on the walking environment . WalkingThe human body is less lateral stable when walking ,7,8,9,10Due to ship motion, individuals are subjected to constant perturbations while walking on ships. Since the ship\u2019s length is generally longer than its width, the ship\u2019s movement is usually greater in the roll than in the pitch . For thiThe purpose of this study was to construct a model for predicting balance control and stability in walking on ships by estimating the peak COM excursion and MOS variability. We used the CAREN system during experiments to simulate the roll motion and quantified the peak COM excursion and MOS variability. This study can be used to protect and save seafarers or passengers by determining the risk of falling overboard.A total of 30 healthy individuals were recruited for this study. 
The demographics of the participants are shown in We used a 3D motion capture system with 10 cameras to record the subjects\u2019 movement at 100 Hz for gold standard data. Thirty-seven reflective markers were attached to anatomical landmarks based on the Plug-in Gait full-body model : four maParticipants were asked to walk for two minutes at a self-selected walking speed using the CAREN system with a split-belt treadmill. The simulated roll was tested bilaterally while participants were walking on the CAREN. There were five different conditions: no rolling (NR), 5-, 10-, 15-, and 20-degrees of rolling . Participants performed once for each condition. A safety harness was worn by all participants to avoid accidental falls on the moving platform. For the step event detection and feature extraction methods, the same methods as in our previous works were used ,25. We uFeature selection is a key part of developing predictive models . The feaiy and ijx are the respective outcome and predictors of the ith subject; \u03bb is a non-negative tuning parameter; and \u03b2 is a vector of regression coefficients that needs to be estimated.The least absolute shrinkage and selection operator (LASSO) minimizes the residual sum of squares of a vector of regression coefficients subject to a constraint on the L1-norm of the vector . This teiy and ith subject; \u03bb is a non-negative tuning parameter; L1-norm and L2-norm, respectively:Elastic Net, a combination of ridge regression and LASSO, was proposed in 2005 . When map-value individually and rank the features using the p-values from the F-tests. The F-test is a statistical procedure used when testing the hypothesis that responses were drawn from populations that have the same mean when comparing it with the alternative hypothesis that the means may not be the same in all populations [p-value of the test statistic is small, the corresponding predictor is significant.F-tests are used in the feature selection method to test each predictor\u2019s ulations ,30. If tThe neighborhood component analysis (NCA) proposed by Yang et al. is a nonThe original ReliefF algorithm estimateFor fitting the predictive model, we used a linear regression model and a ridge regression depending on the presence of multicollinearity. If there was multicollinearity among the features selected by each feature selection method, the ridge regression model was used; otherwise, we used the linear regression model. The variance inflation factor (VIF) was used to determine the existence of multicollinearity . A lineaith subject for each response variable.To compare the predictive accuracy for our best models constructed by using the different feature selection methods, the mean absolute error (MAE) as a performance measure was calculated for the test data for each model:The performance of our model was evaluated using the following criteria. First, we split the whole dataset into a ratio of 7 to 3 for training and testing datasets, respectively. The regression coefficients were determined by the training set. These coefficients were then used to predict the COM excursion and MOS variability for the testing set. This process was repeated 100 times using a random selection of training and testing datasets for each iteration. 
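To make the evaluation pipeline above concrete, the following sketch shows one plausible Python/scikit-learn implementation of the repeated 70/30 procedure: LASSO selects features, a variance-inflation-factor check decides between ordinary linear regression and ridge regression, and the MAE is averaged over the random splits. The synthetic data, the regularisation strengths and the VIF cut-off of 10 are illustrative assumptions rather than the authors' actual settings.

```python
# Minimal sketch (not the authors' code) of the repeated 70/30 evaluation:
# LASSO feature selection, a VIF check to choose between linear and ridge
# regression, and the mean absolute error averaged over 100 random splits.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 20))      # 30 subjects x 20 candidate gait features (synthetic)
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=30)  # e.g. peak COM excursion

def vif(feats):
    """Variance inflation factor of each column (1.0 when only one column is present)."""
    if feats.shape[1] < 2:
        return np.ones(feats.shape[1])
    scores = []
    for j in range(feats.shape[1]):
        others = np.delete(feats, j, axis=1)
        r2 = LinearRegression().fit(others, feats[:, j]).score(others, feats[:, j])
        scores.append(1.0 / max(1.0 - r2, 1e-12))
    return np.array(scores)

maes = []
for it in range(100):                                   # 100 random train/test partitions
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=it)
    selector = Lasso(alpha=0.05).fit(X_tr, y_tr)        # alpha is an assumed tuning value
    keep = np.flatnonzero(selector.coef_ != 0)
    if keep.size == 0:
        continue
    # Ridge regression if the selected features are collinear (VIF > 10, a common cut-off),
    # otherwise ordinary least squares, mirroring the model choice described in the text.
    model = Ridge(alpha=1.0) if np.any(vif(X_tr[:, keep]) > 10) else LinearRegression()
    model.fit(X_tr[:, keep], y_tr)
    maes.append(mean_absolute_error(y_te, model.predict(X_te[:, keep])))

print(f"mean MAE over {len(maes)} splits: {np.mean(maes):.3f}")
```

Swapping the Lasso step for Elastic Net, an F-test ranking, NCA or ReliefF reproduces the other selection pipelines compared in the study.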
In all comparisons, each model for the different selection methods was executed using the same set of random selections, ensuring that the validation dataset was the same across models.t-test was used to determine the mean difference between the actual values and the predicted values for peak COM excursion and MOS variability. We assumed that if there was no significant difference between the actual and predicted values, the prediction results were reliable. In addition to the p-value approach, we also examined meaningful change in the peak COM excursion and MOS variability so we could compare our prediction results to the actual values using an effect size. Effect size quantifies a difference between two means based on distribution so that the results of different measures can be compared. The effect size is calculated using Cohen\u2019s d, which is defined as [u1 and u2, respectively, are the means of actual values and predicted values and p < 0.05.A paired fined as :(6)d= while there were significant differences between the actual and predicted values for the MOS variability (p = 0.0318). For determining the practical significance, we also computed the effect size using Cohen\u2019s d. The effect sizes for the peak COM excursion and MOS variability were 0.0053 and 0.0111, respectively.To validate our prediction model, we performed a paired p = 0.0527), which means our prediction result for the peak COM excursion was reliable. On the other hand, there was a statistically significant difference in MOS variability (p = 0.0318) at the 95% significance level, but we can say that there was no difference at the 90% significance level. In addition, we used an effect size to determine the practical significance of our research results. The effect size indicates the importance of the difference between groups. Statistical significance using the p-value can be deceptive as it is affected by the large sample size [This study demonstrates that wearable sensors can be used to predict gait stability on a ship in simulated sea conditions. Utilizing the best feature selection method and linear regression models, we developed prediction models for peak COM excursion and MOS variability. Intuitively, the prediction errors were minor, and the adjusted r-squared values of the prediction models for the peak COM excursion and MOS variability look reliable at 0.6789 and 0.7043, respectively . We emplple size . The effFurthermore, the study exhibited the best feature selection method for predicting the peak COM excursion and MOS variability. The results of our research indicated that the LASSO gave the best prediction results with the smallest MAE . The besThere are several limitations to this study. First, the participants are relatively young and healthy individuals and have little experience onboard a ship. Therefore, it is unreasonable to generalize our results to experienced sailors and middle-aged and older cruise ships\u2019 main customers. Nevertheless, our findings are sufficient to predict the walking stability of young and inexperienced trainees or new crew members because they are more likely to lose balance with ship movements than experienced crew members. Second, only the ship\u2019s rolling motion was applied in the experiment. The actual movement of the ship in the sea involves six degrees of freedom, including rolling, pitching, etc. 
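Stepping back to the statistical comparison used above, the paired t-test and the pooled-standard-deviation form of Cohen's d follow standard definitions and can be computed in a few lines; the arrays below are placeholders rather than the study's actual and predicted values.

```python
# Sketch of the actual-vs-predicted comparison: paired t-test plus Cohen's d
# with the pooled standard deviation. The arrays are placeholders, not study data.
import numpy as np
from scipy import stats

actual    = np.array([0.12, 0.15, 0.11, 0.18, 0.14, 0.16])   # e.g. peak COM excursion
predicted = np.array([0.13, 0.14, 0.12, 0.17, 0.15, 0.16])

t_stat, p_value = stats.ttest_rel(actual, predicted)          # paired t-test

def cohens_d(x1, x2):
    """Cohen's d: difference of means divided by the pooled standard deviation."""
    n1, n2 = len(x1), len(x2)
    pooled_var = ((n1 - 1) * np.var(x1, ddof=1) + (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(x1) - np.mean(x2)) / np.sqrt(pooled_var)

print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.4f}")
print(f"Cohen's d    : {cohens_d(actual, predicted):.4f}")
```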
In addition, the actual ship has a rolling motion of more than 20 degrees in bad weather, but only 20 degrees of rolling were tested in our experiment since the CAREN system only supports up to 20 degrees. However, this was the first study to predict walking stability in a sea environment to the best of our knowledge. Therefore, further research is needed for verification by applying our method to ships in real-world sea environments. Lastly, the predictions of COM excursion and MOS variability may be affected by individual differences, such as age, height, weight, BMI, or their balance control ability. In the experimental design of future studies, therefore, these human factors should be taken into account in order to examine individuals\u2019 differences.This study investigated whether typical dynamic stability measures, peak COM excursion, and MOS variability could be predicted in healthy individuals walking in sea environments using wearable sensors. The proposed prediction models in this study successfully predicted the peak COM excursion and MOS variability. We also assessed three feature selection methods for predicting gait stability on a ship at sea by estimating the peak COM excursion and MOS variability. The LASSO resulted in the lowest prediction errors. Our findings can be used to assess the risk of balance loss. Further studies should investigate the validity of these findings when the methods are applied to a real sea environment to prevent falling overboard by detecting the risk of falls."} +{"text": "Hand gesture recognition systems (HGR) based on electromyography signals (EMGs) and inertial measurement unit signals (IMUs) have been studied for different applications in recent years. Most commonly, cutting-edge HGR methods are based on supervised machine learning methods. However, the potential benefits of reinforcement learning (RL) techniques have shown that these techniques could be a viable option for classifying EMGs. Methods based on RL have several advantages such as promising classification performance and online learning from experience. In this work, we developed an HGR system made up of the following stages: pre-processing, feature extraction, classification, and post-processing. For the classification stage, we built an RL-based agent capable of learning to classify and recognize eleven hand gestures\u2014five static and six dynamic\u2014using a deep Q-network (DQN) algorithm based on EMG and IMU information. The proposed system uses a feed-forward artificial neural network (ANN) for the representation of the agent policy. We carried out the same experiments with two different types of sensors to compare their performance, which are the Myo armband sensor and the G-force sensor. We performed experiments using training, validation, and test set distributions, and the results were evaluated for user-specific HGR models. The final accuracy results demonstrated that the best model was able to reach up to In recent years, the use of non-verbal communication techniques has proven useful for creating human\u2013machine interfaces (HMIs). 
In particular, hand gesture recognition (HGR) systems have been used in applications such as sign language recognition, human\u2013machine interfaces, muscle rehabilitation systems, prosthesis design, robotic applications, and augmented reality, among others ,3,4,5,6.Several HGR systems use vision-based methods, for example, Kinect and LeapEMG signals can be modeled as a stochastic process that depends on whether the muscle contraction is static or dynamic. However, to address these problems, machine learning (ML) and deep learning (DL) techniques have been commonly used to classify and recognize EMG signals instead of mathematical models since the latter have high design complexity and performance issues ,14. In pThere have been a few attempts to use RL techniques for HGR and arm movement or hand gesture characterization using sensor-based systems. For example, in , the autWe use our large dataset composed of 85 users with information on 11 different hand gestures (5 static and 6 dynamic gestures) that contain EMG and IMU signals. The data were taken from two different armband sensors, the Myo armband and G-force sensors.We successfully combine the EMG-IMU signals with the deep Q-network (DQN) reinforcement learning algorithm. We propose an agent\u2019s policy representations based on artificial neural networks (ANN).We compare the results of the proposed method using both sensors, the Myo armband and G-force sensors. We also compare the results found in the present work, which uses EMG and IMU signals, with those of a method previously developed on a dataset that used only EMG signals and the Q-learning algorithm.Considering the literature review presented above, the main contributions of the present work are listed below:The rest of this work is organized as follows. In In this section, we present the proposed method for the HGR system based on EMG-IMU signals and RL . As can https://laboratorio-ia.epn.edu.ec/en/resources/dataset/emg-imu-epn-100 accessed on 18 November 2022.In this work, we use EMG-IMU data of 12 different hand gesture categories\u201411 different hand gestures and 1 relax gesture\u2014in which 5 of them are static gestures\u2014wave in, wave out, fist, open, and pinch\u2014and the other 6 are dynamic gestures\u2014up, down, left, right, forward, and backward. The data were collected using the Myo armband\u2014a sensor with 8 channels at a sampling rate of 200 Hz\u2014and the G-force armband\u2014a sensor with 8 channels at a sampling rate of 1 kHz. The proposed dataset consists of 85\u00a0users, of whom 43 are used for training and validation to find the best possible hyperparameter configurations. From this group, 16 users are from the Myo armband sensor data and 27 from the G-force sensor data. On the other hand, 42 users are used for testing to evaluate overfitting and to calculate the final results. From this group, 16 users are from the Myo armband sensor data and 26 from the G-force sensor data. The data of each user in the training set is composed of 180 hand gesture repetitions\u201415 repetitions for each gesture\u2014and the other 180 samples are for validation. This division of samples is similar to the test set. We summarize the dataset distribution for both the training and testing sets in The preprocessing of each EMG sample consisted of using a sliding window on each sample to analyze it separately ,14. In tFeature extraction methods are used to extract relevant and non-redundant features from EMGs and IMUs. 
For this purpose, different domains can be used such as time, frequency, or time-frequency domains. In this work, five different features were extracted in the time domain over each step of the sliding window. The feature extraction functions used were root mean square (RMS), standard deviation (SD), energy (E), mean absolute value (MAV), and absolute envelope (AE), which are typically used to extract features of EMGs ,14. We uThe objective of this stage is to identify the category of a hand gesture using an EMG-IMU signal among a set of categories with which the proposed algorithm was previously trained. In this work, we used an RL algorithm called deep Q-network (DQN), which is made up of a neural network to represent the agent\u2019s policy. In this section, we explain in detail the EMG-IMU signal sequential classification problem that can be modeled as a partially observable finite Markov decision process (POMDP).a taken in the initial state s can be defined asn represents the number of states. The variable We can define the sliding window classification on an EMG-IMU signal sample during the development of a hand gesture as a sequential decision-making problem. In this problem, the actions correspond to the labels of the hand gestures to be inferred, whereas the states are the feature vectors corresponding to the observations of each window of an EMG-IMU sample. In this context, we can learn to estimate the optimal action for each state. For this purpose, we maximized the expected sum of future rewards by performing that action in the given states and then following an optimal policy . Thus, crding to . Typicalrding to . For anyfined asQ\u03c0=Erding to . However vectors . For thi vectors .The Q-Learning algorithm uses Q-values to iteratively improve the behavior of the learning agent. The Q-values are an estimation of the performance of a certain action networks . In the rvations . For a gHere, In this work, we use a deep Q-network (DQN) agent representation, which is composed of an artificial neural network (ANN) as a function approximation method to learn a parameterized value function. Thus, for a given observation network ,26,27. Trding to , there aEquation , which he target .(4)YtDQt is saved in a pool of stored data sample transitions Algorithm 1 DQN with Experience ReplayInitialize action-value function Q with random weightsNInitialize replay memory for episode = 1, M do\u00a0\u00a0\u00a0\u00a0Initialize agent in observation for t = 1, T do\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0With probability \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0otherwise, select \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0store transition \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Sample random mini-batch of transitions \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Perform gradient descent to update \u00a0end for\u00a0\u00a0\u00a0\u00a0end forThe second important consideration is the use of experience replay, which randomly samples the data to remove correlations in the sequences of observations, which accelerates the training of the agent. For this purpose, the tuple quations , with thtions that we use in this work uses DQN the algorithm to learn an optimal policy, which allows an agent to learn to classify and recognize hand gestures from EMG-IMU signals. 
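The five time-domain functions named above (RMS, SD, energy, MAV and absolute envelope) are simple enough to sketch over a sliding window; the NumPy snippet below is illustrative only, with the window length, stride and the peak-of-rectified-signal stand-in for the absolute envelope being assumptions rather than the authors' exact choices.

```python
# Illustrative sliding-window feature extraction for a single EMG or IMU channel.
# Window length, stride and the envelope definition are assumptions, not the paper's settings.
import numpy as np

def window_features(x, win=50, stride=25):
    """Per-window RMS, SD, energy, MAV and an absolute-envelope proxy (peak rectified value)."""
    feats = []
    for start in range(0, len(x) - win + 1, stride):
        w = x[start:start + win]
        feats.append([
            np.sqrt(np.mean(w ** 2)),   # root mean square (RMS)
            np.std(w),                  # standard deviation (SD)
            np.sum(w ** 2),             # energy (E)
            np.mean(np.abs(w)),         # mean absolute value (MAV)
            np.max(np.abs(w)),          # absolute envelope (AE), approximated here
        ])
    return np.asarray(feats)            # shape: (n_windows, 5)

emg = np.random.default_rng(1).normal(size=1000)   # stand-in for one channel sampled at 200 Hz
print(window_features(emg).shape)
```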
A figure that represents the interaction between the DQN agent representation and the proposed environment for the EMG-IMU classification is illustrated in Agent: The agent is made up of the DQN algorithm and an artificial neural network ANN as the policy representation. During training, the agent learns a policy that maximizes the total sum of rewards using the DQN algorithm. The inputs of the neural network are the features extracted from each window of the EMG-IMU signals (observations), and as its output, the network returns the values of the predicted gestures (actions). In this way, the agent learns to classify window observations from EMG-IMU signals. Each EMG-IMU signal sample is considered an independent episode, and each sliding window step is considered an observation during that episode.Observation: The observation Action: An action Environment: The environment is the defined environment within which the agent performs an action to move from one observation to the next, which returns a reward. In this case, we define the environment from the sliding window information\u2014feature vectors and labels\u2014extracted from each EMG-IMU signal and the ground truth (vector of known labels) of the EMG-IMU signal.Reward: The agent receives a positive or negative reward depending on whether during its interaction with the environment it was able to correctly predict a gesture for a given observation. We define two different rewards, one for ranking and one for recognition. An illustration of the rewards that the agent obtains is presented in Once an EMG-IMU sample is processed and the vector of the predicted labels is obtained, we use post-processing to remove false labels and improve the accuracy of the proposed HGR system. There are several ways to perform post-processing such as using filters, majority voting, and heuristics, among others ,16. In tIn this section, we present the validation and testing results for the proposed HGR user-specific method for both the Myo armband and G-force sensors with regard to static and dynamic gestures. First, to find the best possible hyperparameters, we perform a validation procedure, and the best model results found during the validation are presented. Then, we present the final testing results with the previously found best hyperparameters. The validation and testing results for the Myo armband and G-force sensors are analyzed to compare their performance, considering separately static and dynamic gestures. 
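Returning to the agent, observation, action, environment and reward components listed above, each EMG-IMU sample can be wrapped as one episode in which every window feature vector is an observation, every gesture label an action, and the reward is positive or negative depending on agreement with the ground truth. The minimal sketch below illustrates that structure; the +1/-1 reward values, the random stand-in policy and all identifiers are assumptions, not the paper's implementation.

```python
# Compressed sketch of the window-level classification environment described above.
# Reward magnitudes (+1/-1) and the random stand-in policy are assumptions.
import numpy as np

class EMGIMUWindowEnv:
    def __init__(self, window_feats, true_label, n_gestures=12):
        self.obs_seq = window_feats      # (n_windows, n_features) for one EMG-IMU sample
        self.true_label = true_label     # ground-truth gesture of the whole sample
        self.n_gestures = n_gestures
        self.t = 0

    def reset(self):
        self.t = 0
        return self.obs_seq[self.t]

    def step(self, action):
        # Positive reward when the predicted label matches the ground truth, negative otherwise.
        reward = 1.0 if action == self.true_label else -1.0
        self.t += 1
        done = self.t >= len(self.obs_seq)
        next_obs = None if done else self.obs_seq[self.t]
        return next_obs, reward, done

rng = np.random.default_rng(0)
env = EMGIMUWindowEnv(window_feats=rng.normal(size=(24, 40)), true_label=3)
obs, total, done = env.reset(), 0.0, False
while not done:
    action = int(rng.integers(env.n_gestures))   # a trained DQN would take argmax over Q-values here
    obs, reward, done = env.step(action)
    total += reward
print("episode return:", total)
```

A trained DQN agent would simply replace the random action with the argmax over its predicted Q-values for the current observation.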
Finally, we briefly compare the proposed method using the EMG-IMU signals with a similar method that uses only EMG.For the validation results, we trained and tested different user-specific models based on an agent that uses neural networks as policy representations with the DQN algorithm that we presented previously in A training sample illustration of the average reward versus the number of episodes is illustrated in We present the classification and recognition results per user for the Myo armband sensor for static and dynamic gestures in To present the testing results, we performed experiments on the test set based on the best hyperparameters previously found during the validation procedure presented in We also present the confusion matrices that represent the classification results on the test set of the Myo armband sensor for static gestures in We implemented two additional tests for our proposed dataset and method, but the classification stage was based on supervised learning methods such as k-nearest neighbor (KNN) and a convolutional neural network (CNN). We also compared the results found in the present work, which uses EMG and IMU signals, with methods previously developed using the same sensor, with a similar dataset distribution with similar method stages that work with supervised and reinforcement learning ,25. Thesectively . The appAccording to the test results, the best classification accuracies were obtained for static gestures using the Myo armband sensor and were We compared the proposed method that used EMG and IMU signals with respect to other similar works where the same sensor was used with only EMG signals for static gestures. We obtained accuracies of In general, the difference between the results of the validation and testing with regard to the classification and recognition was less than 5%. This difference is small so it can be said that the proposed method is robust and does not suffer from the effects of overfitting for the proposed dataset distribution.The processing time of each window observation was, on average, 33 ms for both sensors. Since this is less than 300 ms, we can consider that both models work in real time for the proposed application.Although the proposed results are encouraging, it is important to mention that in future works we will focus on the convenience and comfort that users experience when using static or dynamic gestures. User preference data can impact the development of HGR architectures so we will study this in depth in future work.In this work, we proposed an HGR system based on the DQN algorithm for the classification of 11 different hand gestures including static and dynamic gestures. We tested and compared the results of two different sensors, the Myo armband and G-force sensors, from which we used the EMG and IMU signals to obtain the feature vectors. The proposed models were validated on 43 users and tested on 42 different users. The best classification accuracy was obtained for the Myo armband sensor, reaching up to"} +{"text": "Anthocharis cardamines . The genome sequence is 360 megabases in span. The majority (99.74%) of the assembly is scaffolded into 31 chromosomal pseudomolecules, with the W and Z sex chromosomes assembled. 
Gene annotation of this assembly on Ensembl has identified 12,477 protein coding genes.We present a genome assembly from an individual female Anthocharis;Anthocharis cardamines (NCBI:txid227532).Eukaryota; Metazoa; Ecdysozoa; Arthropoda; Hexapoda; Insecta; Pterygota; Neoptera; Endopterygota; Lepidoptera; Glossata; Ditrysia; Papilionoidea; Pieridae; Pierinae;Anthocharis cardamines) is a member of the Anthocharidini, a tribe within the Pierinae (britannica (mainland Britain) andhibernica (Ireland and Isle of Man). The English population exhibits a reduced number of chromosomes (n = 30) compared to specimens from continental Europe (n = 31), implying a fusion event since the separation of England from the continent ~7,000 years ago . The speBritain , inhabitBritain .A. cardamines . The resulting annotation includes 28,207 transcribed mRNAs from 12,477 protein-coding and 4,279 non-coding genes. There are 1.82 coding transcripts per gene and 8.41 exons per transcript.The ilAntCard3.1 genome has been annotated using the Ensembl rapid annotation pipeline and a single maleA. cardamines specimen were collected from Carrifran Wildwood, Scotland using a net by Sam Ebdon, Gertjan Bisshop and Konrad Lohse . The samples were identified by Konrad Lohse and were snap-frozen at -80\u00b0C.A single femaleDNA was extracted at the Scientific Operations Core, Wellcome Sanger Institute. The ilAntCard3 sample was weighed and dissected on dry ice. Abdomen tissue was disrupted by manual grinding with a disposable pestle. Fragment size analysis of 0.01\u20130.5 ng of DNA was then performed using an Agilent FemtoPulse. High molecular weight (HMW) DNA was extracted using the Qiagen MagAttract HMW DNA extraction kit. Low molecular weight DNA was removed from a 200-ng aliquot of extracted DNA using 0.8X AMpure XP purification kit prior to 10X Chromium sequencing; a minimum of 50 ng DNA was submitted for 10X sequencing. HMW DNA was sheared into an average fragment size between 12\u201320 kb in a Megaruptor 3 system with speed setting 30. Sheared DNA was purified by solid-phase reversible immobilisation using AMPure PB beads with a 1.8X ratio of beads to sample to remove the shorter fragments and concentrate the DNA sample. The concentration of the sheared and purified DNA was assessed using a Nanodrop spectrophotometer and Qubit Fluorometer and Qubit dsDNA High Sensitivity Assay kit. Fragment size distribution was evaluated by running the sample on the FemtoPulse system.Pacific Biosciences HiFi circular consensus and 10X Genomics read cloud DNA sequencing libraries were constructed according to the manufacturers\u2019 instructions. Sequencing was performed by the Scientific Operations core at the WSI on Pacific Biosciences SEQUEL II (HiFi) and Illumina HiSeq X (10X) instruments. Hi-C data were also generated from whole organism tissue of ilAntCard2 using the Qiagen Hi-C kit and sequenced on an Illumina HiSeq X (10X) instrument.Pretext. The mitochondrial genome was assembled using MitoHiFi (Assembly was carried out with HiCanu . The assAnthocharis cardamines assembly (GCA_905404175.1). Annotation was created primarily through alignment of transcriptomic data to the genome, with gap filling via protein-to-genome alignments of a select set of proteins from UniProt \" reported the first genome assembly for the orange-tip butterfly. The methods for assembly, scaffolding, and annotation are mostly well described and of a high standard. Also, the quality of the assembly is really high. 
I have only two concerns about the manuscript. First, the method for genome polishing with 10x data was not well explained. It is described up to the variant calling phase, but the actual way to polish the genome should be stated. Second, the annotation quality was not confirmed. Running a BUSCO on the annotation might be a help to see how well the genome was annotated.britannica but stating this clearly in the \u201cMethods\u201d section would help the readers to get important information about the actual samples used for this study. I would also like to ask the authors to add information about which subspecies are used for this study. According to the \u201cBackground\u201d section, it seems it\u2019sAre sufficient details of methods and materials provided to allow replication by others?PartlyIs the rationale for creating the dataset(s) clearly described?YesAre the datasets clearly presented in a useable and accessible format?YesAre the protocols appropriate and is the work technically sound?YesReviewer Expertise:Insect genome assembly and annotation. Evolutionary Ecology. Butterfly genomics.I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. Anthocharis cardamines \" the authors Ebdonet al. report the genome assembly and annotation of an individual orange tip butterfly from Scotland.In the article \"The genome sequence of the orange-tip butterfly, This effort appears to be an addition to the Darwin Tree of Life project and so it's sole focus on the generation of a high quality genome assembly is understandable. The methods used are generally considered the best available at present, and the resulting assembly and annotation conform to expectations. A brief exploration of the raw data revealed no anomalies to this reviewer.Are sufficient details of methods and materials provided to allow replication by others?YesIs the rationale for creating the dataset(s) clearly described?YesAre the datasets clearly presented in a useable and accessible format?YesAre the protocols appropriate and is the work technically sound?YesReviewer Expertise:Genome assembly and annotation. Butterfly, fish, and bird genomics. Evolutionary genomics.I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard."} +{"text": "Synanthedon vespiformis . The genome sequence is 287 megabases in span. Of the assembly, 100% is scaffolded into 31 chromosomal pseudomolecules with the Z sex chromosome assembled. The complete mitochondrial genome was also assembled and is 17.3 kilobases in length.We present a genome assembly from an individual male Synanthedon;Synanthedon vespiformis (NCBI:txid1660703).Eukaryota; Metazoa; Ecdysozoa; Arthropoda; Hexapoda; Insecta; Pterygota; Neoptera; Endopterygota; Lepidoptera; Glossata; Ditrysia; Sesioidea; Sesiidae; Sesiinae; Synanthedonini;Synanthedon vespiformis , is a day flying, clearwing moth belonging to the family Sesiidae. Adults exhibit wasp mimicry, as with many others in the Sesiidae family. It is widespread in the Palearctic, including England and eastern Wales, but its range does not extend to the north of the British Isles or Ireland . Loci could be evaluated for known traits, such as wasp mimicry.dSalix . The lar Turkey .S. vespS. 
vespiformis collected from Wytham Woods, Berkshire, UK was collected using a light trap from Wytham Woods, Berkshire, UK by Douglas Boyes (University of Oxford). The specimen was identified by Douglas Boyes and snap-frozen on dry ice. A single maleDNA was extracted at the Tree of Life laboratory, Wellcome Sanger Institute. The ilSynVesp1 sample was weighed and dissected on dry ice with tissue set aside for Hi-C sequencing. Abdomen tissue was disrupted using a Nippi Powermasher fitted with a BioMasher pestle. Fragment size analysis of 0.01\u20130.5 ng of DNA was then performed using an Agilent FemtoPulse. High molecular weight (HMW) DNA was extracted using the Qiagen MagAttract HMW DNA extraction kit. Low molecular weight DNA was removed from a 200-ng aliquot of extracted DNA using 0.8X AMpure XP purification kit prior to 10X Chromium sequencing; a minimum of 50 ng DNA was submitted for 10X sequencing. HMW DNA was sheared into an average fragment size between 12\u201320 kb in a Megaruptor 3 system with speed setting 30. Sheared DNA was purified by solid-phase reversible immobilisation using AMPure PB beads with a 1.8X ratio of beads to sample to remove the shorter fragments and concentrate the DNA sample. The concentration of the sheared and purified DNA was assessed using a Nanodrop spectrophotometer and Qubit Fluorometer and Qubit dsDNA High Sensitivity Assay kit. Fragment size distribution was evaluated by running the sample on the FemtoPulse system.Pacific Biosciences HiFi circular consensus and 10X Genomics Chromium read cloud sequencing libraries were constructed according to the manufacturers\u2019 instructions. Sequencing was performed by the Scientific Operations core at the Wellcome Sanger Institute on Pacific Biosciences SEQUEL II (HiFi) and Illumina NovaSeq 6000 (10X) instruments. Hi-C data were generated in the Tree of Life laboratory from head and thorax tissue of ilSynVesp1 using the Arima v2 kit and sequenced on a NovaSeq 6000 instrument.Pretext. The mitochondrial genome was assembled using MitoHiFi , and in some circumstances other Darwin Tree of Life collaborators.The materials that have contributed to this genome note have been supplied by a Darwin Tree of Life Partner. The submission of materials by a Darwin Tree of Life Partner is subject to the Synanthedon vespiformis.\u00a0 Presented features support that the genome reference is relatively complete in terms of gene content.\u00a0 The manuscript was written clearly. I have minor concerns for the authors to improve the manuscript.The method for genome assembly was not fully described. Detailed commands and parameters should be provided. More importantly, how the genome assembly was manually corrected should be described rather than linking to a previous manuscript.\u00a0It's unclear to me how Chr.Z was assigned. Based on the sequencing ratio between male and female, or the cross-species synteny? Related methods and evidence should be provided. I note that the reported genome reference has not been annotated. 
A fully-annotated reference genome would be much more helpful for the community.The authors present a chromosomal-level genome assembly for the yellow-legged clearwing\u00a0Are sufficient details of methods and materials provided to allow replication by others?PartlyIs the rationale for creating the dataset(s) clearly described?YesAre the datasets clearly presented in a useable and accessible format?YesAre the protocols appropriate and is the work technically sound?YesReviewer Expertise:genomicsI confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. synanthedon vespiformis \u00a0begins with an introduction describing the natural history of the species, followed by technical aspects of genome sequencing performed to obtain high-quality genome chromosome scale assembly as in other data notes from the Darwinian tree of life project. The introduction is well written and will be of interest for a broad audience, whereas the detailed protocol and figures will be understood by specialists in genomics. The genome obtained using a combination of the best available approaches is of high quality and at chromosomal scale. The values assessing quality are all very good. This genome will provide a strong reference for future studies provided that some effort is put on the study of the genetic of mimicry that makes this and related lepidopteran species look like wasps which is quite fascinating. It will also be of interest to check for horizontal transfers of genes from their parasites, that have been shown to impact many lepidopteran genomes,.The genome report of the Lepidoptera \u00a0, which is probably one of the reasons why pheromone lures have been developed. On the other hand, the adult has probably a beneficial role as pollinator, as most Lepidoptera, as suggested by many pictures showing them foraging on flowers, but I did not find any reference on this potential role. There might be also sexual dimorphism interesting to mention but again I did not find a reference on that topic. By searching on the internet to find information potentially missing I did not find much to add: the scientific literature is scarce on this species. The only criticism I can provide is on the rather obscure sentence on whether the species distribution may have decreased in the UK since 1970 since as stated the sampling method now using pheromones is completely different from that used for old records. The reference cited for this sentence is not \"open access\" which prevent a better understanding of this point. Concerning the pest status one can also mention that it appears to be a pest also for stone fruits in Isra\u00eblAre sufficient details of methods and materials provided to allow replication by others?YesIs the rationale for creating the dataset(s) clearly described?YesAre the datasets clearly presented in a useable and accessible format?YesAre the protocols appropriate and is the work technically sound?YesReviewer Expertise:wasp genomics, insect viruses, horizontal transferI confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard."} +{"text": "Harmothoe impar; Annelida; Polychaeta; Phyllodocida; Polynoidae). The genome sequence is 1,512.3 megabases in span. Most of the assembly is scaffolded into 18 chromosomal pseudomolecules. 
The mitochondrial genome has also been assembled and is 15.37 kilobases in length.We present a genome assembly from an individual scale worm, Harmothoe;Harmothoe impar (NCBI:txid46595).Eukaryota; Metazoa; Eumetazoa; Bilateria; Protostomia; Spiralia; Lophotrochozoa; Annelida; Polychaeta; Errantia; Phyllodocida; Polynoidae;Harmothoe impar is often found under stones and inside kelp holdfasts, both intertidally and down to depths of 45 m. As with other polynoids, it is predatory, consuming small crustaceans and other small invertebrates, as well as being aggressive to the extent of cannibalisation to other members of the same species , using the metazoa_odb10 reference set (n = 954).The estimated Quality Value (QV) of the final assembly is 58.6 withhttps://links.tol.sanger.ac.uk/species/46595.Metadata for specimens, spectral estimates, sequencing runs, contaminants and pre-curation assembly statistics can be found atHarmothoe impar specimens were collected from Batten Bay South, Mount Batten, Devon, UK on 2021-03-03. The specimens were taken by hand from underneath cobbles by Patrick Adkins and Rob Mrowicki . Patrick Adkins and Joanna Harley identified the specimens, which were then preserved in liquid nitrogen.DNA was extracted at the Tree of Life laboratory, Wellcome Sanger Institute (WSI). A sample of anterior body taken from specimen number wpHarImpa5 was weighed and dissected on dry ice with tissue set aside for Hi-C sequencing. Anterior body tissue was cryogenically disrupted to a fine powder using a Covaris cryoPREP Automated Dry Pulveriser, receiving multiple impacts. High molecular weight (HMW) DNA was extracted using the Qiagen MagAttract HMW DNA extraction kit. HMW DNA was sheared into an average fragment size of 12\u201320 kb in a Megaruptor 3 system with speed setting 30. Sheared DNA was purified by solid-phase reversible immobilisation using AMPure PB beads with a 1.8X ratio of beads to sample to remove the shorter fragments and concentrate the DNA sample. The concentration of the sheared and purified DNA was assessed using a Nanodrop spectrophotometer and Qubit Fluorometer and Qubit dsDNA High Sensitivity Assay kit. Fragment size distribution was evaluated by running the sample on the FemtoPulse system.RNA was extracted from anterior body tissue of wpHarImpa6 in the Tree of Life Laboratory at the WSI using TRIzol, according to the manufacturer\u2019s instructions. RNA was eluted in 50 \u03bcl RNAse-free water and its concentration assessed using a Nanodrop spectrophotometer and Qubit Fluorometer using the Qubit RNA Broad-Range (BR) Assay kit. Analysis of the integrity of the RNA was done using Agilent RNA 6000 Pico Kit and Eukaryotic Total RNA assay.Pacific Biosciences HiFi circular consensus DNA sequencing libraries were constructed according to the manufacturers\u2019 instructions. Poly(A) RNA-Seq libraries were constructed using the NEB Ultra II RNA Library Prep kit. DNA and RNA sequencing were performed by the Scientific Operations core at the WSI on Pacific Biosciences SEQUEL II (HiFi) and Illumina NovaSeq 6000 (RNA-Seq) instruments. Hi-C data were also generated from specimen wpHarImpa3 using the Arimav2 kit and sequenced on the Illumina NovaSeq 6000 instrument.Assembly was carried out with Hifiasm . 
The mitk-mer completeness and QV consensus quality values were calculated in Merqury Each transfer of samples is further undertaken according to a Research Collaboration Agreement or Material Transfer Agreement entered into by the Darwin Tree of Life Partner, Genome Research Limited (operating as the Wellcome Sanger Institute), and in some circumstances other Darwin Tree of Life collaborators. This genome note reports the assembly and annotation of the genome of the polynoid annelid worm Harmothoe impar, a typical dweller of the UK coasts. Within annelids, the Errantia clade is under-represented in genomic efforts. Therefore, the sequencing of a polynoid will benefit the study and conservation of this central member of the marine macro-fauna of the UK, as well as evolutionary and comparative genomic analyses. The methods are gold-standard, and the reported assembly is consistent with the high-quality datasets produced by the DToL.Are sufficient details of methods and materials provided to allow replication by others?YesIs the rationale for creating the dataset(s) clearly described?YesAre the datasets clearly presented in a useable and accessible format?YesAre the protocols appropriate and is the work technically sound?YesReviewer Expertise:EvoDevo, comparative genomics, annelidsI confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. This data paper reports the sequencing and assembly of the first species of Polynoidae available to date (out of ~1000 described species worldwide). It belongs to a typically shallow-water genus of the family, which comprises ~150 species. In European waters, over 50 species are known in the family. Other closely-related families are present and are also considered for sequencing by DTOL initiative.\u00a0 This resource has already been used by a research team which studied adaptation in a deep-sea hydrothermal species from the Indian Ocean .Are sufficient details of methods and materials provided to allow replication by others?YesIs the rationale for creating the dataset(s) clearly described?YesAre the datasets clearly presented in a useable and accessible format?YesAre the protocols appropriate and is the work technically sound?YesReviewer Expertise:Evolution of adaptations to extreme environments, taxonomy of annelids.I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard."} +{"text": "Anopheles moucheti , from a wild population in Cameroon. The genome sequence is 271 megabases in span. The majority of the assembly is scaffolded into three chromosomal pseudomolecules with the X sex chromosome assembled. The complete mitochondrial genome was also assembled and is 15.5 kilobases in length.We present a genome assembly from an individual male Anopheles moucheti; Evans, 1925 (NCBI txid:186751).Animalia; Arthropoda; Insecta; Diptera; Culicidae; Anophelinae; Anopheles;Anopheles moucheti moucheti (hereafterAn. moucheti) has the greatest geographical distribution within theMoucheti group, which includesAn. bervoetsi andAn. nigeriensis. These species can only be distinguished by slight morphological characters and / or using molecular tools.An. 
moucheti is widely distributed across forested areas of West and Central Africa, and the persistence of this species is linked to its ability to lay eggs in lentic streams and rivers where its larvae develop preferentially and predominate over other malaria vectors.An. moucheti females usually feed indoors and exhibit a preference for feeding on humans although the species can also be found in forested areas of Gabon, away from any human presence.An. moucheti is among the most important vectors of human malaria in the equatorial rain forest of Africa. It is present in a large number of countries, from Guinea to Kenya, and it has been found naturally infected withPlasmodium parasites across Central African countries, sustaining year-round transmission. For instance, it can show infective biting rates up to 300 bites per person per year in villages located along slow moving rivers. Moreover,An. moucheti has been recently incriminated for contributing to the origin of malaria in humans, being a potential bridge vector for malaria parasites from non-human primates to humans.An. moucheti has been focused on differentiation between members of the group. To this end, molecular markers have been developed that are based on the sequence variation in the ribosomal internal transcribed spacer ITS1 region. Microsatellite loci have also been developed showing a low level of genetic differentiation among populations in Cameroon. Several polymorphic inversions have been detected inAn. moucheti, although their role in local adaptation or speciation is not yet understood. Therefore, genomic characterization of this neglected but major malaria vector,An. moucheti, is critical for developing sustainable control strategies for malaria elimination in Central Africa.The main genetic work carried out onAnopheles moucheti, was sequenced as part of the Anopheles Reference Genomes Project (PRJEB51690). Here we present a chromosomally complete genome sequence forAnopheles moucheti, based on a single male specimen from Ebogo, Cameroon.The genome of the African malaria mosquito,Anopheles moucheti collected from Ebogo, Cameroon . A total of 41-fold coverage in Pacific Biosciences single-molecule HiFi long reads (N50 13.064 kb) were generated. Primary assembly contigs were scaffolded with chromosome conformation Hi-C data from a sibling male mosquito. Manual assembly curation corrected 48 missing joins or misjoins and removed 3 haplotypic duplications, reducing the scaffold number by 7.8% and reducing assembly size by 1.8%.The genome was sequenced from a single maleAn. gambiae PEST strain assembly AgamP3 by Sandrine Eveline Nsango. A single male idAnoMoucSN-F20_07 was used for Pacific BioSciences, a single sibling male idAnoMoucSN-F20_09 was used for Arima Hi-C. and then extracted using the Qiagen MagAttract HMW DNA extraction kit with two minor modifications including halving volumes recommended by the manufacturer due to small sample size and running two elution steps of 100 \u03bcl each to increase DNA yield. The quality of the DNA was evaluated using an Agilent FemtoPulse to ensure that most DNA molecules were larger than 30 kb, and preferably > 100 kb. In general, single mosquito extractions ranged in total estimated DNA yield from 200 ng to 800 ng, with an average yield of 500 ng. Low molecular weight DNA was removed using 0.8X AMpure XP purification. DNA was sheared to an average fragment size of 12\u201320 kb using a Diagenode Megaruptor 3 at speeds ranging from 27 to 30. 
Sheared DNA was purified using AMPure PB beads with a 1.8X ratio of beads to sample to remove the shorter fragments and concentrate the DNA sample. The concentration and quality of the sheared and purified DNA was assessed using a Nanodrop spectrophotometer and Qubit Fluorometer with the Qubit dsDNA High Sensitivity Assay kit. Fragment size distribution was evaluated by running the sheared and cleaned sample on the FemtoPulse system once more. The median DNA fragment size forAnopheles mosquitoes was 15 kb and the median yield of sheared DNA was 200 ng, with samples typically losing about 50% of the original estimated DNA quantity through the process of shearing and purification.For high molecular weight (HMW) DNA extraction one whole insect (idAnoMoucSN-F20_07) was disrupted by manual grinding with a blue plastic pestle in Qiagen MagAttract lysis bufferFor Hi-C data generation, a sibling male mosquito specimen (idAnoMoucSN-F20_09) was used as input material for the Arima V2 Kit according to the manufacturer\u2019s instructions for animal tissue. This approach of using a sibling was followed in order to enable all material from a single specimen to contribute to the PacBio data generation given we were not always able to meet the minimum suggested guidance of starting with > 300 ng of HMW DNA from a specimen. Samples proceeded to the Illumina library prep stage even if they were suboptimal (too little tissue) going into the Arima reaction. in due course, RNA was extracted from separate whole unrelated lab-reared male and wild-caught female insect specimens using TRIzol, according to the manufacturer\u2019s instructions. RNA was then eluted in 50 \u03bcl RNAse-free water, and its concentration was assessed using a Nanodrop spectrophotometer and Qubit Fluorometer using the Qubit RNA Broad-Range (BR) Assay kit. Analysis of the integrity of the RNA was done using Agilent RNA 6000 Pico Kit and Eukaryotic Total RNA assay. Samples were not always ideally preserved for RNA, so qualities varied but all were sequenced anyway.To assist with annotation, which will be made available through VectorBaseWe prepared libraries as per the PacBio procedure and checklist for SMRTbell Libraries using Express TPK 2.0 with low DNA input. Every library was barcoded to support multiplexing. Final library yields ranged from 20 ng to 100 ng, representing only about 25% of the input sheared DNA. Libraries from two specimens were typically multiplexed on a single 8M SMRT Cell. Sequencing complexes were made using Sequencing Primer v4 and DNA Polymerase v2.0. Sequencing was carried out on the Sequel II system with 24-hour run time and 2-hour pre-extension. For Hi-C data generation, following the Arima Hi-C V2 reaction, samples were processed through Library Preparation using a NEB Next Ultra II DNA Library Prep Kit and sequenced aiming for 100x depth. RNA libraries were created using the directional NEB Ultra II stranded kit. Sequencing was performed by the Scientific Operations core at the Wellcome Sanger Institute on Pacific Biosciences SEQUEL II (HiFi), Illumina NovaSeq 6000 (10X and Hi-C), or Illumina HiSeq 4000 (RNAseq) instruments.; haplotypic duplications were identified and removed with purge_dups. The assembly was then scaffolded with Hi-C data using SALSA2. The assembly was checked for contamination as described previously. Manual curation was performed using gEVAL, HiGlass and Pretext. The mitochondrial genome was assembled using MitoHiFi, which performs annotation using MitoFinder. 
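Summary figures quoted in these genome notes, such as the 41-fold HiFi coverage and the 13.064 kb read N50, follow standard definitions and can be recomputed from read or scaffold lengths. The short Python sketch below uses placeholder lengths; only the 271 Mb assembly span is taken from the text.

```python
# Standard sequencing summary statistics (total yield, fold coverage, N50) computed
# from placeholder read lengths; only the 271 Mb assembly span is taken from the text.
import numpy as np

def n50(lengths):
    """Length L such that reads (or scaffolds) of length >= L cover at least half the total bases."""
    s = np.sort(np.asarray(lengths))[::-1]
    csum = np.cumsum(s)
    return int(s[np.searchsorted(csum, csum[-1] / 2)])

read_lengths = np.random.default_rng(0).integers(5_000, 25_000, size=800_000)  # placeholder HiFi reads
genome_size = 271_000_000                                                       # assembly span (bp)

total_bases = int(read_lengths.sum())
print(f"total yield : {total_bases / 1e9:.1f} Gb")
print(f"coverage    : {total_bases / genome_size:.0f}x")
print(f"read N50    : {n50(read_lengths):,} bp")
```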
The genetic resources accessed and utilised under this project were done so in accordance with the UK ABS legislation (Nagoya Protocol (Compliance) (Amendment) (EU Exit) Regulations 2018 (SI 2018/1393)) and the national ABS legislation within the country of origin, where applicable.
This data note by Nsango et al. presents the methods and tools used to generate the first Anopheles moucheti genome assembly. Though An. moucheti is an important vector of human malaria, preferentially feeding indoors and on humans, and potentially acting as a bridge vector from non-human primates to humans, the authors explain that genomic characterization of the species remains limited. Here, they used an individual male An. moucheti reared from a wild-caught gravid female for PacBio sequencing and a single sibling male for Arima Hi-C to construct three chromosomal pseudomolecules, including the X sex chromosome, and the mitochondrial genome. They describe their methods from sample acquisition through genome assembly in appropriate detail to be replicated by others.
In Figure 3, the figure caption may be more descriptive if labeled "cumulative scaffold length" instead of "cumulative chromosome length", as only the dark blue "Arthropoda" line represents the chromosomes. Additionally, the figure legend truncates the "no-hit" line at around 225, which can be confusing as it is expected to end around 340. Moving the figure legend would be ideal. If this is not possible using BlobToolKit, setting the curve origin to "X" could be an improvement, as the "Arthropoda" line would then originate where the "no-hit" line ends.
Are sufficient details of methods and materials provided to allow replication by others?
Yes
Is the rationale for creating the dataset(s) clearly described?
Yes
Are the datasets clearly presented in a useable and accessible format?
Yes
Are the protocols appropriate and is the work technically sound?
Yes
Reviewer Expertise: Vector Biology, Genomics
We confirm that we have read this submission and believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
The article by Nsango and co-authors presents a new genome assembly obtained from an individual male Anopheles moucheti. The genome was sequenced using the PacBio HiFi procedure and scaffolded into three chromosomal pseudomolecules using Hi-C data obtained from a single sibling male mosquito. The complete mitochondrial genome was also assembled. The rationale for creating the Anopheles moucheti genome is clearly described, as the species is an anthropophilic malaria vector and a potential bridge vector for malaria parasites from non-human primates to humans. The manuscript provides sufficient details of methods and materials to allow replication by others.
It would be useful if chromosomes or chromosomal arms (if possible) were labeled in Figure 4. Have centromeric regions been captured either by reads or contigs? There are high-frequency interaction spots outside the main diagonal in Figure 4. These could be indicative of polymorphic inversions. If this study indeed assembled both haplotypes, then it is possible to test whether they differ in structural arrangements; this could be the case if the sequenced individual was a heterozygote.
Also, building a Hi-C contact map for the second haplotype could answer this question.
Are sufficient details of methods and materials provided to allow replication by others?
Yes
Is the rationale for creating the dataset(s) clearly described?
Yes
Are the datasets clearly presented in a useable and accessible format?
Partly
Are the protocols appropriate and is the work technically sound?
Yes
Reviewer Expertise: Mosquito genetics, cytogenetics, genomics
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard." \ No newline at end of file